Science.gov

Sample records for image compression recommendation

  1. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  3. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
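
    As a rough illustration of the transform-plus-bit-plane structure described above (not the actual CCSDS 122.0-B-1 codec, which specifies a particular 9/7 wavelet, three decomposition levels and a segmented bit-plane entropy coder), the following Python sketch applies a single-level 2D Haar transform and emits the quantized coefficient bits from most to least significant, which is what makes the stream progressive and truncatable.

        # Minimal sketch of the "wavelet transform + progressive bit-plane coding" idea.
        # NOT the CCSDS 122.0-B-1 algorithm; it only illustrates the structure.
        import numpy as np

        def haar2d(img):
            """Single-level 2D Haar transform (image dimensions assumed even)."""
            a = img.astype(np.float64)
            lo = (a[:, 0::2] + a[:, 1::2]) / 2.0          # row averages
            hi = (a[:, 0::2] - a[:, 1::2]) / 2.0          # row details
            ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
            lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
            hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
            hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
            return ll, lh, hl, hh

        def bit_planes_msb_first(coeffs, nbits=12):
            """Yield the sign plane, then magnitude bit planes from MSB to LSB."""
            mag = np.abs(np.round(coeffs)).astype(np.int64)
            yield (coeffs < 0).astype(np.uint8)
            for b in range(nbits - 1, -1, -1):
                yield ((mag >> b) & 1).astype(np.uint8)

        img = np.random.randint(0, 256, (64, 64))          # stand-in for a space image
        ll, lh, hl, hh = haar2d(img)
        subbands = np.vstack([np.hstack([ll, lh]), np.hstack([hl, hh])])
        planes = list(bit_planes_msb_first(subbands))
        # Truncating this plane-ordered stream (after entropy coding) is how a user
        # trades reconstruction fidelity against compressed data volume.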

  4. Interactive decoding for the CCSDS recommendation for image data compression

    NASA Astrophysics Data System (ADS)

    García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier

    2007-10-01

    In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers in the encoded stream. Second we compare the compression performance of our technique to JPEG2000.

  5. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
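
    The following toy Python sketch shows the range/domain block search at the heart of fractal coders of this kind: each small range block is approximated by a contractive affine map of a larger, downsampled domain block, and only the map parameters are stored. The block sizes, search strategy and intensity map below are illustrative assumptions, not the specific scheme of this paper.

        # Toy fractal-style encoder: for each 4x4 "range" block, find the 8x8 "domain"
        # block that best predicts it after downsampling and an affine intensity map
        # r ~ s*d + o.  Real coders add block symmetries, quantize (s, o), and decode
        # by iterating the stored maps to their fixed point.
        import numpy as np

        def encode(img, rsize=4):
            h, w = img.shape
            dsize = 2 * rsize
            domains = []
            for y in range(0, h - dsize + 1, dsize):
                for x in range(0, w - dsize + 1, dsize):
                    d = img[y:y + dsize, x:x + dsize].astype(float)
                    d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))   # downsample
                    domains.append(((y, x), d))
            code = []
            for y in range(0, h, rsize):
                for x in range(0, w, rsize):
                    r = img[y:y + rsize, x:x + rsize].astype(float)
                    best = None
                    for (dy, dx), d in domains:
                        s = np.cov(d.ravel(), r.ravel())[0, 1] / (d.var() + 1e-9)
                        s = np.clip(s, -1.0, 1.0)          # keep the map contractive
                        o = r.mean() - s * d.mean()
                        err = ((s * d + o - r) ** 2).sum()
                        if best is None or err < best[0]:
                            best = (err, dy, dx, s, o)
                    code.append(best[1:])                   # a few numbers per block
            return code

        codebook = encode(np.random.randint(0, 256, (32, 32)))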

  6. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-05-20

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume.
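
    As a purely numerical illustration of the double random phase encoding step mentioned above, the sketch below simulates the classic Fourier-plane DRPE with two random phase masks; the paper's optical Mach-Zehnder implementation and the single-pixel compressive holography stage are not reproduced here.

        # Numerical sketch of double random phase encoding (DRPE) only; the optical
        # interferometric implementation and the compressive holography stage that
        # the paper adds are not simulated here.
        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((128, 128))                            # stand-in object image

        phase1 = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane mask (key 1)
        phase2 = np.exp(2j * np.pi * rng.random(img.shape))     # Fourier-plane mask (key 2)

        encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)   # noise-like field

        # Decryption with the correct keys inverts each step.
        decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
        assert np.allclose(np.abs(decrypted), img)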

  7. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  8. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  9. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
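
    A minimal sketch of the "filled edge array" idea follows: edge pixels keep their image values and every other pixel is obtained by solving Laplace's equation with those values as boundary conditions. The patent uses a multi-grid solver; plain Jacobi iteration and a crude gradient edge map are used below purely for brevity.

        # Edge pixels are held fixed; all other pixels relax toward the average of
        # their four neighbours, i.e. a discrete solution of Laplace's equation.
        import numpy as np

        def fill_from_edges(values, edge_mask, iters=500):
            filled = np.where(edge_mask, values, values.mean()).astype(float)
            for _ in range(iters):
                avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                              np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
                filled = np.where(edge_mask, values, avg)   # keep edge pixels fixed
            return filled

        img = np.random.randint(0, 256, (64, 64)).astype(float)
        edges = np.abs(np.gradient(img)[0]) > 40              # crude stand-in edge map
        filled = fill_from_edges(img, edges)
        difference = img - filled    # smooth residual, coded separately from the edge file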

  10. Compressed image deblurring

    NASA Astrophysics Data System (ADS)

    Xu, Yuquan; Hu, Xiyuan; Peng, Silong

    2014-03-01

    We propose an algorithm to recover the latent image from a blurred and compressed input. Although many image deblurring algorithms have been proposed in recent years, most previous methods do not consider the compression effect in blurry images. In practice it is unavoidable that most real-world images are compressed. This compression introduces a typical kind of noise, blocking artifacts, which does not follow the Gaussian distribution assumed in most existing algorithms. Without properly handling this non-Gaussian noise, the recovered image will suffer severe artifacts. Inspired by the statistical properties of the compression error, we model the non-Gaussian noise with a hyper-Laplacian distribution. Based on this model, an efficient nonblind image deblurring algorithm based on a variable-splitting technique is proposed to solve the resulting nonconvex minimization problem. Finally, we also present an effective blind image deblurring algorithm which can deal with compressed and blurred images efficiently. Extensive experiments compared with state-of-the-art nonblind and blind deblurring methods demonstrate the effectiveness of the proposed method.

  11. The CCSDS Data Compression Recommendations: Development and Status

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Moury, Gilles; Armbruster, Philippe; Day, John H. (Technical Monitor)

    2002-01-01

    The Consultative Committee for Space Data Systems (CCSDS) has been engaged in recommending data compression standards for space applications. The first effort focused on a lossless scheme that was adopted in 1997. Since then, space missions benefiting from this recommendation range from deep space probes to near-Earth observatories. The cost savings result not only from reduced onboard storage and reduced bandwidth, but also from reduced ground archiving of mission data. In many instances, this recommendation also enables more science data to be collected for added scientific value. Since 1998, the compression sub-panel of CCSDS has been investigating lossy image compression schemes and is currently working towards a common solution for a single recommendation. The recommendation will fulfill the requirements for remote sensing conducted on space platforms.

  12. Image compression for dermatology

    NASA Astrophysics Data System (ADS)

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.

    1990-07-01

    Color 35mm photographic slides are commonly used in dermatology for education and patient records. An electronic storage and retrieval system for digitized slide images may offer advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper contains an evaluation of one such compression method that uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.

  13. Image data compression investigation

    NASA Technical Reports Server (NTRS)

    Myrie, Carlos

    1989-01-01

    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
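
    The difference between the two techniques named above is easy to see in code: PCM stores the raw samples, while DPCM stores prediction errors relative to the previous sample, which are smaller and compress better. The minimal previous-pixel DPCM below is a generic sketch, not the specific coder used in the study.

        # Minimal previous-sample DPCM along one image row.
        import numpy as np

        def dpcm_encode(row):
            residuals = np.empty(len(row), dtype=np.int16)
            prev = 0
            for i, x in enumerate(row):
                residuals[i] = int(x) - prev     # prediction error (what DPCM stores)
                prev = int(x)
            return residuals

        def dpcm_decode(residuals):
            return np.cumsum(residuals).astype(np.uint8)

        row = np.array([100, 102, 101, 105, 110, 110], dtype=np.uint8)   # PCM samples
        assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)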

  14. Space-time compressive imaging.

    PubMed

    Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A

    2012-02-01

    Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.

  15. Progressive transmission and compression of images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.

  16. Image Compression Devices

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Rice algorithm is a "lossless" compression algorithm; it takes an image or other data that has been broken down into short strings of digital data, then processes each string mathematically to reduce the amount of memory required to store or transmit them. It is particularly useful in medical, scientific or engineering applications where all data must be preserved. Originally developed at Jet Propulsion Laboratory, the technology is marketed by Advanced Hardware Architectures, a company started by a former employee of the NASA Microelectronics Research Center.
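
    For reference, the core of the Rice family of codes is a Golomb-Rice code: a non-negative integer n is written as the unary quotient n >> k followed by the low k bits. The sketch below shows only that building block; the full lossless algorithm adds a predictor, block-adaptive selection of k, and special low-entropy options.

        # Golomb-Rice coding of a non-negative integer with parameter k.
        def rice_encode(n, k):
            q = n >> k
            return "1" * q + "0" + (format(n & ((1 << k) - 1), f"0{k}b") if k else "")

        def rice_decode(bits, k):
            q = bits.index("0")                      # unary-coded quotient
            r = int(bits[q + 1:q + 1 + k], 2) if k else 0
            return (q << k) | r

        for n in range(64):
            assert rice_decode(rice_encode(n, 3), 3) == n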

  17. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
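
    As a generic example of the sparsity-exploiting reconstruction algorithms the article surveys, the sketch below runs ISTA (iterative shrinkage-thresholding) on a small synthetic problem with fewer measurements than unknowns; it is not a clinical CT or MR pipeline.

        # ISTA for the l1-regularized least squares problem min_x ||Ax - y||^2 + lam*||x||_1.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 256, 96, 8
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse signal
        A = rng.normal(size=(m, n)) / np.sqrt(m)      # undersampled measurement operator
        y = A @ x_true                                # fewer measurements than unknowns

        def ista(A, y, lam=0.05, iters=500):
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
            for _ in range(iters):
                g = x - step * A.T @ (A @ x - y)      # gradient step on the data term
                x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
            return x

        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))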

  18. Compressive sensing in medical imaging.

    PubMed

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  19. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images in gray scale showed very promising results.
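
    A minimal version of this three-step scenario, for a single method (JPEG via Pillow) and PSNR as the IQ metric, might look like the sketch below; the file name, the quality sweep and the quadratic regression form are illustrative assumptions rather than the paper's actual models.

        # Step 1: compress at several quality settings and measure IQ (PSNR here).
        # Step 2: fit a regression of IQ versus the compression parameter.
        # Step 3: invert the regression to pick the parameter for a requested IQ.
        import io
        import numpy as np
        from PIL import Image

        def psnr(a, b):
            mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
            return 10 * np.log10(255.0 ** 2 / mse)

        img = Image.open("thermal_sample.png").convert("L")    # hypothetical test image
        ref = np.asarray(img)

        qualities = np.arange(10, 96, 5)
        scores = []
        for q in qualities:
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=int(q))
            buf.seek(0)
            scores.append(psnr(ref, np.asarray(Image.open(buf))))

        coeffs = np.polyfit(qualities, scores, deg=2)          # regression model
        target_psnr = 40.0
        best_q = int(qualities[np.argmin(np.abs(np.polyval(coeffs, qualities) - target_psnr))])
        print("use JPEG quality", best_q, "for roughly", target_psnr, "dB PSNR")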

  20. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
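
    The vector quantization component mentioned above can be sketched serially as follows: 4x4 image blocks are clustered with k-means into a small codebook and each block is replaced by the index of its nearest codeword. The parallel MPP mapping itself is not shown, and the block size and codebook size are illustrative.

        # Serial vector quantization of 4x4 blocks against a k-means codebook.
        import numpy as np

        def blocks(img, b=4):
            h, w = img.shape
            return (img.reshape(h // b, b, w // b, b)
                       .swapaxes(1, 2).reshape(-1, b * b).astype(float))

        def kmeans(data, k=16, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            codebook = data[rng.choice(len(data), k, replace=False)]
            for _ in range(iters):
                d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)
                for j in range(k):
                    if np.any(labels == j):
                        codebook[j] = data[labels == j].mean(0)
            return codebook, labels

        img = np.random.randint(0, 256, (64, 64))
        codebook, indices = kmeans(blocks(img))
        # Transmitting one small codebook index per block instead of 16 pixel values
        # is where the compression comes from.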

  1. Compressive Sensing for Quantum Imaging

    NASA Astrophysics Data System (ADS)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon-counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup N²/log N for N-dimensional entanglement over the standard raster scanning technique

  2. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to Earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together, these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  4. Compressive passive millimeter wave imager

    DOEpatents

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
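
    The sampling idea in this patent record can be illustrated numerically: measure only a subset of Hadamard projections of a scene and reconstruct from them. The zero-filled inverse used below is the simplest possible reconstruction; a practical compressive scheme would use a sparsity-regularized solver.

        # Simulated compressive Hadamard acquisition of an 8x8 (flattened) scene.
        import numpy as np
        from scipy.linalg import hadamard

        n = 64
        H = hadamard(n)                           # +/-1 Hadamard patterns (mask rows)
        scene = np.zeros(n)
        scene[10:20] = 1.0                        # stand-in object

        rng = np.random.default_rng(0)
        keep = rng.choice(n, size=n // 4, replace=False)    # only 25% of acquisitions
        measurements = H[keep] @ scene                       # one number per mask setting

        coeffs = np.zeros(n)
        coeffs[keep] = measurements
        estimate = H.T @ coeffs / n               # zero-filled inverse Hadamard transform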

  5. Compressive Imaging via Approximate Message Passing

    DTIC Science & Technology

    2015-09-04

    We propose novel compressive imaging algorithms that employ approximate message passing (AMP), an iterative signal estimation algorithm. Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.

  6. Image compression for functional imaging

    NASA Astrophysics Data System (ADS)

    Feng, Dagan D.; Li, Xianjin; Siu, Wan-Chi

    1997-04-01

    Functional imaging has been playing an important role in modern biomedical research and clinical diagnosis, providing internal biochemical information about the human body that was previously not available. However, in a routine dynamic study with a typical medical functional imaging system, such as positron emission tomography (PET), it is easy to acquire nearly 1000 images for just one patient in one study. Such a large number of images places a considerable burden on computer storage space, data processing and transmission time. In this paper, we present the theory and principles for minimizing the number of image frames in dynamic biomedical functional imaging. We show that the minimum number of image frames required is equal to the number of identifiable model parameters, and that the quality of the physiological parameter estimation based on this minimum number of image frames can be kept at a comparable level. As a result of our study, the required image storage space can be reduced by more than 80 percent.

  7. A Compressed Terahertz Imaging Method

    NASA Astrophysics Data System (ADS)

    Zhang, Man; Pan, Rui; Xiong, Wei; He, Ting; Shen, Jing-Ling

    2012-10-01

    A compressed terahertz imaging method using a terahertz time-domain spectroscopy system (THz-TDSS) is suggested and demonstrated. In the method, a parallel THz wave with a beam diameter of 4 cm from a usual THz-TDSS is used and a square-shaped 2D echelon is placed in front of the imaged object. We confirm both in simulation and in experiment that only one terahertz time-domain spectrum is needed to image the object. The image information is obtained from the compressed THz signal by deconvolution signal processing, and therefore the whole imaging time is greatly reduced in comparison with some other pulsed THz imaging methods. The present method will hopefully be used in real-time imaging.

  8. Satellite image compression using wavelet

    NASA Astrophysics Data System (ADS)

    Santoso, Alb. Joko; Soesianto, F.; Dwiandiyanto, B. Yudi

    2010-02-01

    Image data is a combination of information and redundancy: the information is the part of the data that must be preserved because it carries the meaning and purpose of the data, while the redundancy is the part that can be reduced, compressed, or eliminated. The problems that arise are related to the fact that image data consumes a lot of memory. This paper compares 31 wavelet functions by looking at their impact on PSNR, compression ratio, and bits per pixel (bpp), and at the influence of the decomposition level on PSNR and compression ratio. Based on the testing performed, the Haar wavelet has the advantage of a relatively higher PSNR compared with other wavelets; its compression ratio is relatively better than that of other types of wavelets, and its bits per pixel is relatively better as well.

  9. Imaging of venous compression syndromes

    PubMed Central

    Ganguli, Suvranu; Ghoshhajra, Brian B.; Gupta, Rajiv; Prabhakar, Anand M.

    2016-01-01

    Venous compression syndromes are a unique group of disorders characterized by anatomical extrinsic venous compression, typically in young and otherwise healthy individuals. While uncommon, they may cause serious complications including pain, swelling, deep venous thrombosis (DVT), pulmonary embolism, and post-thrombotic syndrome. The major disease entities are May-Thurner syndrome (MTS), variant iliac vein compression syndrome (IVCS), venous thoracic outlet syndrome (VTOS)/Paget-Schroetter syndrome, nutcracker syndrome (NCS), and popliteal venous compression (PVC). In this article, we review the key clinical features, multimodality imaging findings, and treatment options of these disorders. Emphasis is placed on the growing role of noninvasive imaging options such as magnetic resonance venography (MRV) in facilitating early and accurate diagnosis and tailored intervention. PMID:28123973

  10. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  11. Terahertz wavelength encoding compressive imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Wang, Xinke; Zhang, Yan

    2016-11-01

    Terahertz (THz) compressive imaging can obtain a two-dimensional image with a single or linear detector, which can overcome the bottleneck caused by the lack of two-dimensional THz detector arrays. In this presentation, we propose a method to obtain two-dimensional images using a linear detector. A plano-convex cylindrical lens is employed to perform a Fourier transform and to encode one-dimensional information of an object into wavelengths. After recording, the amplitude and phase information for different frequencies at each pixel of the line detector are extracted, and the two-dimensional image of the object can be reconstructed. Numerical simulation demonstrates the validity of the proposed method.

  12. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
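
    The brief above does not define the activity estimator or the coder, so the sketch below assumes a simple estimator (the sum of absolute first differences within a segment) and splits the per-line bit budget N across segments in proportion to it; it is only meant to make the allocation step concrete.

        # Assumed activity estimator: sum of absolute first differences per segment.
        import numpy as np

        def allocate_bits(line, n_segments=8, budget_bits=512):
            segments = np.array_split(line.astype(int), n_segments)
            activity = np.array([np.abs(np.diff(s)).sum() + 1 for s in segments])
            share = activity / activity.sum()
            return np.round(share * budget_bits).astype(int)   # bits allotted per segment

        line = np.random.randint(0, 256, 512)
        print(allocate_bits(line))   # busy segments get more bits; quiet ones can be truncated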

  14. Study on Huber fractal image compression.

    PubMed

    Jeng, Jyh-Horng; Tseng, Chun-Chieh; Hsieh, Jer-Guang

    2009-05-01

    In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of the fractal image compression. When the original image is corrupted by noises, we argue that the fractal image compression scheme should be insensitive to those noises presented in the corrupted image. This leads to a new concept of robust fractal image compression. The proposed HFIC is one of our attempts toward the design of robust fractal image compression. The main disadvantage of HFIC is the high computational cost. To overcome this drawback, particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image. Also, the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.

  15. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor employs novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable comparable statistics to detection based on uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating the plume detection on compressed hyperspectral images is presented.

  16. [Realization of DICOM medical image compression technology].

    PubMed

    Wang, Chenxi; Wang, Quan; Ren, Haiping

    2013-05-01

    This paper introduces an implementation method for DICOM medical image compression. The image part of a DICOM file is extracted and converted to BMP format, and the non-image information in the DICOM file is stored as text. When the final JPEG-standard image and the non-image information are re-encapsulated into a DICOM-format image, compression of the medical image is realized, which benefits image storage and transmission.
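
    An illustrative version of the described workflow, assuming the pydicom and Pillow libraries (the abstract does not say which tools were actually used), is sketched below: the pixel data are exported as BMP and JPEG, and the non-image header elements are dumped to a text file.

        # Assumed tooling: pydicom for parsing, Pillow for the BMP/JPEG output.
        import numpy as np
        import pydicom
        from PIL import Image

        ds = pydicom.dcmread("input.dcm")                       # hypothetical input file

        pixels = ds.pixel_array.astype(float)
        span = max(pixels.max() - pixels.min(), 1.0)
        pixels = (255 * (pixels - pixels.min()) / span).astype(np.uint8)
        Image.fromarray(pixels).save("image.bmp")               # extracted image part
        Image.fromarray(pixels).save("image.jpg", quality=90)   # JPEG-compressed image part

        with open("non_image.txt", "w") as f:                   # non-image information
            for elem in ds:
                if elem.keyword != "PixelData":
                    f.write(f"{elem.tag} {elem.keyword}: {elem.value}\n")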

  17. Correlation and image compression for limited-bandwidth CCD.

    SciTech Connect

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  18. Progressive Transmission and Compression of Images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.

  19. Psychophysical rating of image compression techniques

    NASA Technical Reports Server (NTRS)

    Stein, Charles S.; Hitchner, Lewis E.; Watson, Andrew B.

    1989-01-01

    Image compression schemes abound with little work which compares their bit-rate performance based on subjective fidelity measures. Statistical measures of image fidelity, such as squared error measures, do not necessarily correspond to subjective measures of image fidelity. Most previous comparisons of compression techniques have been based on these statistical measures. A psychophysical method has been used to estimate, for a number of compression techniques, a threshold bit-rate yielding a criterion level of performance in discriminating original and compressed images. The compression techniques studied include block truncation, Laplacian pyramid, block discrete cosine transform, with and without a human visual system scaling, and cortex transform coders.

  1. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  2. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
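
    The role of the quantization matrix in such a DCT-based scheme can be seen in the generic sketch below; the flat, frequency-weighted matrix used here is only a placeholder, since the perceptually optimized, image-adapted matrix is precisely the subject of the invention.

        # Generic 8x8 block-DCT quantization step with a placeholder matrix Q.
        import numpy as np
        from scipy.fft import dctn, idctn

        Q = np.full((8, 8), 16.0)
        Q += np.add.outer(np.arange(8), np.arange(8)) * 4.0   # coarser at high frequencies

        def compress_block(block, Q):
            coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
            return np.round(coeffs / Q)          # quantization: this is where bits are saved

        def decompress_block(qcoeffs, Q):
            return idctn(qcoeffs * Q, norm="ortho") + 128.0

        block = np.random.randint(0, 256, (8, 8))
        reconstructed = decompress_block(compress_block(block, Q), Q)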

  3. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they occupy more storage space on our computers and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied to networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is described; it is necessary for realizing image compression because of the wide use of this technology. Second, a deeper understanding of the DCT is developed through Matlab, the process of image compression based on the DCT, and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. It is true that the DCT is not the only algorithm for image compression; I am sure there will be more algorithms that give compressed images high quality. I believe image compression technology will be widely used in networks and communications in the future.
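
    The Huffman coding stage discussed above can be sketched in a few lines; the example below builds a code from symbol frequencies with a heap, which is the generic construction rather than anything specific to this paper.

        # Minimal Huffman code construction; in a JPEG-style chain this follows the
        # DCT and quantization stages.
        import heapq
        from collections import Counter

        def huffman_code(symbols):
            heap = [[count, i, {sym: ""}]
                    for i, (sym, count) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                lo[2] = {s: "0" + c for s, c in lo[2].items()}
                hi[2] = {s: "1" + c for s, c in hi[2].items()}
                heapq.heappush(heap, [lo[0] + hi[0], next_id, {**lo[2], **hi[2]}])
                next_id += 1
            return heap[0][2]

        data = "aaaaabbbbccdd"
        codes = huffman_code(data)
        bitstream = "".join(codes[s] for s in data)
        print(codes, len(bitstream), "bits instead of", 8 * len(data))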

  4. Digital image compression in dermatology: format comparison.

    PubMed

    Guarneri, F; Vaccaro, M; Guarneri, C

    2008-09-01

    Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Expert Group (JPEG), and JPEG2000 on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors and TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), JPEG2000 has a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and to encode various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  5. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  6. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. To improve compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In the alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.

  7. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have the common property that they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  8. Methods and limits of digital image compression of retinal images for telemedicine.

    PubMed

    Eikelboom, R H; Yogesan, K; Barry, C J; Constable, I J; Tay-Kearney, M L; Jitskaia, L; House, P H

    2000-06-01

    To investigate image compression of digital retinal images and the effect of various levels of compression on the quality of the images. JPEG (Joint Photographic Experts Group) and Wavelet image compression techniques were applied in five different levels to 11 eyes with subtle retinal abnormalities and to 4 normal eyes. Image quality was assessed by four different methods: calculation of the root mean square (RMS) error between the original and compressed image, determining the level of arteriole branching, identification of retinal abnormalities by experienced observers, and a subjective assessment of overall image quality. To verify the techniques used and findings, a second set of retinal images was assessed by calculation of RMS error and overall image quality. Plots and tabulations of the data as a function of the final image size showed that when the original image size of 1.5 MB was reduced to 29 KB using JPEG compression, there was no serious degradation in quality. The smallest Wavelet compressed images in this study (15 KB) were generally still of acceptable quality. For situations where digital image transmission time and costs should be minimized, Wavelet image compression to 15 KB is recommended, although there is a slight cost of computational time. Where computational time should be minimized, and to remain compatible with other imaging systems, the use of JPEG compression to 29 KB is an excellent alternative.

  9. Image Compression: Making Multimedia Publishing a Reality.

    ERIC Educational Resources Information Center

    Anson, Louisa

    1993-01-01

    Describes the new Fractal Transform technology, a method of compressing digital images to represent images as seen by the mind's eye. The International Organization for Standardization (ISO) standards for compressed image formats are discussed in relationship to Fractal Transform, and it is compared with Discrete Cosine Transform. Thirteen figures…

  11. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  12. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  13. [Statistical study of the wavelet-based lossy medical image compression technique].

    PubMed

    Puniene, Jūrate; Navickas, Ramūnas; Punys, Vytenis; Jurkevicius, Renaldas

    2002-01-01

    Medical digital images have informational redundancy. Both the amount of memory for image storage and their transmission time could be reduced if image compression techniques are applied. The techniques are divided into two groups: lossless (compression ratio does not exceed 3 times) and lossy ones. Compression ratio of lossy techniques depends on visibility of distortions. It is a variable parameter and it can exceed 20 times. A compression study was performed to evaluate the compression schemes, which were based on the wavelet transform. The goal was to develop a set of recommendations for an acceptable compression ratio for different medical image modalities: ultrasound cardiac images and X-ray angiographic images. The acceptable image quality after compression was evaluated by physicians. Statistical analysis of the evaluation results was used to form a set of recommendations.

  14. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix that determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
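
    The quantization step described here can be illustrated on a generic 8x8 DCT block quantized by a quantization matrix. The sketch below is a minimal example of that generic mechanism only; it does not implement Watson's perceptual masking model, and the function names and the use of SciPy's DCT are assumptions.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT of an 8x8 pixel block (orthonormal)."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, norm='ortho', axis=0), norm='ortho', axis=1)

def quantize_block(block, q_matrix):
    """Quantize the DCT coefficients of one 8x8 block by a quantization matrix."""
    coeffs = dct2(block.astype(np.float64) - 128.0)   # level shift as in JPEG
    return np.round(coeffs / q_matrix).astype(np.int32)

def dequantize_block(q_coeffs, q_matrix):
    """Rescale the quantized coefficients and reconstruct the pixel block."""
    return idct2(q_coeffs.astype(np.float64) * q_matrix) + 128.0
```

    In a perceptually tuned codec such as the one described, q_matrix would be derived from luminance and contrast masking rather than taken as a fixed table.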

  15. Image compression based on GPU encoding

    NASA Astrophysics Data System (ADS)

    Bai, Zhaofeng; Qiu, Yuehong

    2015-07-01

    With the rapid development of digital technology, the volume of data in both static images and dynamic video has increased greatly. It is important to reduce this redundant data in order to store or transmit the information more efficiently, so research on image compression has become more and more important. Using the GPU to achieve higher compression ratios has advantages for interactive remote visualization, and compared with the CPU, the GPU can be a good way to accelerate image compression. Currently, the NVIDIA GPU has evolved into its eighth generation and increasingly dominates the high-powered general-purpose computing field. This paper explains how images are encoded on the GPU. Some experimental results are also presented.

  16. Image compression algorithm using wavelet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory

    2016-09-01

    Within the framework of multi-resolution analysis, a study of an image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of image quality on the compression ratio, and the variation of the compression level of the studied images has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm thus improves the processing of signals and images.
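
    For readers unfamiliar with the transform underlying this study, the following is a minimal sketch of a single-level 2-D Haar decomposition; it is not the authors' full multi-resolution codec, and the function name and normalization are illustrative.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar wavelet transform.

    Returns the four subbands (LL, LH, HL, HH) of an image whose
    height and width are assumed to be even.
    """
    x = img.astype(np.float64)
    # Transform rows: averages (low-pass) and differences (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Transform columns of each half to form the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

    Compression then comes from coarsely quantizing or discarding small coefficients in the detail subbands; repeating the step on the LL subband yields the multi-resolution pyramid.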

  17. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  18. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.

  19. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  20. The impact of image information on compressibility and degradation in medical image compression

    SciTech Connect

    Fidler, Ales; Skaleric, Uros; Likar, Bostjan

    2006-08-15

    The aim of the study was to demonstrate and critically discuss the influence of image information on compressibility and image degradation. The influence of image information on image compression was demonstrated on the axial computed tomography images of a head. The standard Joint Photographic Expert Group (JPEG) and JPEG 2000 compression methods were used in compression ratio (CR) and in quality factor (QF) compression modes. Image information was estimated by calculating image entropy, while the effects of image compression were evaluated quantitatively, by file size reduction and by local and global mean square error (MSE), and qualitatively, by visual perception of distortion in high and low contrast test patterns. In QF compression mode, a strong correlation between image entropy and file size was found for JPEG (r=0.87, p<0.001) and JPEG 2000 (r=0.84, p<0.001), while corresponding local MSE was constant (4.54) or nearly constant (2.36-2.37), respectively. For JPEG 2000 CR compression mode, CR was nearly constant (1:25), while local MSE varied considerably (2.26 and 10.09). The obtained qualitative and quantitative results clearly demonstrate that image degradation highly depends on image information, which indicates that the degree of image degradation cannot be guaranteed in CR but only in QF compression mode. CR is therefore not a measure of choice for expressing the degree of image degradation in medical image compression. Moreover, even when using QF compression modes, objective evaluation, and comparison of the compression methods within and between studies is often not possible due to the lack of standardization of compression quality scales.
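
    The image-information measure used above can be illustrated by a first-order Shannon entropy estimate over the pixel histogram. The sketch below assumes an 8-bit grayscale image; the study's exact entropy definition may differ.

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """First-order Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty histogram bins
    return float(-np.sum(p * np.log2(p)))
```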

  1. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.

  2. Determining optimal medical image compression: psychometric and image distortion analysis

    PubMed Central

    2012-01-01

    Background: Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. Methods: To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. Results: When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75% of the image distortion occurring between Q = 50 and Q = 1. Conclusion: It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original. PMID:22849336

  3. Determining optimal medical image compression: psychometric and image distortion analysis.

    PubMed

    Flint, Alexander C

    2012-07-31

    Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original.
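
    A minimal sketch of the kind of Q-value sweep with subtraction analysis described in this and the preceding record is shown below, using Pillow for JPEG encoding; the chosen Q values and the mean-absolute-difference metric are illustrative rather than the authors' exact protocol.

```python
import io
import numpy as np
from PIL import Image

def distortion_vs_quality(img: Image.Image, q_values=(5, 25, 50, 75, 95)):
    """Compress a grayscale image at several JPEG Q values and measure the
    mean absolute pixel difference from the original (subtraction analysis)."""
    original = np.asarray(img.convert("L"), dtype=np.float64)
    results = {}
    for q in q_values:
        buf = io.BytesIO()
        img.convert("L").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf), dtype=np.float64)
        results[q] = float(np.mean(np.abs(original - decoded)))
    return results
```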

  4. An algorithm for compression of bilevel images.

    PubMed

    Reavy, M D; Boncelet, C G

    2001-01-01

    This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
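
    The 12-bit context mechanism described above can be sketched as follows: previously coded neighbouring pixels of the bilevel image are packed into a 12-bit index that selects an adaptive estimate of p(1). The causal template and the counting-based update below are illustrative assumptions; the actual BACIC template and update rule are not specified in this record.

```python
import numpy as np

# Illustrative 12-pixel causal template: offsets (dy, dx) of previously coded
# neighbours relative to the current pixel under a row-major raster scan.
TEMPLATE = [(-2, -1), (-2, 0), (-2, 1),
            (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
            (0, -4), (0, -3), (0, -2), (0, -1)]

def context_index(img: np.ndarray, y: int, x: int) -> int:
    """Pack the 12 template pixels into a 12-bit context index (0..4095)."""
    ctx = 0
    for dy, dx in TEMPLATE:
        yy, xx = y + dy, x + dx
        bit = img[yy, xx] if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1] else 0
        ctx = (ctx << 1) | int(bit)
    return ctx

def adaptive_p1(img: np.ndarray) -> np.ndarray:
    """Adaptively estimate p(1) per context while raster-scanning a bilevel image."""
    ones = np.ones(4096)            # Laplace-style initialisation
    total = np.full(4096, 2.0)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ctx = context_index(img, y, x)
            p1 = ones[ctx] / total[ctx]   # estimate fed to the arithmetic coder
            ones[ctx] += img[y, x]
            total[ctx] += 1.0
    return ones / total
```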

  5. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  6. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  7. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  8. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.

  9. Lossless wavelet compression on medical image

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Clinical systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods both for lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include different methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream, and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
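
    As a concrete illustration of the lifting idea mentioned above, the following is a minimal sketch of an integer-to-integer Haar (S-transform) lifting step; the paper's actual filters are not specified here, so this shows only the simplest reversible instance.

```python
import numpy as np

def haar_lift_forward(signal: np.ndarray):
    """Integer-to-integer Haar (S-transform) via lifting.

    Splits a 1-D integer signal of even length into approximation and
    detail coefficients; exactly invertible with integer arithmetic.
    """
    even = signal[0::2].astype(np.int64)
    odd = signal[1::2].astype(np.int64)
    detail = odd - even                 # predict step
    approx = even + detail // 2         # update step (floor division by 2)
    return approx, detail

def haar_lift_inverse(approx: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Perfectly reconstruct the original integer signal."""
    even = approx - detail // 2
    odd = detail + even
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```

    Because every step uses integer arithmetic, the inverse reproduces the input exactly, which is what makes such a transform usable for lossless coding.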

  10. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  11. Image quality, compression and segmentation in medicine.

    PubMed

    Morgan, Pam; Frankish, Clive

    2002-12-01

    This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.

  12. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.
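
    A minimal sketch of the plug-and-play ADMM structure described above follows, with the compression-decompression operator simplified to the identity and a Gaussian smoother standing in for a state-of-the-art denoiser; the parameter values and function name are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_deblock(y, rho=0.5, sigma=1.0, iters=20):
    """Plug-and-Play ADMM sketch for compression-artifact reduction.

    Approximately solves min_x 0.5*||x - y||^2 + lambda*g(x), where the prior
    g is handled implicitly by a denoiser (here a Gaussian filter stand-in).
    """
    x = y.astype(np.float64).copy()
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(iters):
        # x-update: closed form for the (simplified) quadratic data term.
        x = (y + rho * (v - u)) / (1.0 + rho)
        # v-update: a denoising step plays the role of the proximal operator.
        v = gaussian_filter(x + u, sigma=sigma)
        # Dual variable update.
        u = u + x - v
    return x
```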

  13. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
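
    The abstract does not define double delta coding in detail, so the sketch below shows one common interpretation (differencing applied twice along a scan line) purely to illustrate why correlation between adjacent picture elements shrinks the residuals that the source coder must represent.

```python
import numpy as np

def delta_encode(x: np.ndarray) -> np.ndarray:
    """Keep the first sample; replace the rest by differences to the left neighbour."""
    d = x.astype(np.int64).copy()
    d[1:] = np.diff(x.astype(np.int64))
    return d

def double_delta_encode(line: np.ndarray) -> np.ndarray:
    """Two differencing passes; correlated pixels yield small residuals."""
    return delta_encode(delta_encode(line))

def double_delta_decode(code: np.ndarray) -> np.ndarray:
    """Invert both differencing passes with two cumulative sums."""
    return np.cumsum(np.cumsum(code))
```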

  14. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress hyperspectral images. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-performance TMS320DM642 Digital Signal Processor (DSP) to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm running on the DSP is much faster than the algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  15. Novel wavelet coder for color image compression

    NASA Astrophysics Data System (ADS)

    Wang, Houng-Jyh M.; Kuo, C.-C. Jay

    1997-10-01

    A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled depending on the bandwidth requirement of image transmission. At low bit rates, MTWC can avoid the blocking artifacts of JPEG, resulting in better reconstructed image quality. A subband decision scheme is developed based on rate-distortion theory to enhance the image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in significant and refinement maps. Experimental results are given to demonstrate the superior performance of the proposed new algorithm in its high reconstructed quality for color and gray-level image compression and its low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and performs faster than most existing state-of-the-art wavelet coders.

  16. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  17. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  18. Locally Adaptive Perceptual Compression for Color Images

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Cheng; Chou, Chun-Hsien

    The main idea in perceptual image compression is to remove the perceptual redundancy for representing images at the lowest possible bit rate without introducing perceivable distortion. A certain amount of perceptual redundancy is inherent in the color image since human eyes are not perfect sensors for discriminating small differences in color signals. Effectively exploiting the perceptual redundancy will help to improve the coding efficiency of compressing color images. In this paper, a locally adaptive perceptual compression scheme for color images is proposed. The scheme is based on the design of an adaptive quantizer for compressing color images with the nearly lossless visual quality at a low bit rate. An effective way to achieve the nearly lossless visual quality is to shape the quantization error as a part of perceptual redundancy while compressing color images. This method is to control the adaptive quantization stage by the perceptual redundancy of the color image. In this paper, the perceptual redundancy in the form of the noise detection threshold associated with each coefficient in each subband of three color components of the color image is derived based on the finding of perceptually indistinguishable regions of color stimuli in the uniform color space and various masking effects of human visual perception. The quantizer step size for the target coefficient in each color component is adaptively adjusted by the associated noise detection threshold to make sure that the resulting quantization error is not perceivable. Simulation results show that the compression performance of the proposed scheme using the adaptively coefficient-wise quantization is better than that using the band-wise quantization. The nearly lossless visual quality of the reconstructed image can be achieved by the proposed scheme at lower entropy.

  19. Issues in multiview autostereoscopic image compression

    NASA Astrophysics Data System (ADS)

    Shah, Druti; Dodgson, Neil A.

    2001-06-01

    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
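
    The predictor comparison described above can be illustrated with a minimal sketch that contrasts an intra-view (left-neighbour) predictor with an inter-view (co-located pixel) predictor by residual energy; the specific predictors and error measures tested in the paper are not reproduced here.

```python
import numpy as np

def residual_energy(view: np.ndarray, reference: np.ndarray) -> dict:
    """Compare two simple DPCM predictors for one view of a multi-view set.

    'intra' predicts each pixel from its left neighbour within the view;
    'inter' predicts it from the co-located pixel in an adjacent view.
    A lower mean squared residual means better prediction (cheaper to code).
    """
    v = view.astype(np.float64)
    r = reference.astype(np.float64)
    intra_residual = v[:, 1:] - v[:, :-1]   # left-neighbour prediction
    inter_residual = v - r                  # co-located pixel in adjacent view
    return {"intra": float(np.mean(intra_residual ** 2)),
            "inter": float(np.mean(inter_residual ** 2))}
```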

  20. Lossless compression of synthetic aperture radar images

    SciTech Connect

    Ives, R.W.; Magotra, N.; Mandyam, G.D.

    1996-02-01

    Synthetic Aperture Radar (SAR) has been proven an effective sensor in a wide variety of applications. Many of these uses require transmission and/or processing of the image data in a lossless manner. With the current state of SAR technology, the amount of data contained in a single image may be massive, whether the application requires the entire complex image or magnitude data only. In either case, some type of compression may be required to losslessly transmit this data in a given bandwidth or store it in a reasonable volume. This paper provides the results of applying several lossless compression schemes to SAR imagery.

  1. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  2. Classification and compression of digital newspaper images

    NASA Astrophysics Data System (ADS)

    Jiang, Wey-Wen C.; Meadows, H. E.

    1993-10-01

    An improved scheme for newspaper block segmentation and classification is described. The newspaper image is first segmented into blocks using three passes of a run-length smoothing algorithm. Blocks may have any shape and need not be non-overlapped rectangles. The height H between the top-line and base-line of lower case letters, and the number of pixels that have values differing from their four neighboring pixels, are measured for simple and reliable block classification. Blocks of different types are compressed based on their own characteristics. Unlike conventional methods, halftone image blocks are treated differently from black and white graphic blocks for better compression. A lossless compression scheme for halftoned images is proposed. Reconstruction of gray-tones from halftone images employing information of both smooth and edgy areas is presented.

  3. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.

  4. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, allowing the resolution of the images to be improved and the interpretable content to be increased. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, allowing the data redundancy to be reduced by using different sampling patterns. This work presents a compressed HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.

  5. An Analog Processor for Image Compression

    NASA Technical Reports Server (NTRS)

    Tawel, R.

    1992-01-01

    This paper describes a novel analog Vector Array Processor (VAP) that was designed for use in real-time and ultra-low power image compression applications. This custom CMOS processor is based architecturally on the Vector Quantization (VQ) algorithm in image coding, and the hardware implementation fully exploits the inherent parallelism built into the VQ algorithm.

  6. Compressive line sensing underwater imaging system

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Dalgleish, Fraser R.; Caimi, Frank M.; Giddings, Thomas E.; Britton, Walter; Vuorenkoski, Anni K.; Nootz, Gero

    2014-05-01

    Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. Over the last few years, the authors have conducted extensive research on CS-based active electro-optical imaging in a scattering medium, such as the underwater environment. This paper proposes a compressive line sensing underwater imaging system that is more compatible with conventional underwater survey operations. This new imaging system builds on our frame-based CS underwater laser imager concept, which is more advantageous for hover capable platforms. We contrast features of CS underwater imaging with those of traditional underwater electro-optical imaging and highlight some advantages of the CS approach. Simulation and initial underwater validation test results are also presented.

  7. Multidimensional imaging using compressive Fresnel holography.

    PubMed

    Horisaki, Ryoichi; Tanida, Jun; Stern, Adrian; Javidi, Bahram

    2012-06-01

    We propose a generalized framework for single-shot acquisition of multidimensional objects using compressive Fresnel holography. A multidimensional object with spatial, spectral, and polarimetric information is propagated with the Fresnel diffraction, and the propagated signal of each channel is observed by an image sensor with randomly arranged optical elements for filtering. The object data are reconstructed using a compressive sensing algorithm. This scheme is verified with numerical experiments. The proposed framework can be applied to imageries for spectrum, polarization, and so on.

  8. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  9. Robust object tracking in compressed image sequences

    NASA Astrophysics Data System (ADS)

    Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre

    1998-10-01

    Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement where the computing platform on which the tracking algorithm runs resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed and transmitted to the ground to facilitate real-time downloading of the data over the available bandlimited channels. The tracking algorithm is run on a ground-based computer while tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.

  10. MRC for compression of Blake Archive images

    NASA Astrophysics Data System (ADS)

    Misic, Vladimir; Kraus, Kari; Eaves, Morris; Parker, Kevin J.; Buckley, Robert R.

    2002-11-01

    The William Blake Archive is part of an emerging class of electronic projects in the humanities that may be described as hypermedia archives. It provides structured access to high-quality electronic reproductions of rare and often unique primary source materials, in this case the work of poet and painter William Blake. Due to the extensive high frequency content of Blake's paintings (namely, colored engravings), they are not suitable for very efficient compression that meets both rate and distortion criteria at the same time. Resolving that problem, the authors utilized modified Mixed Raster Content (MRC) compression scheme -- originally developed for compression of compound documents -- for the compression of colored engravings. In this paper, for the first time, we have been able to demonstrate the successful use of the MRC compression approach for the compression of colored, engraved images. Additional, but not less important benefits of the MRC image representation for Blake scholars are presented: because the applied segmentation method can essentially lift the color overlay of an impression, it provides the student of Blake the unique opportunity to recreate the underlying copperplate image, model the artist's coloring process, and study them separately.

  11. Imaging With Nature: Compressive Imaging Using a Multiply Scattering Medium

    PubMed Central

    Liutkus, Antoine; Martina, David; Popoff, Sébastien; Chardon, Gilles; Katz, Ori; Lerosey, Geoffroy; Gigan, Sylvain; Daudet, Laurent; Carron, Igor

    2014-01-01

    The recent theory of compressive sensing leverages upon the structure of signals to acquire them with much fewer measurements than was previously thought necessary, and certainly well below the traditional Nyquist-Shannon sampling rate. However, most implementations developed to take advantage of this framework revolve around controlling the measurements with carefully engineered material or acquisition sequences. Instead, we use the natural randomness of wave propagation through multiply scattering media as an optimal and instantaneous compressive imaging mechanism. Waves reflected from an object are detected after propagation through a well-characterized complex medium. Each local measurement thus contains global information about the object, yielding a purely analog compressive sensing method. We experimentally demonstrate the effectiveness of the proposed approach for optical imaging by using a 300-micrometer thick layer of white paint as the compressive imaging device. Scattering media are thus promising candidates for designing efficient and compact compressive imagers. PMID:25005695

  12. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

    The classical methods of compressed coded aperture (CCA) still require an optical sensor with high resolution, although the sampling rate has broken the Nyquist sampling rate already. A novel architecture of multi-shot compressed coded aperture imaging (MCCAI) using a low resolution optical sensor is proposed, which is mainly based on the 4-f imaging system, combining with two spatial light modulators (SLM) to achieve the compressive imaging goal. The first SLM employed for random convolution is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM worked as a selecting filter is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a couple of observations can be obtained by a low resolution optical sensor easily, and these observations will be combined mathematically and used to reconstruct the high resolution image. That is to say, MCCAI aims at realizing the super resolution imaging with multiple random samplings by using a low resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super resolution reconstruction model to get rid of the artifacts, and alternating direction method of multipliers (ADM) is utilized to solve the optimal result efficiently. The results show that the MCCAI architecture is suitable for super resolution computational imaging using a much lower resolution optical sensor than traditional CCA imaging methods by capturing multiple frame images.

  13. Optical Data Compression in Time Stretch Imaging

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Jalali, Bahram

    2015-01-01

    Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems. PMID:25906244

  14. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  15. Compressive framework for demosaicing of natural images.

    PubMed

    Moghadam, Abdolreza Abdolhosseini; Aghagolzadeh, Mohammad; Kumar, Mrityunjay; Radha, Hayder

    2013-06-01

    Typical consumer digital cameras sense only one out of three color components per image pixel. The problem of demosaicing deals with interpolating those missing color components. In this paper, we present compressive demosaicing (CD), a framework for demosaicing natural images based on the theory of compressed sensing (CS). Given sensed samples of an image, CD employs a CS solver to find the sparse representation of that image under a fixed sparsifying dictionary Ψ. As opposed to state of the art CS-based demosaicing approaches, we consider a clear distinction between the interchannel (color) and interpixel correlations of natural images. Utilizing some well-known facts about the human visual system, those two types of correlations are utilized in a nonseparable format to construct the sparsifying transform Ψ. Our simulation results verify that CD performs better (both visually and in terms of PSNR) than leading demosaicing approaches when applied to the majority of standard test images.

  16. Compressive line sensing underwater imaging system

    NASA Astrophysics Data System (ADS)

    Ouyang, B.; Dalgleish, F. R.; Vuorenkoski, A. K.; Caimi, F. M.; Britton, W.

    2013-05-01

    Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. In recent years, the FAU/HBOI OVOL has conducted extensive research to study the CS based active electro-optical imaging system in the scattering medium such as the underwater environment. The unique features of such system in comparison with the traditional underwater electro-optical imaging system are discussed. Building upon the knowledge from the previous work on a frame based CS underwater laser imager concept, more advantageous for hover-capable platforms such as the Hovering Autonomous Underwater Vehicle (HAUV), a compressive line sensing underwater imaging (CLSUI) system that is more compatible with the conventional underwater platforms where images are formed in whiskbroom fashion, is proposed in this paper. Simulation results are discussed.

  17. Compression technique for plume hyperspectral images

    NASA Astrophysics Data System (ADS)

    Feather, B. K.; Fulkerson, S. A.; Jones, J. H.; Reed, R. A.; Simmons, M. A.; Swann, D. G.; Taylor, W. E.; Bernstein, L. S.

    2005-06-01

    The authors recently developed a hyperspectral image output option for a standardized government code designed to predict missile exhaust plume infrared signatures. Typical predictions cover the 2- to 5-μm wavelength range (2000 to 5000 cm⁻¹) at 5 cm⁻¹ spectral resolution, and as a result the hyperspectral images have several hundred frequency channels. Several hundred hyperspectral plume images are needed to span the full operational envelope of missile altitude, Mach number, and aspect angle. Since the net disk storage space can be as large as 100 GB, a Principal Components Analysis is used to compress the spectral dimension, reducing the volume of data to just a few gigabytes. The principal challenge was to specify a robust default setting for the data compression routine suitable for general users, who are not necessarily specialists in data compression. Specifically, the objective was to provide reasonable data compression efficiency of the hyperspectral imagery while at the same time retaining sufficient accuracy for infrared scene generation and hardware-in-the-loop test applications over a range of sensor bandpasses and scenarios. In addition, although the end users of the code do not usually access the detailed spectral information contained in these hyperspectral images, this information must nevertheless be of sufficient fidelity so that atmospheric transmission losses between the missile plume and the sensor could be reliably computed as a function of range. Several metrics were used to determine how far the plume signature hyperspectral data could be safely compressed while still meeting these end-user requirements.
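
    A minimal sketch of PCA-based compression along the spectral dimension of a hyperspectral cube, in the spirit of the approach described above, is shown below; the component count and the NumPy-based implementation are illustrative and not the government code's defaults.

```python
import numpy as np

def pca_spectral_compress(cube: np.ndarray, n_components: int = 20):
    """Compress a hyperspectral cube (rows, cols, bands) along the spectral axis.

    Returns per-pixel scores, the retained principal components, and the
    spectral mean; together these reconstruct an approximation of the cube.
    """
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(np.float64)
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # Principal components from the SVD of the centred spectra.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]                 # (n_components, bands)
    scores = centered @ components.T               # (pixels, n_components)
    return scores.reshape(rows, cols, n_components), components, mean

def pca_spectral_reconstruct(scores, components, mean):
    """Approximate the original cube from the compressed representation."""
    rows, cols, k = scores.shape
    approx = scores.reshape(-1, k) @ components + mean
    return approx.reshape(rows, cols, components.shape[1])
```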

  18. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback of HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video, and many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, while masking is more consistent on the darker side of the edge.
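
    The subband-weighting idea can be sketched with PyWavelets as below; the wavelet, decomposition depth, and weights are placeholders, not the HVS-derived values used in the paper.

    ```python
    import numpy as np
    import pywt

    def weight_subbands(img, wavelet="bior4.4", level=3, weights=(1.0, 0.7, 0.4)):
        """Scale wavelet detail subbands by perceptual weights before quantization.
        Weights are listed coarse -> fine; finer details are assumed less visible
        and are therefore attenuated more.  The values here are illustrative only."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        out = [coeffs[0]]                                # approximation band untouched
        for (cH, cV, cD), w in zip(coeffs[1:], weights):
            out.append((cH * w, cV * w, cD * w))
        return out

    def unweight_and_reconstruct(coeffs, wavelet="bior4.4", weights=(1.0, 0.7, 0.4)):
        restored = [coeffs[0]]
        for (cH, cV, cD), w in zip(coeffs[1:], weights):
            restored.append((cH / w, cV / w, cD / w))
        return pywt.waverec2(restored, wavelet)

    img = np.random.default_rng(2).random((256, 256))
    weighted = weight_subbands(img)
    recon = unweight_and_reconstruct(weighted)
    print("max round-trip error:", np.abs(recon - img).max())
    ```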

  19. JPEG compression history estimation for color images.

    PubMed

    Neelamani, Ramesh; de Queiroz, Ricardo; Fan, Zhigang; Dash, Sanjeeb; Baraniuk, Richard G

    2006-06-01

    We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings-termed its JPEG compression history (CH)-are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise-ratio) and simultaneously achieve a small file-size.

  20. Directly Estimating Endmembers for Compressive Hyperspectral Images

    PubMed Central

    Xu, Hongwei; Fu, Ning; Qiao, Liyan; Peng, Xiyuan

    2015-01-01

    The large volume of hyperspectral images (HSI) generated creates huge challenges for transmission and storage, making data compression more and more important. Compressive Sensing (CS) is an effective data compression technology that shows that when a signal is sparse in some basis, only a small number of measurements are needed for exact signal recovery. Distributed CS (DCS) takes advantage of both intra- and inter- signal correlations to reduce the number of measurements needed for multichannel-signal recovery. HSI can be observed by the DCS framework to reduce the volume of data significantly. The traditional method for estimating endmembers (spectral information) first recovers the images from the compressive HSI and then estimates endmembers via the recovered images. The recovery step takes considerable time and introduces errors into the estimation step. In this paper, we propose a novel method, by designing a type of coherent measurement matrix, to estimate endmembers directly from the compressively observed HSI data via convex geometry (CG) approaches without recovering the images. Numerical simulations show that the proposed method outperforms the traditional method with better estimation speed and better (or comparable) accuracy in both noisy and noiseless cases. PMID:25905699

  1. Interframe Adaptive Data Compression Techniques for Images.

    DTIC Science & Technology

    1979-08-01

    (Table-of-contents excerpt) 1.3.1 Predictive Coding Techniques; 1.3.2 Transform Coding Techniques; 1.3.3 Hybrid Coding Techniques; 1.4 Research Objectives; 1.5 Description ...; Chemical Plant Images; 4.2.3 X-ray Projection Images; V INTERFRAME HYBRID CODING SCHEMES; 5.1 Adaptive Interframe Hybrid Coding Scheme; 5.2 Hybrid ...; ... Images; 5.4.2 Chemical Plant Images; 5.4.3 Angiocardiogram Images; VI DATA COMPRESSION FOR NOISY CHANNELS; 6.1 Channel ...

  2. Sparsity optimized compressed sensing image recovery

    NASA Astrophysics Data System (ADS)

    Wang, Sha; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2014-05-01

    Training over-complete dictionaries that facilitate a sparse representation of the image leads to state-of-the-art results in compressed sensing image restoration. A training sparsity must be specified when the dictionary is trained, and a recovering sparsity must likewise be set during image recovery. We find that the recovering sparsity has a significant effect on the reconstruction quality. To further improve compressed sensing image recovery accuracy, we propose in this paper a method that estimates the optimal recovering sparsity from the training sparsity and uses it to control the reconstruction, achieving better reconstruction results. The method consists of three steps. First, the possible sparsity range is forecast by analyzing a large test data set; we find that the optimal recovering sparsity is typically 3 to 5 times the training sparsity. Second, to estimate the optimal recovering sparsity precisely, only a few samples are chosen at random from the compressed sensing measurements, and the corresponding image patches are reconstructed using each sparsity candidate in the possible range. Third, the sparsity corresponding to the best recovered result is chosen as the optimal recovering sparsity and used for the full image reconstruction. The computational cost of this estimation is relatively small, and the reconstruction can be much better than with the traditional method. Experimental results show that the PSNR of images recovered with our estimation method can be up to 4 dB higher than with the traditional method without sparsity estimation.
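
    A rough sketch of the sparsity-selection loop is given below; the hold-out residual criterion and the use of scikit-learn's OMP solver are stand-ins for the paper's actual selection rule and reconstruction method, and all matrix sizes are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def pick_recovering_sparsity(A, Y, training_sparsity, n_holdout=12, seed=0):
        """Try recovering-sparsity candidates (3x-5x the training sparsity, the
        range reported in the paper) on a few sample patches and keep the one
        whose OMP reconstruction best predicts held-out measurements."""
        rng = np.random.default_rng(seed)
        m = A.shape[0]
        hold = rng.choice(m, n_holdout, replace=False)
        fit = np.setdiff1d(np.arange(m), hold)
        best_k, best_err = None, np.inf
        for k in range(3 * training_sparsity, 5 * training_sparsity + 1):
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
            err = 0.0
            for y in Y.T:                                # a few sampled patches
                omp.fit(A[fit], y[fit])                  # recover from a subset
                err += np.linalg.norm(y[hold] - A[hold] @ omp.coef_) ** 2
            if err < best_err:
                best_k, best_err = k, err
        return best_k

    # Toy problem with a hypothetical effective sensing dictionary A = Phi @ Psi.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((64, 128))
    coefs = np.where(rng.random((128, 8)) < 0.08, rng.standard_normal((128, 8)), 0.0)
    Y = A @ coefs
    print("chosen recovering sparsity:", pick_recovering_sparsity(A, Y, training_sparsity=4))
    ```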

  3. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the Earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements of a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can bring performance close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed lookup table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor, which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager.
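
    A toy illustration of inter-band spectral prediction is sketched below; the per-band affine predictor is a simple stand-in for the paper's statistically computed lookup table and piecewise spatially varying predictors.

    ```python
    import numpy as np

    def band_residuals(cube):
        """Predict each spectral band from the previous one with a least-squares
        affine predictor and return the prediction residuals, which are what a
        lossless entropy coder would then encode."""
        rows, cols, bands = cube.shape
        residuals = np.empty_like(cube)
        residuals[:, :, 0] = cube[:, :, 0]              # first band coded directly
        for b in range(1, bands):
            ref = cube[:, :, b - 1].ravel()
            cur = cube[:, :, b].ravel()
            gain, offset = np.polyfit(ref, cur, 1)      # fit cur ~= gain*ref + offset
            residuals[:, :, b] = (cur - (gain * ref + offset)).reshape(rows, cols)
        return residuals

    rng = np.random.default_rng(5)
    base = rng.random((64, 64))
    # Synthetic 14-band cube with strong inter-band correlation plus noise.
    cube = np.stack([base * (1 + 0.05 * b) + 0.01 * rng.standard_normal((64, 64))
                     for b in range(14)], axis=-1)
    res = band_residuals(cube)
    print("variance: raw %.4f vs residual %.4f" % (cube[:, :, 1:].var(), res[:, :, 1:].var()))
    ```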

  4. Compressive Deconvolution in Medical Ultrasound Imaging.

    PubMed

    Chen, Zhouye; Basarab, Adrian; Kouamé, Denis

    2016-03-01

    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. By exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.

  5. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
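
    The two quality metrics used in the study can be computed with scikit-image as sketched below; the random arrays stand in for an AIA EUV image and its lossily compressed version, and no AIA data handling is shown.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare_quality(original, compressed):
        """Report the two metrics used in the study: PSNR and the mean
        structural similarity (MSSIM) index."""
        data_range = original.max() - original.min()
        psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
        mssim = structural_similarity(original, compressed, data_range=data_range)
        return psnr, mssim

    # Stand-in for an EUV image and a lossily compressed reconstruction of it.
    rng = np.random.default_rng(4)
    original = rng.random((512, 512))
    compressed = original + 0.01 * rng.standard_normal((512, 512))
    psnr, mssim = compare_quality(original, compressed)
    print(f"PSNR = {psnr:.2f} dB, MSSIM = {mssim:.4f}")
    ```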

  6. Lossy image compression for digital medical imaging systems

    NASA Astrophysics Data System (ADS)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving the optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024 CR (Computed Radiography) images and two 512 x 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 x 2048) monitor and the CT images on a Sony (1280 x 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  7. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  8. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  9. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
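
    A minimal sketch of the colormap-sorting idea follows; ordering the palette by luminance is one simple choice, not necessarily the sorting criterion studied in the paper.

    ```python
    import numpy as np

    def sort_colormap(indices, palette):
        """Reorder a colour palette by luminance and remap the index image so
        that numerically close indices point to perceptually close colours,
        restoring pixel-to-pixel correlation for predictive coding."""
        # Rough luminance of each palette entry (palette: N x 3, RGB in 0..255).
        luminance = palette @ np.array([0.299, 0.587, 0.114])
        order = np.argsort(luminance)                 # original index of each new position
        new_palette = palette[order]
        remap = np.empty(len(palette), dtype=indices.dtype)
        remap[order] = np.arange(len(palette), dtype=indices.dtype)
        return remap[indices], new_palette

    rng = np.random.default_rng(6)
    palette = rng.integers(0, 256, size=(256, 3))
    indices = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    new_indices, new_palette = sort_colormap(indices, palette)
    # The displayed image is unchanged: new_palette[new_indices] == palette[indices].
    assert np.array_equal(new_palette[new_indices], palette[indices])
    ```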

  10. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  11. Listless zerotree image compression algorithm

    NASA Astrophysics Data System (ADS)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image qualities. Moreover, the lists in SPIHT are replaced by flag maps, and lifting scheme is adopted to realize wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  12. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  13. Performance visualization for image compression in telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-04-01

    The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio and computational load. Evaluation of performance is however a much more complex and many sided issue. It is necessary to consider more deeply the requirements of the applications. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm. In telemedicine the metrication of this characteristic is subjective because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.

  14. Entangled-photon compressive ghost imaging

    SciTech Connect

    Zerom, Petros; Chan, Kam Wai Clifford; Howell, John C.; Boyd, Robert W.

    2011-12-15

    We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.

  15. MR imaging of compressive myelomalacia.

    PubMed

    Ramanauskas, W L; Wilner, H I; Metes, J J; Lazo, A; Kelly, J K

    1989-01-01

    The authors studied retrospectively 42 patients with the magnetic resonance (MR) diagnosis of myelomalacia. Depending on MR findings, the patients were grouped into early, intermediate, and late stages of myelomalacia. Early stage myelomalacia patients presented with high intensity signal changes on T2-weighted images involving the width of the affected cord. The intermediate stage patients were characterized by varying degrees of cystic necrosis of the central gray matter, better seen on T2-weighted images. Central cystic degeneration, syrinx formation, and atrophy were prominent features of the late stage of myelomalacia. Ten patients had follow-up MR examinations within 6 months of initial imaging. Two of the four early stage myelomalacia patients showed improvement in the repeat studies. The follow-up scans of the six intermediate and late stage myelomalacia patients showed either no change or progression of disease. Early stage myelomalacia may be reversible, depending on the severity of the initial spinal cord injury. Magnetic resonance can serve as a useful tool in the assessment and management of myelomalacia patients.

  16. Image compression with Iris-C

    NASA Astrophysics Data System (ADS)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCOCG colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.

  17. Expandable Image Compression System, A Modular Approach

    NASA Astrophysics Data System (ADS)

    Ho, Bruce K.; Chan, K. K.; Ishimitsu, Yoshiyuki; Lo, Shih C.; Huang, H. K.

    1987-01-01

    The full-frame bit allocation algorithm for radiological image compression developed in our laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm has been completed. It involves two stages of operations: a two-dimensional discrete cosine transform and pixel quantization in the transform space with pixel depth kept accountable by a bit allocation table. The greatest engineering challenge in implementing a hardware version of the compression system lies in the fast cosine transform of 1Kx1K images. Our design took an expandable modular approach based on the VME bus system which has a maximum data transfer rate of 48 Mbytes per second and a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. Our design allows for a maximum image size of 2K x 2K.

  18. Microseismic source imaging in a compressed domain

    NASA Astrophysics Data System (ADS)

    Vera Rodriguez, Ismael; Sacchi, Mauricio D.

    2014-08-01

    Microseismic monitoring is an essential tool for the characterization of hydraulic fractures. Fast estimation of the parameters that define a microseismic event is relevant to understand and control fracture development. The amount of data contained in the microseismic records however, poses a challenge for fast continuous detection and evaluation of the microseismic source parameters. Work inspired by the emerging field of Compressive Sensing has showed that it is possible to evaluate source parameters in a compressed domain, thereby reducing processing time. This technique performs well in scenarios where the amplitudes of the signal are above the noise level, as is often the case in microseismic monitoring using downhole tools. This paper extends the idea of the compressed domain processing to scenarios of microseismic monitoring using surface arrays, where the signal amplitudes are commonly at the same level as, or below, the noise amplitudes. To achieve this, we resort to the use of an imaging operator, which has previously been found to produce better results in detection and location of microseismic events from surface arrays. The operator in our method is formed by full-waveform elastodynamic Green's functions that are band-limited by a source time function and represented in the frequency domain. Where full-waveform Green's functions are not available, ray tracing can also be used to compute the required Green's functions. Additionally, we introduce the concept of the compressed inverse, which derives directly from the compression of the migration operator using a random matrix. The described methodology reduces processing time at a cost of introducing distortions into the results. However, the amount of distortion can be managed by controlling the level of compression applied to the operator. Numerical experiments using synthetic and real data demonstrate the reductions in processing time that can be achieved and exemplify the process of selecting the

  19. Lossless compression for three-dimensional images

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoli; Pearlman, William A.

    2004-01-01

    We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study -- 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, the performances by using different size of coding units are compared. It shows that increasing the size of coding unit improves the performance somewhat. Second, the performances by using different integer wavelet transforms are compared for AT-3DSPIHT, 3D-SPECK and 3D-CB-EZW. None of the considered filters always performs the best for all data sets and algorithms. At last, we compare the different lossless compression algorithms by applying integer wavelet transform on the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs the best almost all the time, achieving average of 12% decreases in file size compared with JPEG2000 multi-component, the second performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs the best, yielding average 5.8% and 8.9% decreases in file size compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than 2D algorithms.

  20. Complementary compressive imaging for the telescopic system

    PubMed Central

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie

    2014-01-01

    Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device, while the light reflected in the other direction is not collected. In fact, the sampling in these two reflection orientations is correlated, and in view of this we propose, for the first time to our knowledge, a sampling concept of complementary compressive imaging. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. It is evident that this technique has the advantages of a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and needs fewer measurements in total than any single-arm sampling; it can therefore be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in the remote-sensing area. PMID:25060569

  1. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  2. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  3. Compressed imaging by sparse random convolution.

    PubMed

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array, as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.

  4. Multiwavelet-transform-based image compression techniques

    NASA Astrophysics Data System (ADS)

    Rao, Sathyanarayana S.; Yoon, Sung H.; Shenoy, Deepak

    1996-10-01

    Multiwavelet transforms are a new class of wavelet transforms that use more than one prototype scaling function and wavelet in the multiresolution analysis/synthesis. The popular Geronimo-Hardin-Massopust multiwavelet basis functions have properties of compact support, orthogonality, and symmetry which cannot be obtained simultaneously in scalar wavelets. The performance of multiwavelets in still image compression is studied using vector quantization of multiwavelet subbands with a multiresolution codebook. The coding gain of multiwavelets is compared with that of other well-known wavelet families using performance measures such as unified coding gain. Implementation aspects of multiwavelet transforms such as pre-filtering/post-filtering and symmetric extension are also considered in the context of image compression.

  5. Efficient lossless compression scheme for multispectral images

    NASA Astrophysics Data System (ADS)

    Benazza-Benyahia, Amel; Hamdi, Mohamed; Pesquet, Jean-Christophe

    2001-12-01

    Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge which requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems however more appropriate to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition along the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent works, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and the spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose an extension to the vector case of Shapiro's embedded zerotrees of wavelet coefficients (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images demonstrate the performance of the resulting global compression scheme.

  6. Compressive Hyperspectral Imaging and Anomaly Detection

    DTIC Science & Technology

    2010-02-01

    (Report excerpt) ... to obtain the desired jointly sparse a's, one adjusts a and b. Section 4.4, Hyperspectral Image Reconstruction and Denoising, applies the model x* = Da' + e. Cited works include "... iteration for compressive sensing and sparse denoising," Communications in Mathematical Sciences, 2008; W. Yin, "Analysis and generalizations of ..."; and M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing.

  7. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
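
    The quantize-then-compress idea for floating-point images can be sketched as below; the noise estimator and the choice of step size are simplified stand-ins for what fpack actually does.

    ```python
    import numpy as np

    def quantize_float_image(img, q=4.0):
        """Quantize floating-point pixels as scaled integers so that the step
        size is a fraction 1/q of the estimated background noise; the resulting
        integer array compresses losslessly far better than the raw floats."""
        # Crude noise estimate from the median absolute deviation of successive
        # horizontal pixel differences (divided by sqrt(2) for the differencing).
        diffs = np.diff(img, axis=1).ravel()
        sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs))) / np.sqrt(2)
        scale = sigma / q                               # quantization step
        zero = img.min()
        quantized = np.round((img - zero) / scale).astype(np.int32)
        return quantized, scale, zero                   # scale/zero needed to restore

    def dequantize(quantized, scale, zero):
        return quantized * scale + zero

    rng = np.random.default_rng(7)
    img = 100.0 + 5.0 * rng.standard_normal((256, 256))   # noisy float image
    qimg, scale, zero = quantize_float_image(img)
    restored = dequantize(qimg, scale, zero)
    print("max error relative to noise sigma:", np.abs(restored - img).max() / 5.0)
    ```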

  8. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from an image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, and supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  9. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  10. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of value-of-interest transformation and found significant masking of artifacts when window is widened.

  11. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
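
    For users working in Python, the same FITS tiled-image compression convention is also available through astropy, as sketched below; this illustrates the convention rather than the fpack program itself, and the image and compression settings are only examples.

    ```python
    import numpy as np
    from astropy.io import fits

    # Build a test image and store it with FITS tiled-image compression
    # (Rice algorithm), the same convention fpack uses.
    image = np.random.default_rng(8).poisson(100, size=(1024, 1024)).astype(np.int32)
    comp_hdu = fits.CompImageHDU(data=image, compression_type='RICE_1')
    fits.HDUList([fits.PrimaryHDU(), comp_hdu]).writeto('tiled_rice.fits', overwrite=True)

    # Reading the compressed file decompresses transparently, tile by tile.
    with fits.open('tiled_rice.fits') as hdul:
        restored = hdul[1].data
    assert np.array_equal(restored, image)           # Rice on integers is lossless
    ```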

  12. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  13. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  14. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  15. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
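
    A minimal sketch of the two-color (thresholding) step follows; the thresholds and the rule for combining the two passes are illustrative assumptions, and the other stages of the patented method (edge filling, Huffman coding, decimation) are not shown.

    ```python
    import numpy as np

    def to_two_color(gray, threshold):
        """Convert a grayscale scan to a two-colour image: pixels darker than
        the threshold become black (0), lighter ones become white (255)."""
        return np.where(gray < threshold, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(9)
    scan = rng.integers(0, 256, size=(200, 300)).astype(np.uint8)   # stand-in scan
    first_pass = to_two_color(scan, threshold=96)     # scanned-image pass
    second_pass = to_two_color(scan, threshold=160)   # filled-edge pass (different cut)
    # Combine the two passes; keeping a pixel black if either pass marks it black
    # is one simple choice, not necessarily the patent's combination rule.
    combined = np.minimum(first_pass, second_pass)
    ```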

  16. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  17. A Scheme for Compressing Floating-Point Images

    NASA Astrophysics Data System (ADS)

    White, Richard L.; Greenfield, Perry

    While many techniques have been used to compress integer data, compressing floating-point data presents a number of additional problems. We have implemented a scheme for compressing floating-point images that is fast, robust, and automatic, that allows random access to pixels without decompressing the whole image, and that generally has a scientifically negligible effect on the noise present in the image. The compressed data are stored in a FITS binary table. Most astronomical images can be compressed by approximately a factor of 3, using conservative settings for the permitted level of changes in the data. We intend to work with NOAO to incorporate this compression method into the IRAF image kernel, so that FITS images compressed using this scheme can be accessed transparently from IRAF applications without any explicit decompression steps. The scheme is simple, and it should be possible to include it in other FITS libraries as well.

  18. Discrete directional wavelet bases for image compression

    NASA Astrophysics Data System (ADS)

    Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar

    2003-06-01

    The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps design and computation simple, but cannot properly capture all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted and this leads to a multi-directional frame. This transform can be applied in many areas like denoising, non-linear approximation and compression. The results on non-linear approximation and denoising show very interesting gains compared to the standard two-dimensional analysis.

  19. Centralized and interactive compression of multiview images

    NASA Astrophysics Data System (ADS)

    Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan

    2011-09-01

    In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers each related to a constant depth in the scene. The first algorithm is a centralized scheme where each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to efficiently deal with occlusions and disparity variations for different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an unknown order a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles which reduces the inter-view redundancy and facilitates random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.

  20. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.

  1. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
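
    The on-board degradation step can be sketched as below; the block-averaging spatial filter and the band-grouping spectral filter are simple stand-ins for the degradation methodologies evaluated in the paper, and the ground-side fusion step is not shown.

    ```python
    import numpy as np

    def spatially_degrade(cube, factor=4):
        """Low-resolution hyperspectral image: average non-overlapping
        factor x factor spatial blocks, keeping all spectral bands."""
        r, c, b = cube.shape
        return cube[:r - r % factor, :c - c % factor, :].reshape(
            r // factor, factor, c // factor, factor, b).mean(axis=(1, 3))

    def spectrally_degrade(cube, n_groups=6):
        """High-resolution multispectral image: average groups of adjacent
        spectral bands, keeping full spatial resolution."""
        groups = np.array_split(np.arange(cube.shape[2]), n_groups)
        return np.stack([cube[:, :, g].mean(axis=2) for g in groups], axis=2)

    rng = np.random.default_rng(10)
    hsi = rng.random((128, 128, 96))                  # illustrative scene
    lowres_hsi = spatially_degrade(hsi)               # downlinked: 32 x 32 x 96
    multispectral = spectrally_degrade(hsi)           # downlinked: 128 x 128 x 6
    ratio = hsi.size / (lowres_hsi.size + multispectral.size)
    print(f"fixed-in-advance compression ratio ~ {ratio:.1f}:1")
    ```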

  2. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
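
    The following sketch illustrates the two code families named above. The Golomb and exponential-Golomb encoders follow their textbook definitions; the fixed parameters and the (run length, nonzero value) parsing are illustrative and do not reproduce the adaptive parameter-selection rule described in the abstract.

    ```python
    def golomb_encode(n, m):
        """Golomb code for nonnegative integer n with parameter m (m > 0)."""
        q, r = divmod(n, m)
        code = '1' * q + '0'                      # unary quotient
        b = m.bit_length() - 1                    # truncated binary remainder
        if (1 << b) == m:                         # m is a power of two (Rice code)
            code += format(r, '0{}b'.format(b)) if b else ''
        else:
            cutoff = (1 << (b + 1)) - m
            code += format(r, '0{}b'.format(b)) if r < cutoff else format(r + cutoff, '0{}b'.format(b + 1))
        return code

    def exp_golomb_encode(n, k=0):
        """Order-k exponential-Golomb code for nonnegative integer n."""
        value = (n >> k) + 1
        prefix = '0' * (value.bit_length() - 1)   # length prefix
        return prefix + format(value, 'b') + (format(n & ((1 << k) - 1), '0{}b'.format(k)) if k else '')

    # Encode a run-length-parsed sequence: (zero-run length, nonzero value) pairs.
    pairs = [(3, 2), (0, 5), (7, 1)]
    bitstream = ''.join(exp_golomb_encode(run, k=1) + golomb_encode(val - 1, m=2) for run, val in pairs)
    print(bitstream)
    ```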

  3. Image Data Compression In A Personal Computer Environment

    NASA Astrophysics Data System (ADS)

    Farrelle, Paul M.; Harrington, Daniel G.; Jain, Anil K.

    1988-12-01

    This paper describes an image compression engine that is valuable for compressing virtually all types of images that occur in a personal computer environment. This allows efficient handling of still frame video images (monochrome or color) as well as documents and graphics (black-and-white or color) for archival and transmission applications. Through software control different image sizes, bit depths, and choices between lossless compression, high speed compression and controlled error compression are allowed. Having integrated a diverse set of compression algorithms on a single board, the device is suitable for a multitude of picture archival and communication (PAC) applications including medical imaging, electronic publishing, prepress imaging, document processing, law enforcement and forensic imaging.

  4. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change. Analysis based on illegally altered images could result in a wrong medical decision. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain perceptual and diagnostic image quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image-watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
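
    A minimal LZW round trip, illustrating the dictionary-based lossless coding idea on a toy watermark byte string; this is generic textbook LZW, not the specific implementation evaluated in the paper.

    ```python
    def lzw_compress(data: bytes):
        """Classic LZW: grow a dictionary of byte strings, emit integer codes."""
        dictionary = {bytes([i]): i for i in range(256)}
        w, codes = b'', []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = len(dictionary)
                w = bytes([byte])
        if w:
            codes.append(dictionary[w])
        return codes

    def lzw_decompress(codes):
        """Rebuild the dictionary on the fly and recover the original bytes exactly."""
        dictionary = {i: bytes([i]) for i in range(256)}
        w = dictionary[codes[0]]
        out = bytearray(w)
        for code in codes[1:]:
            entry = dictionary[code] if code in dictionary else w + w[:1]
            out += entry
            dictionary[len(dictionary)] = w + entry[:1]
            w = entry
        return bytes(out)

    # A toy "watermark" (ROI bits plus a secret key) round-trips losslessly.
    watermark = b'ROI:0101101001' * 8 + b'KEY:42'
    codes = lzw_compress(watermark)
    assert lzw_decompress(codes) == watermark
    print(len(watermark), 'bytes ->', len(codes), 'codes')
    ```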

  5. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
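
    A toy sketch of the two ingredients mentioned above, scan-line skipping and run-length coding of each kept line; the binarization threshold and skip factor are arbitrary, and the adaptive coding and convolutional error protection of the flight system are not modeled.

    ```python
    import numpy as np

    def compress_frame(frame, skip=2, threshold=128):
        """Keep every `skip`-th scan line, binarize, and run-length encode each kept line."""
        encoded = []
        for line in frame[::skip]:
            bits = (line >= threshold).astype(np.uint8)
            # Run lengths of alternating segments, starting from the first bit value.
            changes = np.flatnonzero(np.diff(bits)) + 1
            runs = np.diff(np.concatenate(([0], changes, [bits.size])))
            encoded.append((int(bits[0]), runs.tolist()))
        return encoded

    frame = (np.random.rand(64, 256) * 255).astype(np.uint8)
    encoded = compress_frame(frame)
    coded_runs = sum(len(runs) for _, runs in encoded)
    print('lines kept:', len(encoded), '| runs to encode:', coded_runs, '| raw bits:', frame.size * 8)
    ```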

  6. Optimal Compression of Floating-Point FITS Images

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-12-01

    Lossless compression (e.g., with GZIP) of floating-point format astronomical FITS images is ineffective and typically only reduces the file size by 10% to 30%. We describe a much more effective compression method that is supported by the publicly available fpack and funpack FITS image compression utilities that can compress floating point images by a factor of 10 without loss of significant scientific precision. A “subtractive dithering” technique is described which permits coarser quantization (and thus higher compression) than is possible with simple scaling methods.
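
    The quantization idea behind this approach can be sketched as follows: scale the floating-point pixels by a step tied to the noise, add a reproducible pseudorandom dither before rounding, and subtract the same dither on restoration so the quantization error stays zero-mean. The step size, seed handling and random generator here are simplified stand-ins for what fpack actually does.

    ```python
    import numpy as np

    def quantize_with_dither(image, q, seed=0):
        """Scale float pixels by 1/q and add a reproducible uniform dither before rounding."""
        rng = np.random.default_rng(seed)
        dither = rng.random(image.shape)                  # uniform in [0, 1)
        return np.round(image / q + dither).astype(np.int32), q, seed

    def restore(quantized, q, seed):
        """Subtract the same dither sequence so the quantization error stays zero-mean."""
        rng = np.random.default_rng(seed)
        dither = rng.random(quantized.shape)
        return (quantized - dither) * q

    # Step chosen as a fraction of the simulated noise level (fpack similarly ties
    # its step to the measured background noise).
    image = np.random.normal(1000.0, 10.0, size=(256, 256)).astype(np.float32)
    q = 10.0 / 16.0
    codes, q, seed = quantize_with_dither(image, q)
    recovered = restore(codes, q, seed)
    print('rms error:', float(np.sqrt(np.mean((recovered - image) ** 2))))
    ```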

  7. Multiple snapshot colored compressive spectral imager

    NASA Astrophysics Data System (ADS)

    Correa, Claudia V.; Hinojosa, Carlos A.; Arce, Gonzalo R.; Arguello, Henry

    2017-04-01

    The snapshot colored compressive spectral imager (SCCSI) is a recent compressive spectral imaging (CSI) architecture that senses the spatial and spectral information of a scene in a single snapshot by means of a colored mosaic FPA detector and a dispersive element. Commonly, CSI architectures allow multiple snapshot acquisition, yielding improved reconstructions of spatially detailed and spectrally rich scenes. Each snapshot is captured employing a different coding pattern. In principle, SCCSI does not admit multiple snapshots since the pixelated tiling of optical filters is directly attached to the detector. This paper extends the concept of SCCSI to a system admitting multiple snapshot acquisition by rotating the dispersive element, so the dispersed spatio-spectral source is coded and integrated at different detector pixels in each rotation. Thus, a different set of coded projections is captured using the same optical components of the original architecture. The mathematical model of the multishot SCCSI system is presented along with several simulations. Results show that a gain up to 7 dB of peak signal-to-noise ratio is achieved when four SCCSI snapshots are compared to a single snapshot reconstruction. Furthermore, a gain up to 5 dB is obtained with respect to state-of-the-art architecture, the multishot CASSI.

  8. Lossless compression of JPEG2000 whole slide images is not required for diagnostic virtual microscopy.

    PubMed

    Kalinski, Thomas; Zwönitzer, Ralf; Grabellus, Florian; Sheu, Sien-Yi; Sel, Saadettin; Hofmann, Harald; Roessner, Albert

    2011-12-01

    The use of lossy compression in medical imaging is controversial, although it is unavoidable if large data volumes are to be reduced. In contrast with lossy compression, lossless compression does not impair image quality. In addition to our previous studies, we evaluated virtual 3-dimensional microscopy using JPEG2000 whole slide images of gastric biopsy specimens with or without Helicobacter pylori gastritis, using lossless compression (1:1) or lossy compression at different compression levels: 5:1, 10:1, and 20:1. The virtual slides were diagnosed in a blinded manner by 3 pathologists using the updated Sydney classification. The results showed no significant differences in the diagnosis of H pylori between the different levels of compression in virtual microscopy. We conclude that lossless compression is not required for diagnostic virtual microscopy. The limits of lossy compression in virtual microscopy without a loss of diagnostic quality still need to be determined. Analogous to the processes in radiology, recommendations for the use of lossy compression in diagnostic virtual microscopy have to be worked out by pathology societies.

  9. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time-consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD

  10. Wavelet-based Image Compression using Subband Threshold

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2002-11-01

    Wavelet-based image compression has been a focus of recent research. In this paper, we propose a compression technique based on modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight at every level. This minimum-weight subband at each level, which contributes least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
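
    A small sketch of the subband-weighting step described above: for each decomposition level, compute a weight for each detail subband, select the minimum-weight subband, and zero its low-valued coefficients before zerotree coding. The mean-absolute-value weight and the fixed threshold are illustrative assumptions, since the abstract does not specify them.

    ```python
    import numpy as np

    def threshold_min_weight_subband(levels, threshold):
        """levels: list of (LH, HL, HH) detail-subband triples, coarsest first."""
        for triple in levels:
            weights = [float(np.mean(np.abs(sb))) for sb in triple]   # per-subband weight
            k = int(np.argmin(weights))                               # least significant subband
            sb = triple[k]
            sb[np.abs(sb) < threshold] = 0.0                          # discard low-valued data
        return levels

    # Toy two-level decomposition (stand-in for the EZW wavelet coefficients).
    rng = np.random.default_rng(1)
    levels = [tuple(rng.normal(0, s, (16, 16)) for _ in range(3)) for s in (8.0, 2.0)]
    levels = threshold_min_weight_subband(levels, threshold=1.0)
    zeros = sum(int(np.sum(sb == 0)) for triple in levels for sb in triple)
    print('coefficients zeroed:', zeros)
    ```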

  11. Lossless Astronomical Image Compression and the Effects of Noise

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; Seaman, R.; White, R. L.

    2009-04-01

    We compare a variety of lossless image compression methods on a large sample of astronomical images and show how the compression ratios and speeds of the algorithms are affected by the amount of noise (that is, entropy) in the images. In the ideal case where the image pixel values have a random Gaussian distribution, the equivalent number of uncompressible noise bits per pixel is given by N = log2(sigma * sqrt(12)), and the lossless compression ratio is given by R = BITPIX / (N + K), where BITPIX is the bit length of the pixel values (typically 16 or 32), sigma is the dispersion of the pixel values, and K is a measure of the efficiency of the compression algorithm. We show that real astronomical CCD images also closely follow these same relations, by using a robust algorithm for measuring the equivalent number of noise bits from the dispersion of the pixel values in background regions of the image. We perform image compression tests on a large sample of 16-bit integer astronomical CCD images using the GZIP compression program and using a newer FITS tiled-image compression method that currently supports four compression algorithms: Rice, Hcompress, PLIO, and the same Lempel-Ziv algorithm that is used by GZIP. Overall, the Rice compression algorithm strikes the best balance of compression and computational efficiency; it is 2-3 times faster and produces about 1.4 times greater compression than GZIP (the uncompression speeds are about the same). The Rice algorithm has a measured K value of 1.2 bits per pixel, and thus produces 75%-90% (depending on the amount of noise in the image) as much compression as an ideal algorithm with K = 0. Hcompress produces slightly better compression but at the expense of three times more CPU time than Rice. Compression tests on a sample of 32-bit integer images show similar results, but the relative speed and compression ratio advantage of Rice over GZIP is even greater. We also briefly discuss a technique for compressing floating-point images that converts the pixel values to scaled integers. The image compression and uncompression
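
    A short sketch of these relations, assuming the forms N = log2(sigma * sqrt(12)) and R = BITPIX / (N + K) quoted above, with the noise dispersion estimated robustly from a background region via the median absolute deviation (one reasonable choice of robust estimator; the paper's exact algorithm may differ).

    ```python
    import numpy as np

    def noise_bits(background_pixels):
        """Equivalent uncompressible noise bits per pixel, N = log2(sigma * sqrt(12))."""
        pixels = np.asarray(background_pixels, dtype=np.float64)
        sigma = 1.4826 * np.median(np.abs(pixels - np.median(pixels)))   # robust dispersion (MAD)
        return np.log2(sigma * np.sqrt(12.0))

    def predicted_ratio(noise_bits_per_pixel, bitpix=16, k=1.2):
        """Predicted lossless compression ratio R = BITPIX / (N + K)."""
        return bitpix / (noise_bits_per_pixel + k)

    # Synthetic 16-bit CCD background: bias level plus Gaussian read/sky noise.
    background = np.random.normal(1000.0, 12.0, size=100_000)
    n = noise_bits(background)
    print(f'noise bits/pixel ~ {n:.2f}, predicted ratio ~ {predicted_ratio(n):.2f}')
    ```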

  12. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.

  13. Chronic edema of the lower extremities: international consensus recommendations for compression therapy clinical research trials.

    PubMed

    Stout, N; Partsch, H; Szolnoky, G; Forner-Cordero, I; Mosti, G; Mortimer, P; Flour, M; Damstra, R; Piller, N; Geyer, M J; Benigni, J-P; Moffat, C; Cornu-Thenard, A; Schingale, F; Clark, M; Chauveau, M

    2012-08-01

    Chronic edema is a multifactorial condition affecting patients with various diseases. Although the pathophysiology of edema varies, compression therapy is a basic tenet of treatment, vital to reducing swelling. Clinical trials are disparate or lacking regarding specific protocols and application recommendations for compression materials and methodology to enable optimal efficacy. Compression therapy is a basic treatment modality for chronic leg edema; however, the evidence base for the optimal application, duration and intensity of compression therapy is lacking. The aim of this document was to present the proceedings of a day-long international expert consensus group meeting that examined the current state of the science for the use of compression therapy in chronic edema. An expert consensus group met in Brighton, UK, in March 2010 to examine the current state of the science for compression therapy in chronic edema of the lower extremities. Panel discussions and open space discussions examined the current literature, clinical practice patterns, common materials and emerging technologies for the management of chronic edema. This document outlines a proposed clinical research agenda focusing on compression therapy in chronic edema. Future trials comparing different compression devices, materials, pressures and parameters for application are needed to enhance the evidence base for optimal chronic edema management. Important outcome measures and methods of pressure and edema quantification are outlined. Future trials are encouraged to optimize compression therapy in chronic edema of the lower extremities.

  14. The Effect Of Pre-Processing On Image Compression

    NASA Astrophysics Data System (ADS)

    Cookson, J.; Thoma, G.

    1986-10-01

    The Lister Hill National Center for Biomedical Communications, the National Library of Medicine's research division, is currently engaged in studying the application of Electronic Document Storage and Retrieval (EDSR) systems to a library environment. To accomplish this, an EDSR prototype has been built and is currently in use as a laboratory test-bed. The system consists of CCD scanners for document digitization, high resolution CRT document displays, hardcopy output devices, and optical and magnetic disk storage devices, all under the control of a PDP-11/44 computer. Prior to storage and transmission, the captured document images undergo processing operations that enhance their quality, eliminate degradations and remove redundancy. It is postulated that a "pre-processing" stage that removes extraneous material from the raw image data could improve the performance of the processing operations. The processing operation selected to prove this hypothesis is image compression, an important feature to economically extend on-line image storage capacity and increase image transfer speed in the EDSR system. The particular technique selected for implementation is one-dimensional run-length coding (CCITT Recommendation T.4), because it is an established standard and appropriate as a baseline system. The pre-processing operations on the raw image data are border removal and page centering. After centering the images, which are approximately 6 by 9 inches in the examples picked, in an 8.5 by 11 inch field, the "noisy" border areas are then made white. These operations are done electronically in a digital memory under operator control. For a selected set of pages, mostly comprising title pages and tables of contents, the result is an average improvement in compression ratios by a factor of over 3.

  15. KRESKA: A compression system for small and very large images

    NASA Technical Reports Server (NTRS)

    Ohnesorge, Krystyna W.; Sennhauser, Rene

    1995-01-01

    An effective lossless compression system for grayscale images is presented using finite context variable order Markov models. A new method to accurately estimate the probability of the escape symbol is proposed. The choice of the best model order and rules for selecting context pixels are discussed. Two context precision and two symbol precision techniques to handle noisy image data with Markov models are introduced. Results indicate that finite context variable order Markov models lead to effective lossless compression systems for small and very large images. The system achieves higher compression ratios than some of the better known image compression techniques such as lossless JPEG, JBIG, or FELICS.

  16. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
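
    The sketch below conveys the flavor of the rate-distortion trade-off: for each DCT coefficient position, a distortion value and a rate estimate are computed for every candidate quantization value, and the quantizer minimizing a Lagrangian cost is selected. This per-position Lagrangian search is a simplification; the patent describes a dynamic-programming optimization over the full rate and distortion arrays.

    ```python
    import numpy as np

    def entropy_bits(symbols):
        """Empirical first-order entropy (bits/sample) of an integer array."""
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def pick_quantizer(coeffs, candidates, lam):
        """Choose the q minimizing distortion + lambda * rate for one DCT position."""
        best_q, best_cost = None, np.inf
        for q in candidates:
            quantized = np.round(coeffs / q)
            distortion = np.mean((coeffs - quantized * q) ** 2)
            rate = entropy_bits(quantized.astype(np.int64))
            cost = distortion + lam * rate
            if cost < best_cost:
                best_q, best_cost = q, cost
        return best_q

    # coeff_stats[k] holds sampled DCT coefficients of position k over many 8x8 blocks (toy statistics).
    rng = np.random.default_rng(0)
    coeff_stats = [rng.laplace(0, 200 / (1 + k), size=2000) for k in range(64)]
    table = np.array([pick_quantizer(c, candidates=range(1, 256, 4), lam=50.0)
                      for c in coeff_stats]).reshape(8, 8)
    print(table)
    ```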

  17. The impact of lossless image compression to radiographs

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.; Abel, Jürgen; Weiss, Claudia

    2006-03-01

    The increasing number of digital imaging modalities results in data volumes of several terabytes per year that must be transferred and archived in a common-sized hospital. Hence, data compression is an important issue for picture archiving and communication systems (PACS). The effect of lossy image compression is frequently analyzed with respect to images from a certain modality supporting a certain diagnosis. However, novel compression schemes allowing efficient but lossless compression have been developed recently. In this study, we compare the lossless compression schemes embedded in the tagged image file format (TIFF), graphics interchange format (GIF), and Joint Photographic Experts Group (JPEG 2000) formats with the Burrows-Wheeler compression algorithm (BWCA) with respect to image content and origin. A repeated-measures ANOVA was based on 1,200 images in total. Statistically significant effects (p < 0.0001) of compression scheme, image content, and image origin were found. The best mean compression factor of 3.5 (2.272 bpp) was obtained applying the BWCA to secondarily digitized radiographs of the head, while the lowest factor of 1.05 (7.587 bpp) resulted from the TIFF packbits algorithm applied to pelvis images captured digitally. Overall, the BWCA is slightly but significantly more effective than JPEG 2000. Both compression schemes reduce the required bits per pixel (bpp) below 3. Also, secondarily digitized images are more compressible than directly digital ones. Interestingly, JPEG outperforms the BWCA for directly digital images regardless of image content, while the BWCA performs better than JPEG on secondarily digitized radiographs. In conclusion, efficient lossless image compression schemes are available for PACS.

  18. Effects on MR images compression in tissue classification quality

    NASA Astrophysics Data System (ADS)

    Santalla, H.; Meschino, G.; Ballarin, V.

    2007-11-01

    It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of "quality" is essential. What do we understand by "quality"? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which actually depend on the posterior use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of their influence on automatic tissue classification accomplished with these images.

  19. High Bit-Depth Medical Image Compression with HEVC.

    PubMed

    Parikh, Saurin; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2017-01-27

    Efficient storage and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as HEVC can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase compression performance, compared to JPEG 2000, by over 54%. In addition, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  20. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD of C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows clear advantages in both operation time and PSNR.
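
    A NumPy sketch of the real-SVD scheme: the three color channels are stacked into one real rectangular matrix, a truncated SVD is kept, and the compression ratio and PSNR are measured. Placing the R, G and B blocks side by side is one plausible reading of the matrix C described above, not necessarily the authors' exact construction.

    ```python
    import numpy as np

    def svd_compress_color(img, k):
        """img: (H, W, 3) float array. Keep the k largest singular triplets of C = [R | G | B]."""
        h, w, _ = img.shape
        C = img.transpose(0, 2, 1).reshape(h, 3 * w)        # place R, G, B blocks side by side
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        return U[:, :k], s[:k], Vt[:k, :]

    def svd_reconstruct(U, s, Vt, h, w):
        C = (U * s) @ Vt
        return C.reshape(h, 3, w).transpose(0, 2, 1)

    img = np.random.rand(256, 256, 3)
    k = 30
    U, s, Vt = svd_compress_color(img, k)
    rec = svd_reconstruct(U, s, Vt, 256, 256)
    stored = U.size + s.size + Vt.size
    cr = img.size / stored
    psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))
    print(f'CR ~ {cr:.2f}, PSNR ~ {psnr:.1f} dB')
    ```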

  1. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD of C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows clear advantages in both operation time and PSNR. PMID:28257451

  2. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all of the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.

  3. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all of the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.

  4. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

    The authors present the W-transform for multiresolution signal decomposition. One difference between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to a power of two, and it does not call for extension of the signal; thus, the W-transform is a convenient tool for image compression. The authors present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  5. A Framework of Hyperspectral Image Compression using Neural Networks

    SciTech Connect

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; Velez, Carlos; Gonzalez, Jenipher

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same scene is imaged multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  6. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same scene is imaged multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  7. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed with the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
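
    The matching idea can be sketched with a general-purpose codec standing in for JPEG: compress the probe, the gallery image and a mixed image, and combine the three sizes into a composite score. The abstract does not give the exact CCR formula, so the ratio used below (sum of individual sizes over mixed size) is only an illustrative choice.

    ```python
    import zlib
    import numpy as np

    def compressed_size(img):
        return len(zlib.compress(img.tobytes(), level=9))

    def ccr(probe, gallery):
        """Composite compression ratio: a similar pair makes the mixed image more compressible."""
        mixed = np.concatenate([probe, gallery], axis=1)      # form the mixed image
        return (compressed_size(probe) + compressed_size(gallery)) / compressed_size(mixed)

    # Toy gallery: one entry is a near-copy of the probe, the other is unrelated.
    rng = np.random.default_rng(0)
    probe = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    near_copy = probe.copy()
    near_copy[::16, ::16] ^= 1                                # perturb a few pixels
    unrelated = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    scores = {'near_copy': ccr(probe, near_copy), 'unrelated': ccr(probe, unrelated)}
    print(scores, '-> best match:', max(scores, key=scores.get))
    ```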

  8. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  9. Statistically lossless image compression for CR and DR

    NASA Astrophysics Data System (ADS)

    Young, Susan S.; Whiting, Bruce R.; Foos, David H.

    1999-05-01

    This paper proposes an image compression algorithm that can improve compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS. The companding and compression processes can also be augmented by a pre-processing step that first segments the foreground portions of the image and then substitutes the foreground pixel values with a uniform code value. The quantization companding function approach is based on a theory that relates the onset of distortion to changes in the second-order statistics of an image. By choosing an appropriate companding function, the properties of the second-order statistics can be retained to within an insignificant error, and the companded image can then be losslessly compressed using JPEG-LS; we call the reconstructed image statistically lossless. The approach offers a theoretical basis supporting the integrity of the compressed-reconstructed data relative to the original image, while providing a modest level of compression efficiency. This intermediate level of compression could help increase the comfort level of radiologists who do not currently utilize lossy compression and may also have benefits from a medico-legal perspective.

  10. Treatment of metastatic spinal cord compression: cepo review and clinical recommendations

    PubMed Central

    L’Espérance, S.; Vincent, F.; Gaudreault, M.; Ouellet, J.A.; Li, M.; Tosikyan, A.; Goulet, S.

    2012-01-01

    Background Metastatic spinal cord compression (mscc) is an oncologic emergency that, unless diagnosed early and treated appropriately, can lead to permanent neurologic impairment. After an analysis of relevant studies evaluating the effectiveness of various treatment modalities, the Comité de l’évolution des pratiques en oncologie (cepo) made recommendations on mscc management. Method A review of the scientific literature published up to February 2011 considered only phase ii and iii trials that included assessment of neurologic function. A total of 26 studies were identified. Recommendations Considering the evidence available to date, cepo recommends that cancer patients with mscc be treated by a specialized multidisciplinary team; that dexamethasone 16 mg daily be administered to symptomatic patients as soon as mscc is diagnosed or suspected; that high-loading-dose corticosteroids be avoided; that histopathologic diagnosis and scores from scales evaluating prognosis and spinal instability be considered before treatment; that corticosteroids and chemotherapy with radiotherapy be offered to patients with spinal cord compression caused by myeloma, lymphoma, or germ cell tumour without sign of spinal instability or compression by bone fragment; that short-course radiotherapy be administered to patients with spinal cord compression and short life expectancy; that long-course radiotherapy be administered to patients with inoperable spinal cord compression and good life expectancy; that decompressive surgery followed by long-course radiotherapy be offered to appropriate symptomatic mscc patients (including spinal instability, displacement of vertebral fragment); and that patients considered for surgery have a life expectancy of at least 3–6 months. PMID:23300371

  11. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.

  12. Compressive optical image watermarking using joint Fresnel transform correlator architecture

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie

    2017-02-01

    A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image to perform optical image watermarking by use of the joint Fresnel transform correlator architecture. Then, the watermarked image is compressed to a much smaller volume of signal data using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed via compressive sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the technique is effective and suitable for optical image security transmission in future all-optical networks, owing to its completely optical implementation and the greatly reduced volume of hologram data.

  13. Texture-based medical image retrieval in compressed domain using compressive sensing.

    PubMed

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A

    2014-01-01

    Content-based image retrieval has gained considerable attention in today's scenario as a useful tool in many applications; texture-based retrieval is one of them. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields most affected, as image databases have grown very large and retrieving the image of interest has become a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples of unknowns, does not require a close match between the sampling pattern and the characteristic image structure, and offers increased acquisition speed and enhanced image quality.

  14. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is more and more demanding in terms of image compression performance. The instrument resolution, agility and swath of Earth observation satellites are continuously increasing, multiplying by a factor of 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC, which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation or scientific data. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.

  15. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01


  16. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
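
    A sketch of the pipeline's stages using off-the-shelf tools: decimate, JPEG-encode, decode, interpolate back to the original size, and sharpen edges. Pillow's Lanczos/bicubic resampling, JPEG codec and unsharp-mask filter stand in for the patent's specific decimation, compression and sharpening techniques, and the input path is illustrative.

    ```python
    import io
    from PIL import Image, ImageFilter

    def compress_reduced(img, factor=2, quality=75):
        """Decimate, then JPEG-encode the reduced image (the 'transmit' side)."""
        small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
        buf = io.BytesIO()
        small.save(buf, format='JPEG', quality=quality)
        return buf.getvalue()

    def decompress_expanded(jpeg_bytes, size):
        """Decode, interpolate back to the original array size, and sharpen edges."""
        small = Image.open(io.BytesIO(jpeg_bytes))
        restored = small.resize(size, Image.BICUBIC)
        return restored.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))

    # Round trip on a test image from disk (path is illustrative).
    original = Image.open('test_image.png').convert('L')
    payload = compress_reduced(original, factor=2, quality=75)
    reconstructed = decompress_expanded(payload, original.size)
    print('compressed bytes:', len(payload), 'for', original.size, 'pixels')
    ```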

  17. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  18. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  19. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  20. Comparative analysis of infrared images degraded by lossy compression techniques

    NASA Astrophysics Data System (ADS)

    Toussaint, W. A.; Weber, Reed A.

    2015-09-01

    This work addresses image degradation introduced by lossy compression techniques and the effects of such degradation on signal detection statistics for applications in fast-framing (<100 Hz) IR image analysis. As future space systems make use of increasingly higher pixel count IR focal plane arrays, data generation rates are anticipated to become too copious for continuous download. The prevailing solution to this issue has been to compress image data prior to downlink. While this solution is application independent for lossless compression, the expected benefits of lossy compression, including higher compression ratio, necessitate several application specific trades in order to characterize preservation of critical information within the data. Current analyses via standard statistical image processing techniques following tunably lossy compression algorithms (JPEG2000, JPEG-LS) allow for detection statistics nearly identical to analyses following standard lossless compression techniques, such as Rice and PNG, even at degradation levels offering a greater than twofold increase in compression ratio. Ongoing efforts focus on repeating the analysis for other tunably lossy compression techniques while also assessing the relative computational burden of each algorithm. Current results suggest that lossy compression techniques can preserve critical information in fast-framing IR data while either significantly reducing downlink bandwidth requirements or significantly increasing the usable focal plane array window size.

  1. [Lossless compression of hyperspectral image for space-borne application].

    PubMed

    Li, Jin; Jin, Long-xu; Li, Guo-ning

    2012-08-01

    Whole-image lossless compression algorithms for hyperspectral data based on prediction, transforms, vector quantization, or their combinations are difficult to implement in hardware, offer low compression ratios, and are time consuming. To address these issues, a hyperspectral image lossless compression algorithm for space-borne application is proposed in the present paper. Intra-band prediction with a median predictor is used only for the first band image along the spectral direction, and inter-band prediction is applied to the remaining band images. A two-step, bidirectional prediction algorithm is proposed for the inter-band prediction. In the first prediction step, the proposed bidirectional second-order predictor produces a prediction reference value, and a proposed improved LUT prediction algorithm produces four LUT prediction values; the final prediction is then obtained by comparing these against the reference. Finally, verification experiments for the proposed compression algorithm were carried out using the compression system test equipment of the XX-X space hyperspectral camera. The experimental results showed that the compression system works quickly and stably. The average compression ratio reached 3.05 bpp. Compared with traditional approaches, the proposed method improves the average compression ratio by 0.14-2.94 bpp, effectively improving the lossless compression ratio and avoiding the hardware-implementation difficulty of whole-image wavelet-based compression schemes.
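
    A simplified sketch of the prediction structure: the first band is predicted intra-band with the classic median (MED) predictor, and each later band is predicted from the previous band by simple differencing. The paper's bidirectional second-order predictor and LUT-based refinement are more elaborate than this stand-in.

    ```python
    import numpy as np

    def median_predict_residuals(band):
        """MED predictor: pred = min(a,b) if c >= max(a,b); max(a,b) if c <= min(a,b); else a+b-c."""
        band = band.astype(np.int32)
        res = np.zeros_like(band)
        rows, cols = band.shape
        for i in range(rows):
            for j in range(cols):
                a = band[i, j - 1] if j else 0        # left neighbour
                b = band[i - 1, j] if i else 0        # upper neighbour
                c = band[i - 1, j - 1] if i and j else 0
                if c >= max(a, b):
                    pred = min(a, b)
                elif c <= min(a, b):
                    pred = max(a, b)
                else:
                    pred = a + b - c
                res[i, j] = band[i, j] - pred
        return res

    def interband_residuals(cube):
        """First band: intra-band MED prediction; later bands: predict from the previous band."""
        out = [median_predict_residuals(cube[0])]
        out += [cube[k].astype(np.int32) - cube[k - 1] for k in range(1, cube.shape[0])]
        return np.stack(out)

    cube = np.random.randint(0, 4096, size=(4, 64, 64), dtype=np.int32)  # toy 12-bit cube
    residuals = interband_residuals(cube)
    print('residual magnitude (mean abs):', float(np.mean(np.abs(residuals))))
    ```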

  2. JPEG compression of stereoscopic digital images for the diagnosis of diabetic retinopathy via teleophthalmology.

    PubMed

    Baker, Chad F; Rudnisky, Christopher J; Tennant, Matthew T S; Sanghera, Paul; Hinz, Bradley J; De Leon, Alexander R; Greve, Mark D J

    2004-12-01

    Canada's vast size and remote rural communities represent a significant hurdle for successful monitoring and evaluation of diabetic retinopathy. Teleophthalmology may provide a solution to this problem. We investigated the application of Joint Photographic Experts Group (JPEG) compression to digital retinal images to determine whether JPEG compression could reduce file sizes while maintaining sufficient quality and detail to accurately diagnose diabetic retinopathy. All 20 patients with type 2 diabetes mellitus assessed at a 1-day teleophthalmology clinic in northern Alberta were enrolled in the study. Following pupil dilation, seven 30-degree fields of each fundus were digitally photographed at a resolution of 2008 x 3040 pixels and saved in uncompressed tagged image file format (TIFF). The files were then compressed by factors of approximately 55 and 113 using JPEG compression. A reviewer in Edmonton viewed all original TIFF images along with the compressed JPEG images, in random order and in a masked fashion, assessing image quality and specific diabetic retinal pathology in accordance with Early Treatment Diabetic Retinopathy Study standards. The level of diabetic retinopathy and recommendations for clinical follow-up were also recorded. Exact agreement and weighted kappa statistics, a measure of reproducibility, were calculated. Exact agreement between the compressed JPEG images and the TIFF images was high (75% to 100%) for all measured variables at both compression levels. Reproducibility was good to excellent at both compression levels for the identification of diabetic retinal abnormalities (kappa = 0.45-1), diagnosis of level of retinopathy (kappa = 0.73-1) and recommended follow-up (kappa = 0.64-1). The application of JPEG compression at ratios of 55:1 and 113:1 did not significantly interfere with the identification of specific diabetic retinal pathology, diagnosis of level of retinopathy or recommended follow-up. These results indicate that JPEG compression

  3. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You

    2016-10-01

    Because it performs encryption and compression in a single, simple procedure, compressed sensing (CS) can be utilized to encrypt and compress an image. However, differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by this difference of sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize the sparsity levels. A simple approximate evaluation method is introduced to test the sparsity uniformity. Owing to its low computational complexity and storage requirements, KCS is adopted in the second stage of encryption to encrypt and compress the scrambled and sparsely transformed image, where the small measurement matrices are constructed from a piece-wise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance and flexibility.
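
    As an illustration of the Kronecker CS measurement step only, the sketch below measures a block along its rows and columns with two small matrices; the chaotic-map construction of the measurement matrix is replaced here by Gaussian matrices, and all sizes are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 32                                    # assumed block size and measurement size
X = rng.standard_normal((n, n))                  # stand-in for the scrambled sparse image block
Phi1 = rng.standard_normal((m, n)) / np.sqrt(m)  # row-direction measurement matrix
Phi2 = rng.standard_normal((m, n)) / np.sqrt(m)  # column-direction measurement matrix

# Kronecker CS: measuring separately along the two directions ...
Y = Phi1 @ X @ Phi2.T

# ... is equivalent to one large matrix (Phi2 kron Phi1) acting on the
# column-stacked image, but needs far less storage on board.
assert np.allclose(Y.flatten(order="F"),
                   np.kron(Phi2, Phi1) @ X.flatten(order="F"))
```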

  4. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  5. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that the atypical behaviour is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm that uses simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
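
    A hedged sketch of the kind of thresholding-plus-morphology segmentation the study investigates; the Hounsfield threshold, structuring choices and library calls are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def split_skull_and_interior(ct_slice: np.ndarray, bone_hu: float = 300.0):
    """Roughly separate a CT neuro slice into skull bone and interior,
    so the two regions can be treated differently during compression."""
    bone = ct_slice > bone_hu                     # bright skull voxels (assumed threshold)
    bone = ndimage.binary_closing(bone, iterations=2)
    head = ndimage.binary_fill_holes(bone)        # skull plus everything inside it
    interior = head & ~bone                       # diagnostically relevant region
    return bone, interior
```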

  6. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  7. [Hyperspectral image compression technology research based on EZW].

    PubMed

    Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun

    2011-08-01

    With the development of hyperspectral remote sensing, hyperspectral imaging technology has been applied in aviation and spaceflight. Unlike multispectral imaging, it images the target continuously with spectral bands of nanometre-scale width, so the spectral resolution is very high. However, as the number of bands increases, the volume of spectral data becomes larger and larger, and storing and transmitting these data is a problem that must be faced. With the development of wavelet compression technology, many researchers have adopted and improved EZW in the field of image compression. The present paper applies the method to compression in the spatial dimensions of hyperspectral images, without involving compression in the spectral dimension. The hyperspectral image compression and reconstruction results are good, whether judged by the peak signal-to-noise ratio (PSNR) and spectral curves or by subjective comparison of the source and reconstructed images. The authors believe the effect would be better still if the image were first compressed in the spectral dimension and then in the spatial dimensions.

  8. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.

  9. Iliac vein compression syndrome: Clinical, imaging and pathologic findings

    PubMed Central

    Brinegar, Katelyn N; Sheth, Rahul A; Khademhosseini, Ali; Bautista, Jemianne; Oklu, Rahmi

    2015-01-01

    May-Thurner syndrome (MTS) is the pathologic compression of the left common iliac vein by the right common iliac artery, resulting in left lower extremity pain, swelling, and deep venous thrombosis. Though this syndrome was first described in 1851, there are currently no standardized criteria to establish the diagnosis of MTS. Since MTS is treated by a wide array of specialties, including interventional radiology, vascular surgery, cardiology, and vascular medicine, the need for an established diagnostic criterion is imperative in order to reduce misdiagnosis and inappropriate treatment. Although MTS has historically been diagnosed by the presence of pathologic features, the use of dynamic imaging techniques has led to a more radiologic based diagnosis. Thus, imaging plays an integral part in screening patients for MTS, and the utility of a wide array of imaging modalities has been evaluated. Here, we summarize the historical aspects of the clinical features of this syndrome. We then provide a comprehensive assessment of the literature on the efficacy of imaging tools available to diagnose MTS. Lastly, we provide clinical pearls and recommendations to aid physicians in diagnosing the syndrome through the use of provocative measures. PMID:26644823

  10. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image Forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with Digital Watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to Accounting Forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which measures the deviation between the observed probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result
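
    The first-digit test underlying this approach can be sketched as follows; the chi-square-style divergence here is only an assumed stand-in for the paper's divergence factor.

```python
import numpy as np

def benford_first_digit_test(coeffs: np.ndarray):
    """Compare the first-significant-digit distribution of (e.g. DWT)
    coefficients against Benford's Law, P(d) = log10(1 + 1/d)."""
    c = np.abs(coeffs[coeffs != 0]).astype(float)
    first = (c / 10 ** np.floor(np.log10(c))).astype(int)     # first digit, 1..9
    observed = np.bincount(first, minlength=10)[1:10] / first.size
    benford = np.log10(1 + 1 / np.arange(1, 10))
    divergence = np.sum((observed - benford) ** 2 / benford)  # assumed divergence measure
    return observed, benford, divergence
```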

  11. Support vector machines for microscopic medical images compression.

    PubMed

    Bentaouza, Chahinez Mérièm; Benyettou, Mohamed

    2014-02-01

    This study presents the compression of microscopic medical images by Support Vector Machines using machine learning. The visual cortex is the largest system in the human brain and is responsible for image processing such as compression, because the eye does not necessarily perceive all the details of an image. Medical images are a valuable means of decision support. However, they amount to a large number of images per examination that must be transmitted over a network or stored for several years under the laws imposed by each country. To apply the reasoning of biological intelligence, this study uses Support Vector Machines for compression to reduce the pixels of medical images in order to transmit data in less time and store information in less space. The results obtained with this method are satisfactory for compression, although the computation time must be improved.

  12. Designing robust sensing matrix for image compression.

    PubMed

    Li, Gang; Li, Xiao; Li, Sheng; Bai, Huang; Jiang, Qianru; He, Xiongxiong

    2015-12-01

    This paper deals with designing the sensing matrix for compressive sensing systems. Traditionally, the optimal sensing matrix is designed so that the Gram of the equivalent dictionary is as close as possible to a target Gram with small mutual coherence. A novel design strategy is proposed in which, unlike the traditional approaches, the measure considers the mutual coherence behavior of the equivalent dictionary as well as the sparse representation errors of the signals. The optimal sensing matrix is defined as the one that minimizes this measure and hence is expected to be more robust against sparse representation errors. A closed-form solution is derived for the optimal sensing matrix with a given target Gram. An alternating minimization-based algorithm is also proposed for addressing the same problem with the target Gram searched within a set of relaxed equiangular tight frame Grams. Experiments are carried out and the results show that the sensing matrix obtained using the proposed approach outperforms existing ones that use a fixed dictionary, in terms of signal reconstruction accuracy for synthetic data and peak signal-to-noise ratio for real images.
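
    A small sketch of the quantity at the heart of this design problem, the mutual coherence of the equivalent dictionary D = Phi Psi (names are illustrative; the paper's design minimizes a combined measure, not coherence alone).

```python
import numpy as np

def mutual_coherence(Phi: np.ndarray, Psi: np.ndarray) -> float:
    """Largest absolute normalized inner product between distinct columns of
    the equivalent dictionary D = Phi @ Psi; small values are desirable."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = D.T @ D                                       # Gram matrix
    return np.abs(G - np.eye(G.shape[0])).max()
```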

  13. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.

  14. Discrete-cosine-transform-based image compression applied to dermatology

    NASA Astrophysics Data System (ADS)

    Cookson, John P.; Sneiderman, Charles; Rivera, Christopher

    1991-05-01

    The research reported in this paper concerns an evaluation of the impact of compression on the quality of digitized color dermatologic images. 35 mm slides of four morphologic types of skin lesions were captured at 1000 pixels per inch (ppi) in 24 bit RGB color, to give an approximately 1K x 1K image. The discrete cosine transform (DCT) algorithm was applied to the resulting image files to achieve compression ratios of about 7:1, 28:1, and 70:1. The original scans and the decompressed files were written to a 35 mm film recorder. Together with the original photo slides, the slides resulting from the digital images were evaluated in a study of morphology recognition and image quality assessment. A panel of dermatologists was asked to identify the morphology depicted and to rate the image quality of each slide. The images were shown in a progression from the highest level of compression to the original photo slides. We conclude that the use of DCT file compression yields acceptable performance for skin lesion images, since differences in morphology recognition performance do not correlate significantly with the use of original photos versus compressed versions. Additionally, image quality evaluation does not correlate significantly with level of compression.

  15. Closed-form quality measures for compressed medical images: compression noise statistics of transform coding

    NASA Astrophysics Data System (ADS)

    Li, Dunling; Loew, Murray H.

    2004-05-01

    This paper provides a theoretical foundation for the closed-form expression of model observers on compressed images. In medical applications, model observers, especially the channelized Hotelling observer, have been successfully used to predict human observer performance and to evaluate image quality for detection tasks in various backgrounds. Using model observers, however, requires knowledge of the noise statistics. This paper first identifies quantization noise as the sole distortion source in transform coding, one of the most commonly used methods for image compression. It then represents transform coding as a 1-D block-based matrix expression and derives the first and second moments and the probability density function (pdf) of the compression noise at the pixel, block and image levels. The compression noise statistics depend on the transform matrix and the quantization matrix in the transform coding algorithm. Compression noise is jointly normally distributed when the dimension of the transform (the block size) is typical and the contents of the image sets vary randomly. Moreover, this paper uses JPEG as a test example to verify the derived statistics. The simulation results show that the closed-form expression of JPEG quantization and compression noise statistics correctly predicts the statistics estimated from actual images.

  16. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741

  17. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.

  18. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  19. Lossless compression of hyperspectral images using hybrid context prediction.

    PubMed

    Liang, Yuan; Li, Jianping; Guo, Ke

    2012-03-26

    In this letter a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband predictions. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction. The hybrid context prediction is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of hybrid context prediction is coded by the arithmetic coding. We compare the proposed lossless compression algorithm with some of the existing algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), JPEG-LS. The performance of the proposed lossless compression algorithm is evaluated. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.

  20. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.

  1. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine format to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  2. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision and vast data volumes. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with Haar filters and the wavelet coefficients are quantified adaptively. In this way we can maximize compression efficiency and achieve better subjective visual quality. The algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of the algorithm with an image transmission experiment over a network.
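
    A minimal sketch of one level of the 2D Haar decomposition mentioned above; the smoothing step, adaptive quantization and zerotree coding of the paper are omitted.

```python
import numpy as np

def haar2d_one_level(img: np.ndarray):
    """Split an image into LL, LH, HL, HH subbands with the Haar filters."""
    x = img.astype(float)
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2]  # force even dimensions
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)          # row-wise average
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)          # row-wise difference
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```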

  3. Segmentation and thematic classification of color orthophotos over non-compressed and JPEG 2000 compressed images

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Cea, C.; Pons, X.

    2012-04-01

    Lossy compression is now increasingly used due to the enormous number of images gathered by airborne and satellite sensors. Nevertheless, the implications of these compression procedures have been scarcely assessed. Segmentation before digital image classification is also a technique increasingly used in GEOBIA (GEOgraphic Object-Based Image Analysis). This paper presents an object-oriented application for image analysis using color orthophotos (RGB bands) and a Quickbird image (RGB and a near infrared band). We use different compression levels in order to study the effects of the data loss on the segmentation-based classification results. A set of 4 color orthophotos with 1 m spatial resolution and a 4-band Quickbird satellite image with 0.7 m spatial resolution, each covering an area of about 1200 × 1200 m² (144 ha), was chosen for the experiment. Those scenes were compressed at 8 compression ratios (between 5:1 and 1000:1) using the JPEG 2000 standard. There were 7 thematic categories: dense vegetation, herbaceous, bare lands, road and asphalt areas, building areas, swimming pools and rivers (if necessary). The best category classification was obtained using a hierarchical classification algorithm over the second segmentation level. The same segmentation and classification methods were applied in order to establish a semi-automatic technique for all 40 images. To estimate the overall accuracy, a confusion matrix was calculated using a photointerpreted ground-truth map (fully covering 25% of each orthophoto). The mean accuracy over non-compressed images was 66% for the orthophotos and 72% for the Quickbird image. It is interesting to obtain this medium overall accuracy to be able to properly assess the compression effects (if the initial overall accuracy is very high, the possible positive effects of compression would not be noticeable). The first and second compression levels (up to 10:1) obtain results similar to the reference ones. Differences in the third to

  4. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most of the existing image encryption techniques bear security risks when taking a linear transform, or suffer encryption data expansion when adopting a nonlinear transformation directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with a nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaos map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and reliability of this scheme.

  5. An image compression technique for use on token ring networks

    NASA Astrophysics Data System (ADS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-12-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  6. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  7. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and use information, yet there is a large gap between the data volume of digitized multimedia information and the storage resources and network bandwidth that current hardware technology can provide. Image storage and transmission is one example of this problem. Image compression is useful when images need to be transmitted across networks in a less costly way, reducing data volume and therefore transmission time. This paper discusses the effect of image compression on face recognition systems. For compression purposes, we adopted the JPEG, JPEG2000 and JPEG XR coding standards. The face recognition algorithm studied is SIFT. In an extensive set of experiments, the results show that the system still maintains a high recognition rate under high compression ratios, and that the JPEG XR standard is superior to the other two in terms of performance and complexity.

  8. Simultaneous fusion, compression, and encryption of multiple images.

    PubMed

    Alfalou, A; Brosseau, C; Abdallah, N; Jridi, M

    2011-11-21

    We report a new spectral multiple image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method. In order to decrease the size of the multiplexed file, we suggest a compression procedure based on an adapted spectral quantization. Each frequency is encoded with an optimized number of bits according to its importance and its position in the DC domain. This fusion and compression scheme constitutes a first level of encryption. A supplementary level of encryption is realized by making use of biometric information. We consider several implementations of this analysis by experimenting with sequences of gray scale images. To quantify the performance of our method we calculate the MSE (mean squared error) and the PSNR (peak signal to noise ratio). Our results consistently improve performance compared to the well-known JPEG image compression standard and provide a viable solution for simultaneous compression and encryption of multiple images.

  9. Pre-Processor for Compression of Multispectral Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron

    2006-01-01

    A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.

  10. Dynamic CT perfusion image data compression for efficient parallel processing.

    PubMed

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A

    2016-03-01

    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.

  11. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high-speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes produced by hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development, designed to compress data in excess of 20 Msamples/sec and to support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
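
    A toy sketch of the bit-plane extraction that follows the two-dimensional transform; the flight encoder's plane ordering and entropy coding are considerably more involved, and the names below are assumptions.

```python
import numpy as np

def sign_and_bit_planes(coeffs: np.ndarray, num_bits: int = 16):
    """Split integer transform coefficients into a sign plane and magnitude
    bit planes, most significant plane first, as in embedded bit-plane coding."""
    sign = (coeffs < 0).astype(np.uint8)
    mag = np.abs(coeffs).astype(np.int64)
    planes = [((mag >> b) & 1).astype(np.uint8)
              for b in range(num_bits - 1, -1, -1)]
    return sign, planes
```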

  12. Compression of M-FISH images using 3D SPIHT

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xiong, Zixiang; Castleman, Kenneth R.

    2001-12-01

    With the recent development of the use of digital media for cytogenetic imaging applications, efficient compression techniques are highly desirable to accommodate the rapid growth of image data. This paper introduces a lossy to lossless coding technique for compression of multiplex fluorescence in situ hybridization (M-FISH) images, based on 3-D set partitioning in hierarchical trees (3-D SPIHT). Using a lifting-based integer wavelet decomposition, the 3-D SPIHT achieves both embedded coding and substantial improvement in lossless compression over the Lempel-Ziv (WinZip) coding which is the current method for archiving M-FISH images. The lossy compression performance of the 3-D SPIHT is also significantly better than that of the 2-D based JPEG-2000.

  13. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientist in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  14. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
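
    For illustration, a minimal sketch of how a quantization matrix acts on an 8x8 DCT block; the visually adapted construction of the matrix itself, which is the subject of the invention, is not shown, and the JPEG-style level shift by 128 is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block: np.ndarray, qmatrix: np.ndarray) -> np.ndarray:
    """Forward 2D DCT of an 8x8 block, then divide by the quantization matrix
    and round; larger entries in qmatrix discard more of that frequency."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.rint(coeffs / qmatrix).astype(np.int32)

def dequantize_block(qcoeffs: np.ndarray, qmatrix: np.ndarray) -> np.ndarray:
    """Inverse step: multiply back and apply the inverse 2D DCT."""
    return idctn(qcoeffs * qmatrix, norm="ortho") + 128.0
```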

  15. The Effects of Applying Breast Compression in Dynamic Contrast Material–enhanced MR Imaging

    PubMed Central

    Macura, Katarzyna J.; Kamel, Ihab R.; Bluemke, David A.; Jacobs, Michael A.

    2014-01-01

    resulted in complete loss of enhancement of nine of 210 lesions (4%). Conclusion Breast compression during biopsy affected breast lesion detection, lesion size, and dynamic contrast-enhanced MR imaging interpretation and performance. Limiting the application of breast compression is recommended, except when clinically necessary. © RSNA, 2014 Online supplemental material is available for this article. PMID:24620911

  16. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  17. Fast DPCM scheme for lossless compression of aurora spectral images

    NASA Astrophysics Data System (ADS)

    Kong, Wanqiu; Wu, Jiaji

    2016-10-01

    The aurora yields abundant information that must be stored. Aurora spectral images electronically preserve the spectral information and visual observations of the aurora over a period so that they can be studied later. These images are helpful for research on earth-solar activity and for understanding the aurora phenomenon itself. However, the images are produced at quite a high sampling frequency, which leads to a challenging transmission load. In order to solve this problem, lossless compression is required. Each frame of an aurora spectral image differs from a classical natural image and also from a hyperspectral image frame, so existing lossless compression algorithms are not directly applicable. On the other hand, the key to compression is decorrelating the pixels. We therefore exploit a DPCM-based scheme for the lossless compression, because DPCM is effective for decorrelation. Such a scheme makes use of two-dimensional redundancy in both the spatial and spectral domains with relatively low complexity. We also parallelize it for faster computation. The code is structured as nested for loops, with the outer and inner loops designed for spectral and spatial decorrelation, respectively, and the parallel version runs on a CPU platform using different numbers of cores. Experimental results show that, compared to traditional lossless compression methods, the DPCM scheme has a clear advantage in compression gain and meets the requirement of real-time transmission. The parallel version also achieves the expected computational performance with high CPU utilization.
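
    A hedged sketch of the two-dimensional (spectral, then spatial) DPCM idea, using simple first-order differences as a stand-in for the paper's predictor; the cube layout is assumed to be (bands, rows, columns).

```python
import numpy as np

def dpcm_residuals(cube: np.ndarray) -> np.ndarray:
    """Decorrelate an aurora spectral image cube: subtract the previous band,
    then subtract the left neighbour within each band."""
    x = cube.astype(np.int32)
    spectral = np.diff(x, axis=0, prepend=0)       # band-to-band differences
    return np.diff(spectral, axis=2, prepend=0)    # left-neighbour differences
```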

  18. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images that will be accessed by users across computer networks. Accessing the image data at its full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table-lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set and vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
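
    As the abstract notes, VQ decompression is a pure table lookup; a minimal sketch (block size, image size and array names are assumptions).

```python
import numpy as np

def vq_decompress(indices: np.ndarray, codebook: np.ndarray,
                  image_shape=(512, 512), block=(4, 4)) -> np.ndarray:
    """Rebuild an image from transmitted codebook indices: each index selects
    a codebook vector, which is reshaped back into an image block."""
    bh, bw = block
    cols = image_shape[1] // bw
    out = np.empty(image_shape, dtype=codebook.dtype)
    for k, idx in enumerate(indices):
        r, c = divmod(k, cols)
        out[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw] = codebook[idx].reshape(bh, bw)
    return out
```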

  19. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  20. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm

  1. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, in which we introduced the application of the Hilbert scan to a principal component analysis (PCA) algorithm with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression, applying the Hilbert scan to the APEX algorithm to improve results.

  2. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.

  3. Lossless Compression of Medical Images Using 3D Predictors.

    PubMed

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio

    2017-06-09

    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on the MRP algorithm.

  4. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
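
    A short sketch of the binary-reflected Gray-code mapping that improves the bit-plane correlation with the side information; the surrounding Wyner-Ziv machinery is not shown.

```python
import numpy as np

def to_gray(value: np.ndarray) -> np.ndarray:
    """Map non-negative integers to Gray code: adjacent values differ in one bit."""
    return value ^ (value >> 1)

def from_gray(gray: np.ndarray) -> np.ndarray:
    """Inverse mapping, folding the shifted code back in bit by bit."""
    value = gray.copy()
    mask = gray >> 1
    while mask.any():
        value ^= mask
        mask >>= 1
    return value
```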

  5. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) A one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image. (2) The output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts where the low-frequency components are compressed by arithmetic coding and the high frequency ones by an efficient minimization encoding algorithm. At decompression stage, a binary search algorithm is used to recover the original high frequency components. The technique is demonstrated by compressing 2D images up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
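
    A minimal sketch of the two-stage row-DCT / column-DST transform described above, assuming SciPy's type-II transforms are acceptable stand-ins; the quantization and two-part entropy coding stages are omitted.

```python
import numpy as np
from scipy.fft import dct, dst

def dct_dst_transform(img: np.ndarray) -> np.ndarray:
    """Apply a 1D DCT to each row, then a 1D DST to each column of the result."""
    x = img.astype(float)
    rows_dct = dct(x, type=2, norm="ortho", axis=1)      # per-row DCT
    return dst(rows_dct, type=2, norm="ortho", axis=0)   # per-column DST
```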

  6. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  7. Saliency detection in the compressed domain for adaptive image retargeting.

    PubMed

    Fang, Yuming; Chen, Zhenzhong; Lin, Weisi; Lin, Chia-Wen

    2012-09-01

    Saliency detection plays important roles in many image processing applications, such as regions of interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images on the Internet are typically stored in a compressed domain such as joint photographic experts group (JPEG), we propose a novel saliency detection model in the compressed domain in this paper. The intensity, color, and texture features of the image are extracted from discrete cosine transform (DCT) coefficients in the JPEG bit-stream. The saliency value of each DCT block is obtained based on Hausdorff distance calculation and feature map fusion. Based on the proposed saliency detection model, we further design an adaptive image retargeting algorithm in the compressed domain. The proposed image retargeting algorithm uses a multioperator scheme comprised of block-based seam carving and image scaling to resize images. A new definition of texture homogeneity is given to determine the number of block-based seams to remove. Thanks to the accurate saliency information derived directly from the compressed domain, the proposed image retargeting algorithm effectively preserves the visually important regions of images, efficiently removes the less crucial regions, and therefore significantly outperforms the relevant state-of-the-art algorithms, as demonstrated by the in-depth analysis in the extensive experiments.

  8. Comparison Of Data Compression Schemes For Medical Images

    NASA Astrophysics Data System (ADS)

    Noh, Ki H.; Jenkins, Janice M.

    1986-06-01

    Medical images acquired and stored digitally continue to pose a major problem in the area of picture archiving and transmission. The need for accurate reproduction of such images, which constitute patient medical records, and the medico-legal problem of possible loss of information have led us to examine the suitability of data compression schemes for several different medical image modalities. We have examined both reversible coding and irreversible coding as methods of image formatting and reproduction. In reversible coding we have tested run-length coding and arithmetic coding on image bit planes. In irreversible coding, we have studied transform coding, linear predictive coding, and block truncation coding and their effects on image quality versus compression ratio in several image modalities. In transform coding, we have applied the discrete Fourier transform, discrete cosine transform, discrete sine transform, and Walsh-Hadamard transform to images in which a subset of the transformed coefficients were retained and quantized. In linear predictive coding, we used a fixed-level quantizer. In the case of block truncation coding, the first and second moments were retained. Results of all types of irreversible coding for data compression were unsatisfactory in terms of reproduction of the original image. Run-length coding was useful on several bit planes of an image but not on others. Arithmetic coding was found to be completely reversible and resulted in up to a 2 to 1 compression ratio.

  9. The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Brunet, Dominique; Wang, Jiheng; Koff, David; Smolarski-Koff, Nadine; Vrscay, Edward R.; Wallace, Bill; Wang, Zhou

    2014-03-01

    Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performance of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio (CR), and the JPEG quality factor Q, based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
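
    For reference, the full-reference measures compared in the study can be sketched as follows (a single-window global SSIM is shown for brevity; the study uses the usual locally windowed SSIM):

      import numpy as np

      def mse(a, b):
          return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

      def psnr(a, b, peak=255.0):
          m = mse(a, b)
          return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

      def global_ssim(a, b, peak=255.0):
          # Single-window SSIM over the whole image (illustration only).
          a, b = a.astype(float), b.astype(float)
          c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
          cov = ((a - a.mean()) * (b - b.mean())).mean()
          num = (2 * a.mean() * b.mean() + c1) * (2 * cov + c2)
          den = (a.mean() ** 2 + b.mean() ** 2 + c1) * (a.var() + b.var() + c2)
          return num / den

      orig = np.random.randint(0, 256, (64, 64))
      recon = np.clip(orig + np.random.normal(0, 5, orig.shape), 0, 255)
      print(f"MSE={mse(orig, recon):.1f}  PSNR={psnr(orig, recon):.1f} dB  "
            f"SSIM={global_ssim(orig, recon):.3f}")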

  10. Feasibility studies of optical processing of image bandwidth compression schemes

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.

    1987-05-01

    The two research activities are included as two separate divisions of this research report. The research activities are as follows: 1. Adaptive Recursive Interpolated DPCM for image data compression (ARIDPCM). A consistent theme in the research supported under Grant AFOSR-81-0170 has been novel methods of image data compression that are suitable for implementation by optical processing. Initial investigation led to the IDPCM method of image data compression. 2. Deblurring images through the turbulent atmosphere. A common problem in astronomy is imaging through the microscale fluctuations of the atmosphere. These fluctuations limit the resolution of any object imaged by a ground-based telescope, the twinkling of stars being the most commonly observed form of this degradation. This problem also has military significance in limiting the ground-based observation of satellites in Earth orbit. As concerns about SDI arise, the observation of Soviet satellites becomes more important, and this observation is limited by atmospheric turbulence.

  11. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  12. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
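
    The core of coil compression, reducing many physical channels to a few virtual coils, can be sketched with an SVD on synthetic data (NumPy only); the per-location compression along fully sampled dimensions and the alignment step described in the paper are omitted.

      import numpy as np

      def coil_compress(data, n_virtual):
          # data: (coils x samples). Principal components of the coil dimension
          # define the virtual coils (SVD-based coil compression).
          u, _, _ = np.linalg.svd(data, full_matrices=False)
          compress = u[:, :n_virtual].conj().T          # (n_virtual x coils)
          return compress @ data, compress

      coils, samples = 32, 4096
      mix = np.random.randn(coils, 6) + 1j * np.random.randn(coils, 6)
      latent = np.random.randn(6, samples) + 1j * np.random.randn(6, samples)
      noise = 0.01 * (np.random.randn(coils, samples) + 1j * np.random.randn(coils, samples))
      data = mix @ latent + noise                       # synthetic 32-channel data

      virtual, compress = coil_compress(data, n_virtual=6)
      err = np.linalg.norm(data - compress.conj().T @ virtual) / np.linalg.norm(data)
      print(f"32 channels -> 6 virtual coils, relative residual {err:.2e}")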

  13. Optimization of wavelet decomposition for image compression and feature preservation.

    PubMed

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269 produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  14. A Novel Psychovisual Threshold on Large DCT for Image Compression

    PubMed Central

    2015-01-01

    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance to reduce the amount of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order serves as the primitive of the psychovisual threshold in image compression. This research study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the required quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images. The experimental results show that the large quantization tables derived from the psychovisual threshold produce output images that are largely free of artifacts, and that the psychovisual threshold yields better image quality at higher compression rates than standard JPEG image compression. PMID:25874257

  15. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for long coded transmit waveforms, this can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging significantly depends on the magnitude of the nonlinear responses of the bubbles and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
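
    A toy illustration of coded excitation and matched-filter pulse compression (NumPy only; parameters are illustrative and no microbubble nonlinearity is modeled): a linear chirp is transmitted, and correlating the received echo with the transmit waveform restores axial resolution.

      import numpy as np

      fs = 50e6                                    # sampling rate (Hz), illustrative
      t = np.arange(0, 4e-6, 1 / fs)
      chirp = np.sin(2 * np.pi * (2e6 * t + 0.5 * (2e6 / 4e-6) * t ** 2))  # 2-4 MHz sweep

      echo = np.zeros(2048)
      echo[700:700 + chirp.size] += 0.8 * chirp    # scatterer 1
      echo[1200:1200 + chirp.size] += 0.5 * chirp  # scatterer 2
      echo += 0.05 * np.random.randn(echo.size)

      compressed = np.correlate(echo, chirp, mode="same")   # matched filter
      print("strongest compressed echo near sample", int(np.argmax(np.abs(compressed))))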

  16. Computational complexity of object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2005-09-01

    Image compression via transform coding applied to small rectangular regions or encoding blocks appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology, called object-based compression (OBC), promises significantly improved performance via compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image, but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs between representational fidelity, computational cost, and storage requirement occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of

  17. Preprocessing and compression of Hyperspectral images captured onboard UAVs

    NASA Astrophysics Data System (ADS)

    Herrero, Rolando; Cadirola, Martin; Ingle, Vinay K.

    2015-10-01

    Advancements in image sensors and signal processing have led to the successful development of lightweight hyperspectral imaging systems that are critical to the deployment of Photometry and Remote Sensing (PaRS) capabilities in unmanned aerial vehicles (UAVs). In general, hyperspectral data cubes include a few dozen spectral bands that are extremely useful for remote sensing applications that range from detection of land vegetation to monitoring of atmospheric products derived from the processing of lower level radiance images. Because these data cubes are captured in the challenging environment of UAVs, where resources are limited, source encoding by means of compression is a fundamental mechanism that considerably improves the overall system performance and reliability. In this paper, we focus on the hyperspectral images captured by a state-of-the-art commercial hyperspectral camera and present the results of applying ultraspectral data compression to the obtained data set. Specifically, the compression scheme that we introduce integrates two stages: (1) preprocessing and (2) compression itself. The outcomes of this procedure are linear prediction coefficients and an error signal that, when encoded, results in a compressed version of the original image. In addition, the preprocessing and compression algorithms are optimized and have their time complexity analyzed to guarantee their successful deployment using low power ARM based embedded processors in the context of UAVs. Lastly, we compare the proposed architecture against other well known schemes and show how the compression scheme presented in this paper outperforms all of them by providing substantial improvement and delivering both lower compression rates and lower distortion.
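
    The linear-prediction stage can be sketched as a one-coefficient inter-band predictor (NumPy only; the actual preprocessing and prediction orders used in the paper are not reproduced here): each band is predicted from the previous one and only the residual would be encoded.

      import numpy as np

      def interband_predict(cube):
          # Predict band k from band k-1 with one least-squares gain per band.
          coeffs, residuals = [1.0], [cube[0].astype(float)]   # first band kept as-is
          for k in range(1, cube.shape[0]):
              prev = cube[k - 1].ravel().astype(float)
              cur = cube[k].ravel().astype(float)
              a = float(prev @ cur) / float(prev @ prev)
              coeffs.append(a)
              residuals.append((cur - a * prev).reshape(cube.shape[1:]))
          return coeffs, residuals

      cube = np.random.rand(3, 32, 32)
      cube[1] = 0.9 * cube[0] + 0.02 * np.random.rand(32, 32)   # correlated bands
      cube[2] = 1.1 * cube[1] + 0.02 * np.random.rand(32, 32)
      _, residuals = interband_predict(cube)
      print("residual variance per band:", [f"{np.var(r):.4f}" for r in residuals])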

  18. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performance of these techniques was compared on the basis of bit reduction and compression ratio. LZW was found to perform best and was used to develop a tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes.
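
    A minimal sketch of the embedding idea, assuming NumPy and using zlib in place of LZW for brevity: the ROI plus its hash is compressed and written into pixel LSBs (a real scheme would confine embedding to the RONI and record the payload length).

      import hashlib
      import zlib
      import numpy as np

      img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
      roi = img[96:160, 96:160]                       # region of interest (illustrative)

      payload = roi.tobytes() + hashlib.sha256(roi.tobytes()).digest()
      compressed = zlib.compress(payload)             # zlib stands in for LZW here
      bits = np.unpackbits(np.frombuffer(compressed, dtype=np.uint8))

      marked = img.copy().reshape(-1)
      # Overwrite the LSBs of the first len(bits) pixels with the watermark bits.
      # A real scheme embeds only in RONI pixels and stores the payload length.
      marked[:bits.size] = (marked[:bits.size] & 0xFE) | bits
      print(f"{len(payload)} payload bytes -> {len(compressed)} compressed bytes, "
            f"{bits.size} LSBs used of {img.size} pixels")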

  19. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performance of these techniques was compared on the basis of bit reduction and compression ratio. LZW was found to perform best and was used to develop a tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914

  20. Perceptually tuned JPEG coder for echocardiac image compression.

    PubMed

    Al-Fahoum, Amjed S; Reza, Ali M

    2004-09-01

    In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, due to Digital Imaging and Communications in Medicine (DICOM) requirements, is limited to the standard discrete cosine transform-based Joint Photographic Experts Group (JPEG) scheme. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main issue of concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting the gain factor. This goal is achieved by considering the region of interest, optimum bit allocation, human visual system constraints, and optimum coding technique. These parameters are jointly optimized to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance. The proposed approach objectively exhibits a 2.16-dB improvement in the peak signal-to-noise ratio and subjectively a 25% improvement over the most usable compression techniques.
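
    As a point of reference for how a single gain factor can steer the compression rate, the sketch below scales the standard JPEG luminance quantization table with the widely used IJG quality-factor rule; it is not the echocardiography-specific Q table developed in the paper.

      import numpy as np

      # Standard JPEG luminance quantization table (Annex K of the JPEG standard).
      Q50 = np.array([
          [16, 11, 10, 16,  24,  40,  51,  61],
          [12, 12, 14, 19,  26,  58,  60,  55],
          [14, 13, 16, 24,  40,  57,  69,  56],
          [14, 17, 22, 29,  51,  87,  80,  62],
          [18, 22, 37, 56,  68, 109, 103,  77],
          [24, 35, 55, 64,  81, 104, 113,  92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103,  99]])

      def scaled_q_table(quality):
          # IJG-style scaling: one gain/quality factor controls the compression rate.
          quality = int(np.clip(quality, 1, 100))
          scale = 5000 / quality if quality < 50 else 200 - 2 * quality
          q = np.floor((Q50 * scale + 50) / 100)
          return np.clip(q, 1, 255).astype(int)

      for q in (25, 50, 90):
          print(f"quality {q}: DC step {scaled_q_table(q)[0, 0]}")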

  1. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  2. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  3. Three-dimensional image compression with integer wavelet transforms.

    PubMed

    Bilgin, A; Zweig, G; Marcellin, M W

    2000-04-10

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  4. Three-Dimensional Image Compression With Integer Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.

    2000-04-01

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  5. Compressive sampling for time critical microwave imaging applications

    PubMed Central

    O'Halloran, Martin; McGinley, Brian; Conceicao, Raquel C.; Kilmartin, Liam; Jones, Edward; Glavin, Martin

    2014-01-01

    Across all biomedical imaging applications, there is a growing emphasis placed on reducing data acquisition and imaging times. This research explores the use of a technique known as compressive sampling or compressed sensing (CS) as an efficient way to minimise the data acquisition time for time-critical microwave imaging (MWI) applications. Where a signal exhibits sparsity in the time domain, the proposed CS implementation allows for sub-sampled acquisition in the frequency domain and consequently shorter imaging times, albeit at the expense of a slight degradation in reconstruction quality of the signals as the compression increases. This Letter focuses on ultra-wideband (UWB) radar MWI applications, where reducing acquisition time is of critical importance; therefore, a slight degradation in reconstruction quality may be acceptable. The analysis demonstrates the effectiveness and suitability of CS for UWB applications. PMID:26609368

  6. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…
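
    The vector-quantization idea itself can be sketched in software with a plain Lloyd's k-means codebook over 4x4 image blocks (NumPy only); the frequency-sensitive self-organizing update and the VLSI mapping described in the article are not reproduced here.

      import numpy as np

      def train_codebook(blocks, k=32, iters=20, seed=0):
          # Plain Lloyd's k-means; FSO adds a frequency-sensitive update rule.
          rng = np.random.default_rng(seed)
          codebook = blocks[rng.choice(len(blocks), k, replace=False)].copy()
          for _ in range(iters):
              d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
              idx = d.argmin(axis=1)
              for j in range(k):
                  members = blocks[idx == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
          return codebook, idx

      img = np.random.randint(0, 256, (64, 64))
      blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16).astype(float)
      codebook, idx = train_codebook(blocks)
      index_bits = len(blocks) * int(np.ceil(np.log2(len(codebook))))
      print(f"{img.size * 8} raw bits -> {index_bits} index bits (plus the codebook)")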

  7. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  8. Joint transform correlator using JPEG-compressed reference images

    NASA Astrophysics Data System (ADS)

    Widjaja, Joewono

    2013-06-01

    Pattern recognition using a joint transform correlator with JPEG-compressed reference images is studied. Human face and fingerprint images are used as test scenes with different spatial frequency contents. Recognition performance is measured quantitatively, taking into account the effects of illumination imbalance and the presence of noise. The feasibility of implementing the proposed JTC is verified by computer simulations and experiments.

  9. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    PubMed

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi

    2015-02-12

    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on the scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme by iterative SAR imaging and updating of the weighted autoregressive model to solve this problem. Eventually, experimental results demonstrated the validity and generality of the proposed approach.

  10. Compressive SAR Imaging with Joint Sparsity and Local Similarity Exploitation

    PubMed Central

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi

    2015-01-01

    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on the scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme by iterative SAR imaging and updating of the weighted autoregressive model to solve this problem. Eventually, experimental results demonstrated the validity and generality of the proposed approach. PMID:25686307

  11. Multiview image compression based on LDV scheme

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent

    2011-03-01

    In recent years, we have seen several different approaches to multiview compression. First, there is the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not well suited to multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between the views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed integer disparity texture. Next, we extract the non-redundant information and store it in two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Finally, we discuss the signal deformations generated by our approach.

  12. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one and two dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bits-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
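
    A stripped-down DPCM stage, assuming NumPy: each pixel is predicted from its left and upper neighbours and only the residual is kept, which typically lowers the zeroth-order entropy well below 8 bits per pixel. The context-dependent predictors, variance equalization, and 2x2 block coding of the report are omitted.

      import numpy as np

      def dpcm_residuals(img):
          # Predict each pixel as the mean of its left and upper neighbours.
          img = img.astype(np.int32)
          pred = np.zeros_like(img)
          pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2
          pred[0, 1:] = img[0, :-1]        # first row: left neighbour only
          pred[1:, 0] = img[:-1, 0]        # first column: upper neighbour only
          pred[0, 0] = 128
          return img - pred                # lossless: img = pred + residual

      def entropy_bits(x):
          _, c = np.unique(x, return_counts=True)
          p = c / c.sum()
          return float(-(p * np.log2(p)).sum())

      ramp = np.tile(np.linspace(0, 255, 64), (64, 1))
      img = np.clip(ramp + np.random.randint(0, 8, (64, 64)), 0, 255).astype(np.uint8)
      res = dpcm_residuals(img)
      print(f"raw: {entropy_bits(img):.2f} bpp, DPCM residuals: {entropy_bits(res):.2f} bpp")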

  13. Compressive spectral integral imaging using a microlens array

    NASA Astrophysics Data System (ADS)

    Feng, Weiyi; Rueda, Hoover; Fu, Chen; Qian, Chen; Arce, Gonzalo R.

    2016-05-01

    In this paper, a compressive spectral integral imaging system using a microlens array (MLA) is proposed. This system can sense the 4D spectro-volumetric information into a compressive 2D measurement image on the detector plane. In the reconstruction process, the 3D spatial information at different depths and the spectral responses of each spatial volume pixel can be obtained simultaneously. In the simulation, sensing of the 3D objects is carried out by optically recording elemental images (EIs) using a scanned pinhole camera. With the elemental images, a spectral data cube with different perspectives and depth information can be reconstructed using the TwIST algorithm in the multi-shot compressive spectral imaging framework. Then, the 3D spatial images with one-dimensional spectral information at arbitrary depths are computed using the computational integral imaging method by inversely mapping the elemental images according to geometrical optics. The simulation results verify the feasibility of the proposed system. The 3D volume images and the spectral information of the volume pixels can be successfully reconstructed at the location of the 3D objects. The proposed system can capture both 3D volumetric images and spectral information at video rate, which is valuable in biomedical imaging and chemical analysis.

  14. Improved Pediatric MR Imaging with Compressed Sensing

    PubMed Central

    Alley, Marcus T.; Hargreaves, Brian A.; Barth, Richard A.; Pauly, John M.; Lustig, Michael

    2010-01-01

    Purpose: To develop a method that combines parallel imaging and compressed sensing to enable faster and/or higher spatial resolution magnetic resonance (MR) imaging and show its feasibility in a pediatric clinical setting. Materials and Methods: Institutional review board approval was obtained for this HIPAA-compliant study, and informed consent or assent was given by subjects. A pseudorandom k-space undersampling pattern was incorporated into a three-dimensional (3D) gradient-echo sequence; aliasing then has an incoherent noiselike pattern rather than the usual coherent fold-over wrapping pattern. This k-space–sampling pattern was combined with a compressed sensing nonlinear reconstruction method that exploits the assumption of sparsity of medical images to permit reconstruction from undersampled k-space data and remove the noiselike aliasing. Thirty-four patients (15 female and 19 male patients; mean age, 8.1 years; range, 0–17 years) referred for cardiovascular, abdominal, and knee MR imaging were scanned with this 3D gradient-echo sequence at high acceleration factors. Obtained k-space data were reconstructed with both a traditional parallel imaging algorithm and the nonlinear method. Both sets of images were rated for image quality, radiologist preference, and delineation of specific structures by two radiologists. Wilcoxon and symmetry tests were performed to test the hypothesis that there was no significant difference in ratings for image quality, preference, and delineation of specific structures. Results: Compressed sensing images were preferred more often, had significantly higher image quality ratings, and greater delineation of anatomic structures (P < .001) than did images obtained with the traditional parallel reconstruction method. Conclusion: A combination of parallel imaging and compressed sensing is feasible in a clinical setting and may provide higher resolution and/or faster imaging, addressing the challenge of delineating anatomic

  15. Image data compression using cubic convolution spline interpolation.

    PubMed

    Truong, T K; Wang, L J; Reed, I S; Hsieh, W S

    2000-01-01

    A new cubic convolution spline interpolation (CCSI) for both one-dimensional (1-D) and two-dimensional (2-D) signals is developed in order to subsample signal and image compression data. The CCSI yields a very accurate algorithm for smoothing. It is also shown that this new and fast smoothing filter for CCSI can be used with the JPEG standard to design an improved JPEG encoder-decoder for a high compression ratio.
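
    For orientation, the sketch below implements classic cubic convolution interpolation in 1-D with the Keys kernel (a = -0.5), assuming NumPy; it illustrates the kind of interpolation CCSI builds on rather than the paper's spline formulation itself.

      import numpy as np

      def keys_kernel(x, a=-0.5):
          # Classic cubic convolution kernel (Keys, a = -0.5).
          x = np.abs(x)
          out = np.zeros_like(x)
          near, far = x <= 1, (x > 1) & (x < 2)
          out[near] = (a + 2) * x[near] ** 3 - (a + 3) * x[near] ** 2 + 1
          out[far] = a * x[far] ** 3 - 5 * a * x[far] ** 2 + 8 * a * x[far] - 4 * a
          return out

      def cc_interp(samples, positions):
          # Interpolate a 1-D signal at fractional positions (4-tap neighbourhood).
          n, result = len(samples), np.empty(len(positions))
          for i, p in enumerate(positions):
              base = int(np.floor(p))
              taps = np.arange(base - 1, base + 3)
              idx = np.clip(taps, 0, n - 1)
              result[i] = np.sum(samples[idx] * keys_kernel(p - taps))
          return result

      signal = np.sin(np.arange(16) / 3.0)
      upsampled = cc_interp(signal, np.arange(0, 15, 0.25))   # 4x resampling
      print(np.round(upsampled[:5], 3))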

  16. Compressive Estimation and Imaging Based on Autoregressive Models.

    PubMed

    Testa, Matteo; Magli, Enrico

    2016-11-01

    Compressed sensing (CS) is a fast and efficient way to obtain compact signal representations. Oftentimes, one wishes to extract some information from the available compressed signal. Since CS signal recovery is typically expensive from a computational point of view, it is inconvenient to first recover the signal and then extract the information. A much more effective approach consists in estimating the information directly from the signal's linear measurements. In this paper, we propose a novel framework for compressive estimation of autoregressive (AR) process parameters based on ad hoc sensing matrix construction. More specifically, we introduce a compressive least square estimator for AR(p) parameters and a specific AR(1) compressive Bayesian estimator. We exploit the proposed techniques to address two important practical problems. The first is compressive covariance estimation for Toeplitz structured covariance matrices, where we tackle the problem with a novel parametric approach based on the estimated AR parameters. The second is a block-based compressive imaging system, where we introduce an algorithm that adaptively calculates the number of measurements to be acquired for each block from a set of initial measurements based on its degree of compressibility. We show that the proposed techniques outperform the state-of-the-art methods for these two problems.

  17. Perceptual rate-distortion optimized image compression based on block compressive sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Qiao, Yuansong; Wen, Quan; Fu, Zhizhong

    2016-09-01

    The emerging compressive sensing (CS) theory provides a paradigm for image compression. Most current efforts in CS-based image compression have been focused on enhancing the objective coding efficiency. In order to achieve maximal perceptual quality under the measurement budget constraint, we propose a perceptual rate-distortion optimized (RDO) CS-based image codec in this paper. By incorporating both the human visual system characteristics and the signal sparsity into an RDO model designed for the block compressive sensing framework, the measurement allocation for each block is formulated as an optimization problem, which can be efficiently solved by the Lagrangian relaxation method. After the optimal number of measurements is determined, each block is adaptively sampled using an image-dependent measurement matrix. To make our proposed codec applicable to different scenarios, we also propose two solutions to implement the perceptual RDO measurement allocation technique: one at the encoder side and the other at the decoder side. The experimental results show that our codec outperforms other existing CS-based image codecs in terms of both objective and subjective performance. In particular, our codec can also achieve a low-complexity encoder by adopting the decoder-based solution for the perceptual RDO measurement allocation.

  18. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
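
    The residual coder can be sketched directly: map signed residuals to non-negative integers and emit a Golomb-Rice code (unary quotient plus k-bit remainder), using only shifts, masks, and additions. The parameter choice and edge-detecting predictor of FAIC are not reproduced here.

      def zigzag(v):
          # Map signed residuals to non-negative integers: 0, -1, 1, -2, 2, ...
          return (v << 1) if v >= 0 else ((-v << 1) - 1)

      def rice_encode(values, k):
          # Golomb-Rice code: unary quotient followed by a k-bit remainder.
          bits = []
          for v in values:
              u = zigzag(v)
              q, r = u >> k, u & ((1 << k) - 1)
              bits.extend([1] * q + [0])                       # unary part
              bits.extend((r >> i) & 1 for i in reversed(range(k)))
          return bits

      residuals = [0, -1, 2, 3, -2, 0, 1, -5]
      encoded = rice_encode(residuals, k=2)
      print(f"{len(residuals)} residuals -> {len(encoded)} bits")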

  19. Feature preserving compression of high resolution SAR images

    NASA Astrophysics Data System (ADS)

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing

    2006-10-01

    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in original images, especially in smooth areas and at edges with low contrast. This is known as the "smoothing effect". It becomes difficult to extract and recognize some useful image features such as points and lines. We propose a new SAR image compression algorithm that can reduce the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the different blocks. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that the quality of our algorithm at compression ratios up to 16:1 is improved significantly, and more weak information is preserved.

  20. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  1. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescent lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescent lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescent lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescent lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescent lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescent lifetime imaging of cells and microspheres.

  2. Medical image compression with embedded-wavelet transform

    NASA Astrophysics Data System (ADS)

    Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz

    1997-10-01

    The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.

  3. Automated Compression Device for Viscoelasticity Imaging

    PubMed Central

    Nabavizadeh, Alireza; Kinnick, Randall R.; Bayat, Mahdi; Amador, Carolina; Urban, Matthew W.; Alizad, Azra; Fatemi, Mostafa

    2017-01-01

    Non-invasive measurement of tissue viscoelastic properties is gaining more attention for screening and diagnostic purposes. Recently, measuring the dynamic response of tissue under a constant force has been studied for estimation of tissue viscoelastic properties in terms of retardation times. The essential part of such a test is an instrument that is capable of creating a controlled axial force and is suitable for clinical applications. Such a device should be lightweight, portable, and easy to use for patient studies to capture tissue dynamics under external stress. In this paper we present the design of an automated compression device for studying the creep response of materials with tissue-like behaviors. The device can be used to apply a ramp-and-hold force excitation for a predetermined duration of time, and it houses an ultrasound probe for monitoring the creep response of the underlying tissue. To validate the performance of the device, several creep tests were performed on tissue-mimicking phantoms and the results were compared against those from a commercial mechanical testing instrument. Using a second-order Kelvin-Voigt model and surface measurements of the forces and displacements, retardation times T1 and T2 were estimated from each test. These tests showed strong agreement between our automated compression device and the commercial mechanical testing system, with average relative errors of 2.9% and 12.4% for T1 and T2, respectively. We also present the application of the compression device to measure local retardation times for four phantoms with different sizes and stiffnesses. PMID:28113299
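
    The retardation-time estimation step can be sketched by fitting a second-order Kelvin-Voigt creep response to a (here synthetic) displacement curve, assuming SciPy; parameter values are illustrative only.

      import numpy as np
      from scipy.optimize import curve_fit

      def creep(t, c0, c1, t1, c2, t2):
          # Second-order Kelvin-Voigt creep response under a constant held force.
          return c0 + c1 * (1 - np.exp(-t / t1)) + c2 * (1 - np.exp(-t / t2))

      t = np.linspace(0, 10, 400)                         # seconds
      true_disp = creep(t, 0.2, 1.0, 0.5, 0.6, 4.0)       # synthetic displacement
      measured = true_disp + 0.01 * np.random.randn(t.size)

      popt, _ = curve_fit(creep, t, measured, p0=(0.1, 1.0, 1.0, 1.0, 5.0))
      print(f"estimated retardation times: T1 = {popt[2]:.2f} s, T2 = {popt[4]:.2f} s")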

  4. Effect of Image Linearization on Normalized Compression Distance

    NASA Astrophysics Data System (ADS)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
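
    NCD itself is only a few lines once a real compressor is available; the sketch below uses zlib and compares two linearizations of the same small synthetic image, plus a shifted version, in the spirit of the experiment.

      import zlib
      import numpy as np

      def ncd(x: bytes, y: bytes) -> float:
          # Normalized Compression Distance approximated with zlib.
          cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
          cxy = len(zlib.compress(x + y))
          return (cxy - min(cx, cy)) / max(cx, cy)

      img = (np.add.outer(np.arange(32), np.arange(32)) * 4 % 256).astype(np.uint8)
      row_major = img.tobytes()                      # one linearization
      col_major = img.T.tobytes()                    # another linearization
      shifted = np.roll(img, 3, axis=1).tobytes()    # a spatial transformation
      print(f"NCD(row-major, column-major) = {ncd(row_major, col_major):.3f}")
      print(f"NCD(row-major, shifted)      = {ncd(row_major, shifted):.3f}")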

  5. OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis

    PubMed Central

    Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David. J

    2015-01-01

    Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert-opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344

  6. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are always corrupted by noise, can cause over-smoothing or considerable distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress the noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple approach named DWCs classification is introduced first to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or affected by both. Better denoising can be realized by selectively thresholding the DWCs. In the 'local quantization' step, different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows noise suppression and preservation of flaw characteristics.

  7. A specific measurement matrix in compressive imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Fen; Wei, Ping; Ke, Jun

    2011-11-01

    Compressed sensing or compressive sampling (CS) is a new framework for simultaneous data sampling and compression proposed by Candes, Donoho, and Tao several years ago. Ever since the advent of the single-pixel camera, one of the CS applications, compressive imaging (CI, also referred to as feature-specific imaging), has aroused the interest of numerous researchers. However, it is still a challenging problem to choose a simple and efficient measurement matrix in such a hardware system, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of an N-order Hadamard matrix, and we discuss the validity of the matrix theoretically. The advantage of the matrix is its universality and easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Owing to the memory limitations of the hardware system and of the personal computer used to simulate the process, it is impossible to create such a large matrix for processing large-scale images directly. To solve this problem, a block-wise approach is introduced to process large-scale images, and the experimental results demonstrate the validity of this method.
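
    A small-scale sketch of the proposed sensing setup, assuming SciPy for the Hadamard matrix: the measurement matrix is built from the odd rows (first, third, ...) of a Hadamard matrix and a sparse test signal is recovered with a basic OMP implementation.

      import numpy as np
      from scipy.linalg import hadamard

      def omp(phi, y, sparsity):
          # Orthogonal Matching Pursuit for y = phi @ x with x sparse.
          residual, support = y.copy(), []
          x = np.zeros(phi.shape[1])
          for _ in range(sparsity):
              support.append(int(np.argmax(np.abs(phi.T @ residual))))
              sub = phi[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
              residual = y - sub @ coef
          x[support] = coef
          return x

      n = 64
      phi = hadamard(n)[::2].astype(float)     # "odd rows" (1st, 3rd, ...): 32 x 64
      x_true = np.zeros(n)
      x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]   # sparse test signal
      y = phi @ x_true
      x_hat = omp(phi, y, sparsity=3)
      print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6).tolist())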

  8. High-speed lossless compression for angiography image sequences

    NASA Astrophysics Data System (ADS)

    Kennedy, Jonathon M.; Simms, Michael; Kearney, Emma; Dowling, Anita; Fagan, Andrew; O'Hare, Neil J.

    2001-05-01

    High speed processing of large amounts of data is a requirement for many diagnostic quality medical imaging applications. A demanding example is the acquisition, storage and display of image sequences in angiography. The functional performance requirements for handling angiography data were identified. A new lossless image compression algorithm was developed, implemented in C++ for the Intel Pentium/MS-Windows environment and optimized for speed of operation. Speeds of up to 6M pixels per second for compression and 12M pixels per second for decompression were measured. This represents an improvement of up to 400% over the next best high-performance algorithm (LOCO-I) without significant reduction in compression ratio. Performance tests were carried out at St. James's Hospital using actual angiography data. Results were compared with the lossless JPEG standard and other leading methods such as JPEG-LS (LOCO-I) and the lossless wavelet approach proposed for JPEG 2000. Our new algorithm represents a significant improvement in the performance of lossless image compression technology without using specialized hardware. It has been applied successfully to image sequence decompression at video rate for angiography, one of the most challenging application areas in medical imaging.

  9. Compression through decomposition into browse and residual images

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.; Manohar, M.

    1993-01-01

    Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost-effective browsing of the image data (possibly from a remote site) and retrieval of the original image data from the data archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data are then obtained by taking the pixel-by-pixel differences between the original data and the browse image data. We then code the residual data with a form of variable length coding called diagonal coding. In our experiments, JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic Mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
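
    The browse-plus-residual decomposition is easy to sketch; below, coarse uniform quantization stands in for the JPEG/DCT browse coder (NumPy only), and keeping the residual makes the overall scheme exactly reversible.

      import numpy as np

      def make_browse(img, step=16):
          # Coarsely quantized "browse" image (stand-in for a JPEG/DCT-coded one).
          return (img.astype(np.int32) // step) * step + step // 2

      img = np.random.randint(0, 256, (64, 64))
      browse = make_browse(img)
      residual = img - browse                          # small-magnitude correction data
      assert np.array_equal(browse + residual, img)    # exact recovery from both parts
      print("residual range:", int(residual.min()), "to", int(residual.max()))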

  10. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-size hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new efficient biometric-based person verification/authentication systems. In the future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  11. Spatial exemplars and metrics for characterizing image compression transform error

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Caimi, Frank M.

    2001-12-01

    The efficient transmission and storage of digital imagery increasingly requires compression to maintain effective channel bandwidth and device capacity. Unfortunately, in applications where high compression ratios are required, lossy compression transforms tend to produce a wide variety of artifacts in decompressed images. Image quality measures (IQMs) have been published that detect global changes in image configuration resulting from the compression or decompression process. Examples include statistical and correlation-based procedures related to mean-squared error, diffusion of energy from features of interest, and spectral analysis. Additional but sparsely-reported research involves local IQMs that quantify feature distortion in terms of objective or subjective models. In this paper, a suite of spatial exemplars and evaluation procedures is introduced that can elicit and measure a wide range of spatial, statistical, or spectral distortions from an image compression transform T. By applying the test suite to the input of T, performance deficits can be highlighted in the transform's design phase, versus discovery under adverse conditions in field practice. In this study, performance analysis is concerned primarily with the effect of compression artifacts on automated target recognition (ATR) algorithm performance. For example, featural distortion can be measured using linear, curvilinear, polygonal, or elliptical features interspersed with various textures or noise-perturbed backgrounds or objects. These simulated target blobs may themselves be perturbed with various types or levels of noise, thereby facilitating measurement of statistical target-background interactions. By varying target-background contrast, resolution, noise level, and target shape, compression transforms can be stressed to isolate performance deficits. Similar techniques can be employed to test spectral, phase and boundary distortions due to decompression. Applicative examples are taken from

  12. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
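
    The core of both the neural and the conventional coder above is a learned codebook of small image blocks. A minimal NumPy sketch of block-based vector quantization is given below; a plain Lloyd/LBG-style update stands in for both the LBG codebook and the self-organizing network, whose training rule is not reproduced, and the block size, codebook size, and iteration count are illustrative.

```python
import numpy as np

def train_vq_codebook(image, block=4, codebook_size=64, iters=20, seed=0):
    """Train a block-based vector-quantization codebook (Lloyd/LBG-style).

    image: 2-D grayscale array. Returns the codebook and one index per block;
    the index stream is the compressed representation.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    blocks = (img[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codevector assignment, then centroid update.
        dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for k in range(codebook_size):
            members = blocks[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```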

  14. Spectral Ringing Artifacts in Hyperspectral Image Data Compression

    NASA Astrophysics Data System (ADS)

    Klimesh, M.; Kiely, A.; Xie, H.; Aranki, N.

    2005-02-01

    When a three-dimensional wavelet decomposition is used for compression of hyperspectral images, spectral ringing artifacts can arise, manifesting themselves as systematic biases in some reconstructed spectral bands. More generally, systematic differences in signal level in different spectral bands can hurt compression effectiveness of spatially low-pass subbands. The mechanism by which this occurs is described in the context of ICER-3D, a hyperspectral imagery extension of the ICER image compressor. Methods of mitigating or eliminating the detrimental effects of systematic band-dependent signal levels are proposed and discussed, and results are presented.

  15. Compressive Passive Millimeter Wave Imaging with Extended Depth of Field

    DTIC Science & Technology

    2012-01-01

    weapons are clearly detected in the mmW image. Recently, in [3], Mait et al. presented a computational imaging method to extend the depth-of-field of a...passive mmW imaging system. The method uses a cubic phase element in the pupil plane of the system to render system operation relatively insensitive...compressive sampling methods [4], [5] have been applied to mmW imaging which reduces the number of samples required to form an image [6], [7], [8

  16. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  17. Compressive microscopic imaging with "positive-negative" light modulation

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing

    2016-07-01

    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation of a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by an order of magnitude compared with that obtained by a conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against instability of the light source and may be applied under very weak light conditions. Its nature and the sources of noise are discussed in depth. The realization of this technique represents a significant step toward practical applications of compressive microscopic imaging in the fields of biology and materials science.
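
    The "positive-negative" scheme pairs each DMD pattern with its complement and works with the difference of the two bucket readings, which cancels slow drifts of the light source. A minimal correlation-based reconstruction sketch is given below (NumPy assumed); the actual recovery in such experiments may instead use a compressive-sensing solver, which is not shown here.

```python
import numpy as np

def differential_ghost_image(patterns, y_pos, y_neg):
    """Correlation reconstruction from 'positive-negative' single-pixel data.

    patterns : (M, H, W) array of the 'positive' DMD patterns.
    y_pos    : (M,) bucket-detector values for each positive pattern.
    y_neg    : (M,) bucket-detector values for the complementary patterns.
    The differential signal y_pos - y_neg suppresses source-intensity fluctuations.
    """
    y = np.asarray(y_pos, dtype=float) - np.asarray(y_neg, dtype=float)
    y = y - y.mean()
    p = np.asarray(patterns, dtype=float)
    p = p - p.mean(axis=0)            # zero-mean patterns (input left untouched)
    # Second-order correlation <delta_y * delta_P(x, y)> recovers the object image.
    return np.tensordot(y, p, axes=(0, 0)) / len(y)
```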

  18. Pulse-compression ghost imaging lidar via coherent detection

    NASA Astrophysics Data System (ADS)

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-01

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can easily achieve high single-pulse energy with the use of a long pulse without decreasing the range resolution, and the mechanism of coherent detection can eliminate the influence of stray light, which can dramatically improve the detection sensitivity and detection range.

  19. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can easily achieve high single-pulse energy with the use of a long pulse without decreasing the range resolution, and the mechanism of coherent detection can eliminate the influence of stray light, which helps improve the detection sensitivity and detection range.

  20. An innovative lossless compression method for discrete-color images.

    PubMed

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS, as well as binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method achieves about 90% compression in most cases for both categories (discrete-color and binary images), and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average.

  1. Integer wavelet transform for embedded lossy to lossless image compression.

    PubMed

    Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M

    2001-01-01

    The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
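
    The reversible transform discussed above can be illustrated with the integer LeGall 5/3 lifting steps, where the floor operations are exactly the rounding "noise" the paper models. A minimal one-level, 1-D sketch (NumPy, even-length signals assumed) is shown below; the inverse undoes the forward pass exactly, which is what makes lossless coding possible.

```python
import numpy as np

def iwt53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1-D).

    Assumes an even-length integer signal; the floor divisions are the
    rounding operations that act as additive noise in the lossy setting.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left_even + right_even) / 2), mirrored at the edge.
    odd -= (even + np.append(even[1:], even[-1])) >> 1
    # Update: approx = even + floor((left_detail + right_detail + 2) / 4), mirrored at the edge.
    even += (np.insert(odd[:-1], 0, odd[0]) + odd + 2) >> 2
    return even, odd

def iwt53_inverse(even, odd):
    """Exact inverse of iwt53_forward: lossless reconstruction."""
    even = even - ((np.insert(odd[:-1], 0, odd[0]) + odd + 2) >> 2)
    odd = odd + ((even + np.append(even[1:], even[-1])) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```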

  2. Target-driven selection of lossy hyperspectral image compression ratios

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason R.; McGuinness, Christopher D.

    2017-05-01

    A common problem in applying lossy compression to a hyperspectral image is predicting its effect on spectral target detection performance. Recent work has shown that light amounts of lossy compression can remove noise in hyperspectral imagery that would otherwise bias a covariance-based spectral target detection algorithm's background-normalized response to target samples. However, the detection performance of such an algorithm is a function of both the specific target of interest as well as the background, among other factors, and therefore sometimes lossy compression operating at a particular compression ratio (CR) will not negatively affect the detection of one target, while it will negatively affect the detection of another. To account for the variability in this behavior, we have developed a target-centric metric that guides the selection of a lossy compression algorithm's CR without knowledge of whether or not the targets of interest are present in an image. Further, we show that this metric is correlated with the adaptive coherence estimator's (ACE's) signal to clutter ratio when targets are present in an image.

  3. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are firstly measured with the key-controlled pseudo-random measurement matrix constructed using logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including improved random pixel exchanging technique and fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
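
    The key simplification above is that the sender and receiver only need to share the parameters of a chaotic map rather than the full measurement matrix. A minimal NumPy sketch of a logistic-map-driven measurement matrix follows; the map parameters play the role of the key, and the construction is illustrative rather than the paper's exact one.

```python
import numpy as np

def logistic_measurement_matrix(m, n, x0=0.37, mu=3.99, burn_in=1000):
    """Key-controlled pseudo-random measurement matrix for compressive sensing.

    (x0, mu) act as the secret key: only two numbers need to be distributed
    instead of an m-by-n matrix. Illustrative construction, not the paper's.
    """
    x = x0
    for _ in range(burn_in):              # discard the transient of the chaotic orbit
        x = mu * x * (1.0 - x)
    seq = np.empty(m * n)
    for i in range(m * n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    phi = (seq.reshape(m, n) - 0.5) * 2.0  # map (0, 1) values to roughly (-1, 1)
    return phi / np.sqrt(m)                # simple energy normalisation

# Measuring a sparsely represented image column s of length n: y = phi @ s
```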

  4. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform with lower memory requirement and higher compression performance is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. A new listless significance map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  5. Overview of parallel processing approaches to image and video compression

    NASA Astrophysics Data System (ADS)

    Shen, Ke; Cook, Gregory W.; Jamieson, Leah H.; Delp, Edward J., III

    1994-05-01

    In this paper we present an overview of techniques used to implement various image and video compression algorithms using parallel processing. Approaches used can largely be divided into four areas. The first is the use of special purpose architectures designed specifically for image and video compression. An example of this is the use of an array of DSP chips to implement a version of MPEG1. The second approach is the use of VLSI techniques. These include various chip sets for JPEG and MPEG1. The third approach is algorithm driven, in which the structure of the compression algorithm describes the architecture, e.g. pyramid algorithms. The fourth approach is the implementation of algorithms on high performance parallel computers. Examples of this approach are the use of a massively parallel computer such as the MasPar MP-1 or the use of a coarse-grained machine such as the Intel Touchstone Delta.

  6. Application of joint orthogonal bases in compressive sensing ghost image

    NASA Astrophysics Data System (ADS)

    Fan, Xiang; Chen, Yi; Cheng, Zheng-dong; Liang, Zheng-yu; Zhu, Bin

    2016-11-01

    Sparse decomposition is one of the core issues of compressive sensing ghost imaging. At this stage, traditional methods such as the discrete Fourier transform and the discrete cosine transform still have the problems of poor sparsity and low reconstruction accuracy. In order to solve these problems, a joint orthogonal bases transform is proposed to optimize ghost imaging. First, the principle of compressive sensing ghost imaging is introduced and it is pointed out that sparsity is related to the minimum sample data required for imaging. Then, the development and principle of joint orthogonal bases are analyzed in detail, showing that fewer nonzero coefficients are needed to reach the same identification effect as other methods, so the joint orthogonal bases transform is able to provide the sparsest representation. Finally, an experimental setup is built in order to verify the simulation results. Experimental results indicate that the PSNR of joint orthogonal bases is much higher than that of traditional methods using the same sample data in compressive sensing ghost imaging. Therefore, the joint orthogonal bases transform can realize better imaging quality with less sample data, which satisfies the system requirements of convenience and speed in ghost imaging.

  7. Compressive Hyperspectral Imaging and Anomaly Detection

    DTIC Science & Technology

    2013-03-01

    simple, yet effective method of using the spatial information to increase the accuracy of target detection. The idea is to apply TV denoising [4] to the...a zero value, and isolated false alarm pixels are usually eliminated by the TV denoising algorithm. ...Here we briefly describe the...total variation denoising model [4] we use in the above. Given an image I ∈ R^2, we solve the following L1 minimization problem to denoise the image
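
    The post-processing step mentioned in the excerpt, total-variation denoising of the per-pixel detection map, can be sketched with scikit-image's Chambolle implementation. The sketch below assumes NumPy and scikit-image are available; the weight value is illustrative and not taken from the report.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def clean_detection_map(detection_scores, weight=0.1):
    """Suppress isolated false-alarm pixels in an anomaly-detection score map.

    TV denoising keeps spatially coherent target regions while flattening
    single-pixel spikes; `weight` trades smoothness against data fidelity
    and is an illustrative value.
    """
    scores = np.asarray(detection_scores, dtype=float)
    return denoise_tv_chambolle(scores, weight=weight)
```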

  8. Efficient wavelet compression for images of arbitrary size

    NASA Astrophysics Data System (ADS)

    Murao, Kohei

    1996-10-01

    Wavelet compression for arbitrary size images is discussed. So far, wavelet compression has dealt with restricted size images, such as 2^n × 2^m. I propose practical and efficient methods of wavelet transform for arbitrary size images, i.e. a method of extension to F·2^m and a method of extension to even numbers at each decomposition. I applied them to 'Mona Lisa' with the size of 137 × 180. The two methods showed almost the same calculation time for both encoding and decoding. The encoding times were 0.83 s and 0.79 s, and the decoding times were 0.60 s and 0.57 s, respectively. The difference in bit-rates was attributed to the difference in the interpolation of the edge data of the image.

  9. Eye-Movement Tracking Using Compressed Video Images

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of realtime performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.

  10. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern image, for example, is encoded in less than 50 bytes and yet can be displayed at resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.

  11. Adaptive Compression of Multisensor Image Data

    DTIC Science & Technology

    1992-03-01

    upsample and reconstruct the subimages which are then added together to form the reconstructed image. In order to prevent distortions resulting from...smooth surfaces such as metallic or painted objects have predominantly path A reflections and that rougher surfaces such as soils and vegetation support

  12. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  13. Projection-based medical image compression for telemedicine applications.

    PubMed

    Juliet, Sujitha; Rajsingh, Elijah Blessing; Ezra, Kirubakaran

    2015-04-01

    Recent years have seen great development in the field of medical imaging and telemedicine. Despite the developments in storage and communication technologies, compression of medical data remains challenging. This paper proposes an efficient medical image compression method for telemedicine. The proposed method takes advantage of Radon transform whose basis functions are effective in representing the directional information. The periodic re-ordering of the elements of Radon projections requires minimal interpolation and preserves all of the original image pixel intensities. The dimension-reducing property allows the conversion of 2D processing task to a set of simple 1D task independently on each of the projections. The resultant Radon coefficients are then encoded using set partitioning in hierarchical trees (SPIHT) encoder. Experimental results obtained on a set of medical images demonstrate that the proposed method provides competing performance compared with conventional and state-of-the art compression methods in terms of compression ratio, peak signal-to-noise ratio (PSNR), and computational time.
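
    The dimension-reducing property described above can be sketched as follows: the image is turned into a set of 1-D Radon projections, each projection is coded independently, and the image is recovered by back-projection. The sketch below assumes a square image and NumPy plus scikit-image; a uniform quantizer stands in for the paper's per-projection SPIHT stage, so it illustrates the pipeline rather than the reported performance.

```python
import numpy as np
from skimage.transform import radon, iradon

def radon_codec_sketch(image, n_angles=180, q_step=2.0):
    """Radon-domain codec skeleton: project, code each 1-D projection, back-project.

    `q_step` (uniform quantization) is a stand-in for the SPIHT encoder applied
    to each projection in the paper; all values here are illustrative.
    """
    img = np.asarray(image, dtype=float)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=theta, circle=False)   # one 1-D projection per angle
    coded = np.round(sinogram / q_step)                # independent 1-D coding
    return iradon(coded * q_step, theta=theta, circle=False,
                  output_size=img.shape[0])
```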

  14. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  15. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  16. Structure assisted compressed sensing reconstruction of undersampled AFM images.

    PubMed

    Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben

    2017-01-01

    The use of compressed sensing in atomic force microscopy (AFM) can potentially speed up image acquisition, lower probe-specimen interaction, or enable super-resolution imaging. The idea in compressed sensing for AFM is to spatially undersample the specimen, i.e. only acquire a small fraction of the full image of it, and then use advanced computational techniques to reconstruct the remaining part of the image whenever this is possible. Our initial experiments have shown that it is possible to leverage inherent structure in acquired AFM images to improve image reconstruction. Thus, we have studied structure in the discrete cosine transform coefficients of typical AFM images. Based on this study, we propose a generic support structure model that may be used to improve the quality of the reconstructed AFM images. Furthermore, we propose a modification to the established iterative thresholding reconstruction algorithms that enables the use of our proposed structure model in the reconstruction process. Through a large set of reconstructions, the general reconstruction capability improvement achievable using our structured model is shown both quantitatively and qualitatively. Specifically, our experiments show that our proposed algorithm improves over established iterative thresholding algorithms by being able to reconstruct AFM images to a comparable quality using fewer measurements or equivalently obtaining a more detailed reconstruction for a fixed number of measurements. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. AMA Statistical Information Based Analysis of a Compressive Imaging System

    NASA Astrophysics Data System (ADS)

    Hope, D.; Prasad, S.

    Recent advances in optics and instrumentation have dramatically increased the amount of data, both spatial and spectral, that can be obtained about a target scene. The volume of the acquired data can and, in fact, often does far exceed the amount of intrinsic information present in the scene. In such cases, the large volume of data alone can impede the analysis and extraction of relevant information about the scene. One approach to overcoming this impedance mismatch between the volume of data and the intrinsic information in the scene that the data are supposed to convey is compressive sensing. Compressive sensing exploits the fact that most signals of interest, such as image scenes, possess natural correlations in their physical structure. These correlations, which can occur spatially as well as spectrally, can suggest a more natural sparse basis for compressing and representing the scene than standard pixels or voxels. A compressive sensing system attempts to acquire and encode the scene in this sparse basis, while preserving all relevant information in the scene. One criterion for assessing the content, acquisition, and processing of information in the image scene is Shannon information. This metric describes fundamental limits on encoding and reliably transmitting information about a source, such as an image scene. In this framework, successful encoding of the image requires an optimal choice of a sparse basis, while losses of information during transmission occur due to a finite system response and measurement noise. An information source can be represented by a certain class of image scenes, e.g., those that have a common morphology. The ability to associate the recorded image with the correct member of the class that produced the image depends on the amount of Shannon information in the acquired data. In this manner, one can analyze the performance of a compressive imaging system for a specific class or ensemble of image scenes. We present such an information

  18. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  19. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    with our DMD-SSI setup. Figure 5.22(a) shows the imaging target used in this experiment, which is a red chili pepper with a green stem. Figure 5.22(b)... Figure 5.25: (a) Reconstructed and reference spectral curves measured at point-1 on the pepper target. (b) Reconstructed and reference spectral curves measured at point-2 on the pepper target.

  20. 2D image compression using concurrent wavelet transform

    NASA Astrophysics Data System (ADS)

    Talukder, Kamrul Hasan; Harada, Koichi

    2011-10-01

    In recent years the wavelet transform (WT) has been widely used for image compression. As the WT is a sequential process, much time is required to transform the data. Here a new approach is presented where the transformation process is executed concurrently. As a result the procedure runs faster and the time of transformation is reduced. Multiple threads are used for the row and column transformations and the communication among threads has been managed effectively. Thus, the transformation time has been reduced significantly. The proposed system provides a better compression ratio and PSNR value with lower time complexity.
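
    A structural sketch of the concurrent row/column transform described above is given below, using Python threads and a single-level Haar transform for brevity (even image dimensions assumed). In CPython the actual speed-up depends on how much of the NumPy work releases the GIL, so this illustrates the decomposition of work across threads rather than the paper's measured gains.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def haar_rows(block):
    """Single-level Haar transform of each row of a block (even width assumed)."""
    a = (block[:, 0::2] + block[:, 1::2]) / 2.0   # approximation coefficients
    d = (block[:, 0::2] - block[:, 1::2]) / 2.0   # detail coefficients
    return np.hstack([a, d])

def concurrent_dwt(image, n_workers=4):
    """Row transform, then column transform, with the work split over threads."""
    img = np.asarray(image, dtype=float)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        row_chunks = np.array_split(img, n_workers, axis=0)
        rows_done = np.vstack(list(pool.map(haar_rows, row_chunks)))
        col_chunks = np.array_split(rows_done.T, n_workers, axis=0)
        cols_done = np.vstack(list(pool.map(haar_rows, col_chunks))).T
    return cols_done
```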

  1. Wavelet-based image compression using fixed residual value

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2000-12-01

    Wavelet-based compression is getting popular due to its promising compaction properties at low bitrates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. In contrast to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for a given data set. The subordinate pass of EZW is eliminated and replaced with fixed residual value transmission for easy implementation. This modification simplifies the coding technique and speeds up the process, while retaining the property of embeddedness.

  2. Compressive Optical Imaging Systems - Theory, Devices and Implementation

    DTIC Science & Technology

    2009-04-01

    CMOS sampling array but rather onto a DMD consisting of an array of N tiny mirrors. Each mirror corresponds to a particular pixel in x and φm and can...imaged is compressible by a compression algorithm like JPEG or JPEG2000. Since the DMD array is programmable, we can also employ test functions φ...a photodiode is higher than that of the pixel sensors in a typical CCD or CMOS array and that the fill factor of a DMD can reach 90% whereas that of

  3. Videos and images from 25 years of teaching compressible flow

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  4. Distributed imaging using an array of compressive cameras

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Shankar, Premchandra; Neifeld, Mark A.

    2009-01-01

    We describe a distributed computational imaging system that employs an array of feature specific sensors, also known as compressive imagers, to directly measure the linear projections of an object. Two different schemes for implementing these non-imaging sensors are discussed. We consider the task of object reconstruction and quantify the fidelity of reconstruction using the root mean squared error (RMSE) metric. We also study the lifetime of such a distributed sensor network. The sources of energy consumption in a distributed feature specific imaging (DFSI) system are discussed and compared with those in a distributed conventional imaging (DCI) system. A DFSI system consisting of 20 imagers collecting DCT, Hadamard, or PCA features has a lifetime of 4.8× that of the DCI system when the noise level is 20% and the reconstruction RMSE requirement is 6%. To validate the simulation results we emulate a distributed computational imaging system using an experimental setup consisting of an array of conventional cameras.

  5. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  6. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of image data sources from optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces. The JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  7. Accelerated MR diffusion tensor imaging using distributed compressed sensing.

    PubMed

    Wu, Yin; Zhu, Yan-Jie; Tang, Qiu-Yang; Zou, Chao; Liu, Wei; Dai, Rui-Bin; Liu, Xin; Wu, Ed X; Ying, Leslie; Liang, Dong

    2014-02-01

    Diffusion tensor imaging (DTI) is known to suffer from long acquisition times on the order of several minutes or even hours. Therefore, a feasible way to accelerate DTI data acquisition is highly desirable. In this article, the feasibility and efficacy of distributed compressed sensing for fast DTI is investigated by exploiting the joint sparsity prior in diffusion-weighted images. Fully sampled DTI datasets were obtained from both a simulated phantom and an experimental heart sample, with the diffusion gradient applied in six directions. The k-space data were undersampled retrospectively with acceleration factors from 2 to 6. Diffusion-weighted images were reconstructed by solving an l2-l1 norm minimization problem. Reconstruction performance with varied signal-to-noise ratio and acceleration factors was evaluated by root-mean-square error and maps of reconstructed DTI indices. The superiority of distributed compressed sensing over basic compressed sensing was confirmed with simulation, and the reconstruction accuracy was influenced by signal-to-noise ratio and acceleration factors. Experimental results demonstrate that DTI indices including fractional anisotropy, mean diffusivities, and orientation of the primary eigenvector can be obtained with high accuracy at acceleration factors up to 4. Distributed compressed sensing is shown to be able to accelerate DTI and may be used to reduce DTI acquisition time in practice. Copyright © 2013 Wiley Periodicals, Inc.

  8. Influence of Lossy Compressed DEM on Radiometric Correction for Land Cover Classification of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Moré, G.; Pesquer, L.; Blanes, I.; Serra-Sagristà, J.; Pons, X.

    2012-12-01

    World coverage Digital Elevation Models (DEM) have progressively increased their spatial resolution (e.g., ETOPO, SRTM, or Aster GDEM) and, consequently, their storage requirements. On the other hand, lossy data compression facilitates accessing, sharing and transmitting large spatial datasets in environments with limited storage. However, since lossy compression modifies the original information, rigorous studies are needed to understand its effects and consequences. The present work analyzes the influence of DEM quality, as modified by lossy compression, on the radiometric correction of remote sensing imagery, and the eventual propagation of the uncertainty into the resulting land cover classification. Radiometric correction is usually composed of two parts: atmospheric correction and topographical correction. For topographical correction, the DEM provides the altimetry information that allows modeling the incident radiation on the terrain surface (cast shadows, self shadows, etc.). To quantify the effects of DEM lossy compression on the radiometric correction, we use radiometrically corrected images for classification purposes, and compare the accuracy of two standard coding techniques for a wide range of compression ratios. The DEM has been obtained by resampling the DEM v.2 of Catalonia (ICC), originally having 15 m resolution, to the Landsat TM resolution. The Aster DEM has been used to fill the gaps beyond the administrative limits of Catalonia. The DEM has been lossy compressed with two coding standards at compression ratios 5:1, 10:1, 20:1, 100:1 and 200:1. The employed coding standards have been JPEG2000 and CCSDS-IDC; the former is an international ISO/ITU-T standard for almost any type of images, while the latter is a recommendation of the CCSDS consortium for mono-component remote sensing images. Both techniques are wavelet-based followed by an entropy-coding stage. Also, for large compression ratios, both techniques need a post processing for correctly

  9. Mechanical compression for contrasting OCT images of biotissues

    NASA Astrophysics Data System (ADS)

    Kirillin, Mikhail Y.; Argba, Pavel D.; Kamensky, Vladislav A.

    2011-06-01

    The contrasting of biotissue layers in OCT images after application of mechanical compression is discussed. The study is performed ex vivo on samples of human rectum and in vivo on the skin of human volunteers. We show that mechanical compression provides contrasting of biotissue layer boundaries due to the different mechanical properties of the layers. We show that increasing the pressure from 0 up to 0.45 N/mm2 causes a contrast increase from 1 to 10 dB in OCT imaging of human rectum ex vivo. The results of the ex vivo studies are in good agreement with Monte Carlo simulations. Application of a pressure of 0.45 N/mm2 increases the contrast of the epidermis-dermis junction in OCT images of human skin in vivo by about 10 dB.

  10. Implementation of aeronautic image compression technology on DSP

    NASA Astrophysics Data System (ADS)

    Wang, Yujing; Gao, Xueqiang; Wang, Mei

    2007-11-01

    According to the design characteristics and demands of an aeronautic image compression system, the lifting-scheme wavelet and the SPIHT algorithm were selected as the key parts of the software implementation, which is described in detail. In order to improve execution efficiency, border processing was simplified reasonably and the SPIHT (Set Partitioning in Hierarchical Trees) algorithm was also partly modified. The results showed that the selected scheme has a 0.4 dB improvement in PSNR (peak signal-to-noise ratio) compared with Shapiro's classical scheme. To improve the operating speed, the hardware system was then designed based on a DSP and many optimization measures were applied successfully. Practical tests showed that the system can meet the real-time demand with good reconstructed image quality, and it has been used in an aeronautic image compression system in practice.

  11. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve on the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
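
    Concepts (2) and (3) above boil down to a dominant pass that classifies each coefficient against a threshold that is halved between passes. A minimal sketch of that classification step is given below (plain Python, names illustrative); the subordinate pass and the arithmetic coder are not shown.

```python
def ezw_dominant_symbol(coeff, max_descendant, threshold):
    """Classify one wavelet coefficient during an EZW dominant pass.

    coeff          : the coefficient being scanned.
    max_descendant : largest |coefficient| in its subtree across finer subbands.
    threshold      : current significance threshold (halved after each pass).
    """
    if abs(coeff) >= threshold:
        return 'POS' if coeff >= 0 else 'NEG'   # significant; the sign is coded
    if max_descendant < threshold:
        return 'ZTR'   # zerotree root: the whole subtree is coded with one symbol
    return 'IZ'        # insignificant here, but a significant descendant exists
```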

  12. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  14. Single-pixel optical imaging with compressed reference intensity patterns

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2015-03-01

    Ghost imaging with a single-pixel bucket detector has attracted more and more attention due to its marked physical characteristics. However, in ghost imaging a large number of reference intensity patterns are usually required for object reconstruction, hence many applications based on ghost imaging (such as tomography and optical security) may be tedious since heavy storage or transmission is required. In this paper, we report that compressed reference intensity patterns can be used for object recovery in computational ghost imaging (with a single-pixel bucket detector), and that object verification can be further conducted. Only a small portion (such as 2.0% of the pixels) of each reference intensity pattern is used for object reconstruction, and the recovered object is verified by using a nonlinear correlation algorithm. Since statistical characteristics and the speckle averaging property are inherent in ghost imaging, sidelobes or multiple peaks can be effectively suppressed or eliminated in the nonlinear correlation outputs when random pixel positions are selected from each reference intensity pattern. Since pixel positions can be randomly selected from each 2D reference intensity pattern (such as a total of 20000 measurements), a large key space and high flexibility can be generated when the proposed method is applied to authentication-based cryptography. When compressive sensing is used to recover the object with a small number of measurements, the proposed strategy could still be feasible through further compressing the recorded data (i.e., reference intensity patterns) followed by object verification. It is expected that the proposed method not only compresses the recorded data and facilitates storage or transmission, but also builds up a novel capability (i.e., classical or quantum information verification) for ghost imaging.

  15. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the l1 and l2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by solving the resulting subproblems alternately. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
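
    The l1 half of the split problem has a closed-form solution, the soft-thresholding (shrinkage) operator, which is applied elementwise at every Split Bregman iteration while the l2 half is handled by a separate quadratic solve. A minimal NumPy sketch of that operator is shown below; it is the standard shrinkage formula, not the full reconstruction code used in the paper.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: argmin_d  t*|d|_1 + 0.5*(d - x)^2, applied elementwise.

    Inside Split Bregman, x is the current gradient-plus-Bregman variable and
    t is the regularization-to-penalty ratio; both names here are generic.
    """
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```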

  16. A novel image fusion approach based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia

    2015-11-01

    Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. Firstly, directly fusing sensing measurements may bring greater uncertainty with high reconstruction error. Secondly, using a single fusion rule may result in the problems of blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve those problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. A dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate the low-pass subbands, while an edge-retention based fusion rule is proposed to fuse the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit (CoSaMP) algorithm. Experimental results demonstrate that the fused image contains abundant detailed content and preserves the saliency structure. These results also indicate that our proposed method achieves better visual quality than the current state-of-the-art methods.

  17. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.

  18. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is an address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  19. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
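    A minimal sketch in Python of the mean-subtraction idea, assuming a spatially low-pass subband stored as a 3D array indexed (spectral plane, row, column); the per-plane means would be kept as side information for the decoder.

      import numpy as np

      def mean_subtract_subband(subband):
          """Subtract the mean of each spatial plane of a spatially low-pass
          subband; return the zero-mean planes plus the means as side information."""
          means = subband.mean(axis=(1, 2))            # one mean per spatial plane
          centered = subband - means[:, None, None]    # zero-mean planes, better suited
          return centered, means                       # to 2D-image subband coders

      # Hypothetical low-pass subband of 8 spectral planes with far-from-zero means.
      subband = np.random.default_rng(0).random((8, 32, 32)) + 5.0
      centered, means = mean_subtract_subband(subband)
      print(np.allclose(centered.mean(axis=(1, 2)), 0.0))  # True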

  1. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies illicit intent on the part of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have previously been compressed. To detect potential JPEG compression of an image, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local geometrical structure of the image. Since compression can alter the structure information, the tetrolet covering indexes may be changed if a compression has been performed on the test image. Such changes can provide valuable clues about the image's compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block differs between them. The percentages of changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, whose local minimum may indicate the potential compression. Our experimental results demonstrate the advantage of our method in detecting high-quality JPEG compression, even at the highest quality factors such as 98, 99, or 100 of standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
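    A minimal sketch in Python of the recompress-and-compare workflow behind the p-curve; a crude block gradient-orientation signature stands in for the tetrolet covering index, and the block size, quality range, and use of Pillow are illustrative assumptions.

      import io
      import numpy as np
      from PIL import Image

      def block_signature(gray, bs=4):
          """Stand-in for the tetrolet covering index: for each bs x bs block,
          record a quantized dominant gradient orientation (a crude proxy for
          local geometric structure)."""
          h, w = gray.shape
          h, w = h - h % bs, w - w % bs
          gy, gx = np.gradient(gray[:h, :w].astype(float))
          ang = np.arctan2(gy, gx)
          blocks = ang.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
          return np.round(blocks.mean(axis=(2, 3)) / (np.pi / 4)).astype(int)

      def p_curve(image_path, qualities=range(50, 101)):
          """Recompress the test image at each quality factor and record the
          fraction of blocks whose signature changes; a local minimum of this
          curve hints at a previous JPEG compression and its quality factor."""
          img = Image.open(image_path).convert("L")
          ref = block_signature(np.asarray(img))
          curve = []
          for q in qualities:
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=q)
              buf.seek(0)
              recompressed = np.asarray(Image.open(buf).convert("L"))
              curve.append((q, float(np.mean(block_signature(recompressed) != ref))))
          return curve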

  2. Compressive Sensing Image Fusion Based on Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Li, X.; Lv, J.; Jiang, S.; Zhou, H.

    2017-09-01

    To address the difficult spatial matching and large spectral distortion of traditional pixel-level image fusion algorithms, we propose a new image fusion method, called HIS-CS image fusion, that combines the HIS transformation with the recently developed theory of compressive sensing. In this algorithm, particle swarm optimization is used to select the fusion coefficient ω. In the iterative process, the fusion coefficient ω is taken as the particle, and its optimal value is obtained by optimizing the objective function. A compressive sensing weighted fusion algorithm is then applied to the remote sensing images, with the coefficient ω as the weight. The algorithm ensures an optimal fusion result with a degree of self-adaptability. To evaluate the fused images, this paper uses five index parameters: entropy, standard deviation, average gradient, degree of distortion, and peak signal-to-noise ratio. Experimental results show that the fusion quality of the proposed algorithm is better than that of traditional methods.
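    A minimal sketch in Python of a particle swarm search for a single fusion weight ω in [0, 1]; the entropy-of-the-fused-image objective, the swarm parameters, and the simple weighted fusion are illustrative assumptions, not the paper's exact objective or fusion step.

      import numpy as np

      def fused_entropy(omega, img_a, img_b):
          """Objective: first-order entropy of the weighted fusion omega*A + (1-omega)*B
          (a stand-in for the paper's fusion quality criterion)."""
          fused = np.clip(omega * img_a + (1.0 - omega) * img_b, 0, 255).astype(np.uint8)
          hist = np.bincount(fused.ravel(), minlength=256) / fused.size
          hist = hist[hist > 0]
          return float(-np.sum(hist * np.log2(hist)))

      def pso_weight(img_a, img_b, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
          """Particle swarm optimization over the scalar fusion coefficient omega."""
          rng = np.random.default_rng(0)
          pos = rng.uniform(0, 1, n_particles)
          vel = np.zeros(n_particles)
          pbest = pos.copy()
          pbest_val = np.array([fused_entropy(p, img_a, img_b) for p in pos])
          gbest = pbest[pbest_val.argmax()]
          for _ in range(iters):
              r1, r2 = rng.uniform(size=(2, n_particles))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, 0.0, 1.0)
              vals = np.array([fused_entropy(p, img_a, img_b) for p in pos])
              better = vals > pbest_val
              pbest[better], pbest_val[better] = pos[better], vals[better]
              gbest = pbest[pbest_val.argmax()]
          return float(gbest)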

  3. Region-based compression of remote sensing stereo image pairs

    NASA Astrophysics Data System (ADS)

    Yan, Ruomei; Li, Yunsong; Wu, Chengke; Wang, Keyan; Li, Shizhong

    2009-08-01

    According to the data characteristics of remote sensing stereo image pairs, a novel compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are applied for texture classification. Second, an improved ABM is used in areas with flat terrain (flat areas), while the disparity estimation, a combination of quadtree decomposition and FBM, is used in areas with alpine terrain (alpine areas). Furthermore, radiation compensation is applied in every area. Finally, the disparities, the residual image, and the reference image are compressed together by JPEG2000. The new algorithm provides a reasonable prediction in different areas according to the characteristics of the image textures, which improves the precision of the sensed image. The experimental results show that the PSNR of the proposed algorithm gains up to about 3 dB over the traditional algorithm at low or medium bit rates, and the subjective quality is clearly enhanced.

  4. Adaptive compression of remote sensing stereo image pairs

    NASA Astrophysics Data System (ADS)

    Li, Yunsong; Yan, Ruomei; Wu, Chengke; Wang, Keyan; Li, Shizhong; Wang, Yu

    2010-09-01

    According to the data characteristics of remote sensing stereo image pairs, a novel adaptive compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are applied for texture classification. Second, an improved ABM is used in the flat areas, while the disparity estimation is used in the alpine areas. Radiation compensation is applied to further improve the performance. Finally, the residual image and the reference image are compressed by JPEG2000 independently. The new algorithm provides a reasonable prediction in different areas according to the image textures, which improves the precision of the sensed image. The experimental results show that the PSNR of the proposed algorithm gains up to about 3 dB over the traditional algorithm at low or medium bit rates, and the DTM and subjective quality are also clearly enhanced.

  5. Lossless compression of stromatolite images: a biogenicity index?

    PubMed

    Corsetti, Frank A; Storrie-Lombardi, Michael C

    2003-01-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (delta(c), the log of the ratio of the compressibility of the matrix versus the target) of a putative abiotic test stromatolite is significantly less than the delta(c) of a putative biotic test stromatolite. There is a clear separation in delta(c) between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers.

  6. Lossless Compression of Stromatolite Images: A Biogenicity Index?

    NASA Astrophysics Data System (ADS)

    Corsetti, Frank A.; Storrie-Lombardi, Michael C.

    2003-12-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (δc, the log of the ratio of the compressibility of the matrix versus the target) of a putative abiotic test stromatolite is significantly less than the δc of a putative biotic test stromatolite. There is a clear separation in δc between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers.
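    A minimal sketch in Python of the compressibility-index idea; zlib is used as a generic lossless compressor and the test patches are synthetic, both illustrative assumptions in place of the imagery and codec used in the study.

      import zlib
      import numpy as np

      def compressibility(img_rgb):
          """Compressed-size / raw-size ratio of an RGB uint8 image array,
          using zlib as a generic lossless compressor."""
          raw = img_rgb.tobytes()
          return len(zlib.compress(raw, level=9)) / len(raw)

      def delta_c(matrix_img, target_img):
          """Compressibility index: log of the ratio of the compressibility of
          the sediment matrix to that of the target (stromatolite)."""
          return float(np.log(compressibility(matrix_img) / compressibility(target_img)))

      # Hypothetical matched pair: a noisy matrix patch and a laminated target patch.
      rng = np.random.default_rng(1)
      matrix_patch = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
      target_patch = np.tile(np.linspace(0, 255, 128, dtype=np.uint8)[:, None, None], (1, 128, 3))
      print(delta_c(matrix_patch, target_patch))  # > 0: the laminated target compresses better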

  7. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, along with two implementations of the approach on NASA's Massively Parallel Processor (MPP), is described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.

  8. Phase Preserving Dynamic Range Compression of Aeromagnetic Images

    NASA Astrophysics Data System (ADS)

    Kovesi, Peter

    2014-05-01

    Geoscientific images with a high dynamic range, such as aeromagnetic images, are difficult to present in a manner that facilitates interpretation. The data values may range over 20000 nanoteslas or more, but a computer monitor is typically designed to present input data constrained to 8-bit values. Standard photographic high dynamic range tonemapping algorithms may be unsuitable or inapplicable to such data because they have been developed on the basis of the statistics of natural images, feature types found in natural images, and models of the human visual system. These algorithms may also require image segmentation and/or decomposition of the image into base and detail layers, but these operations may have no meaning for geoscientific images. For geological and geophysical data, high dynamic range images are often dealt with via histogram equalization. The problem with this approach is that the contrast stretch or compression applied to data values depends on how frequently the data values occur in the image and not on the magnitude of any data features themselves. This can lead to inappropriate distortions in the output. Other approaches include use of the Automatic Gain Control algorithm developed by Rajagopalan, or the tilt derivative. A difficulty with these approaches is that the signal can be over-normalized and perception of the overall variations in the signal can be lost. To overcome these problems a method is presented that compresses the dynamic range of an image while preserving local features. It makes no assumptions about the formation of the image, the feature types it contains, or its range of values. Thus, unlike algorithms designed for photographic images, this algorithm can be applied to a wide range of scientific images. The method is based on extracting local phase and amplitude values across the image using monogenic filters. The dynamic range of the image can then be reduced by applying a range reducing function to the amplitude values, for

  9. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

    A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.

  10. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  11. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  12. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  13. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal to noise ratio) and classification capability.

  14. Computational ghost imaging: advanced compressive sensing (CS) technique

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Astola, Jaakko

    2012-10-01

    A novel efficient variational technique for speckle imaging is discussed. It is developed with the main motivation to filter noise, to wipe out the typical diffraction artifacts and to achieve crisp imaging. A sparse modeling is used for the wave field at the object plane in order to overcome the loss of information due to the ill-posedness of forward propagation image formation operators. This flexible and data-adaptive modeling relies on recent progress in sparse imaging and compressive sensing (CS). In line with the general formalism of CS, we develop an original approach to wave field reconstruction. In this paper we demonstrate this technique in its application to computational amplitude ghost imaging (GI), where a spatial light modulator (SLM) is used to generate a speckle wave field sensing a transmitted mask object.

  15. Evaluation of color-embedded wavelet image compression techniques

    NASA Astrophysics Data System (ADS)

    Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III

    1998-12-01

    Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.

  16. Real-Time Digital Compression Of Television Image Data

    NASA Technical Reports Server (NTRS)

    Barnes, Scott P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1990-01-01

    Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements a predictive coding process in which each picture element (pixel) is predicted from the values of prior neighboring pixels, and the coded transmission expresses the difference between the actual and predicted values. Combines a differential pulse-code modulation process with a nonlinear, nonadaptive predictor, a nonuniform quantizer, and a multilevel Huffman encoder.
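    A minimal sketch in Python of differential pulse-code modulation on one image row; the previous-pixel predictor, the quantizer levels, and the omission of the Huffman stage are illustrative simplifications of the nonlinear predictor and entropy coder described above.

      import numpy as np

      # Illustrative nonuniform quantizer levels for prediction errors
      # (finer near zero, coarser for large errors).
      LEVELS = np.array([-48, -24, -12, -6, -2, 0, 2, 6, 12, 24, 48])

      def dpcm_encode(row):
          """DPCM-encode one row: predict each pixel from the previously
          reconstructed pixel and emit the index of the quantized prediction
          error (the index stream would then be Huffman-coded)."""
          codes, recon = [], []
          prev = 128                                           # fixed initial prediction
          for pix in row.astype(int):
              err = pix - prev
              idx = int(np.argmin(np.abs(LEVELS - err)))       # nearest quantizer level
              codes.append(idx)
              prev = int(np.clip(prev + LEVELS[idx], 0, 255))  # decoder-side reconstruction
              recon.append(prev)
          return codes, np.array(recon, dtype=np.uint8)

      row = np.array([120, 121, 123, 130, 180, 182, 181, 90], dtype=np.uint8)
      codes, recon = dpcm_encode(row)
      print(codes)
      print(recon)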

  18. Processing and image compression based on the platform Arduino

    NASA Astrophysics Data System (ADS)

    Lazar, Jan; Kostolanyova, Katerina; Bradac, Vladimir

    2017-07-01

    This paper focuses on the use of a minicomputer built on the Arduino platform for the purposes of image compression and decompression. Arduino is used as a control element that integrates the proposed algorithms. The solution is unusual in that no commonly available low-cost, low-performance platform handles such demanding graphical operations while remaining open to later extension; because Arduino is open source, it enables further extensions and adjustments.

  19. Lossless compression of images from China-Brazil Earth Resources Satellite

    NASA Astrophysics Data System (ADS)

    Pinho, Marcelo S.

    2011-11-01

    The aim of this work is to evaluate the performance of different lossless compression schemes when applied to images collected by the satellite CBERS-2B. This satellite is the third one constructed under the CBERS Program (China-Brazil Earth Resources Satellite) and it was launched in 2007. This work focuses on the compression of images from the CCD camera, which has a resolution of 20 x 20 meters and five bands. CBERS-2B transmits the CCD data in real time, with no compression, and does not store even a small part of the images. In fact, this satellite can work in this way because the bit rate produced by the CCD is smaller than the transmitter bit rate. However, the resolution and the number of spectral bands of imaging systems are increasing, and the constraints on power and bandwidth bound the communication capacity of a satellite channel. Therefore, the communication systems of future satellites must be reconsidered. There are many algorithms for image compression described in the literature and some of them have already been used in remote sensing satellites (RSS). When the bit rate produced by the imaging system is much higher than the transmitter bit rate, a lossy encoder must be used. However, when the gap between the bit rates is not so high, a lossless procedure can be an interesting choice. This work evaluates JPEG-LS, CALIC, SPIHT, JPEG2000, the CCSDS recommendation, H.264, and JPEG-XR when they are used to compress images from the CCD camera of CBERS-2B with no loss. The algorithms are applied to a set of twenty images of 5,812 x 5,812 pixels, processed in blocks of 128 x 128, 256 x 256, 512 x 512, and 1,024 x 1,024 pixels. The tests are done independently on each original band and also on five transformed bands obtained by a procedure which decorrelates them. In general, the results have shown that algorithms based on predictive schemes (CALIC and JPEG-LS) applied to the transformed decorrelated bands produce a better performance in the mean
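    A minimal sketch in Python of why spectral decorrelation helps lossless coding: a simple previous-band difference stands in for the decorrelation procedure, zlib stands in for the evaluated codecs, and the five-band test cube is synthetic.

      import zlib
      import numpy as np

      def compressed_size(band):
          """Losslessly compressed size (bytes) of one band, using zlib as a proxy codec."""
          return len(zlib.compress(band.tobytes(), level=9))

      def decorrelate_bands(cube):
          """Illustrative spectral decorrelation: keep band 0 and replace every
          other band by its difference from the previous band."""
          out = cube.astype(np.int16).copy()
          out[1:] -= cube[:-1].astype(np.int16)
          return out

      # Hypothetical five-band cube with strong inter-band correlation.
      rng = np.random.default_rng(0)
      base = rng.integers(0, 200, (256, 256), dtype=np.int16)
      cube = np.stack([base + rng.integers(0, 20, (256, 256), dtype=np.int16) for _ in range(5)])

      print(sum(compressed_size(b) for b in cube))                     # original bands
      print(sum(compressed_size(b) for b in decorrelate_bands(cube)))  # decorrelated bands (smaller)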

  20. A geometric approach to multi-view compressive imaging

    NASA Astrophysics Data System (ADS)

    Park, Jae Young; Wakin, Michael B.

    2012-12-01

    In this paper, we consider multi-view imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing encoding schemes wherein each sensor directly and independently compresses its image using randomized measurements. After these measurements and also perhaps the camera positions are transmitted to a central node, the key to an accurate reconstruction is to fully exploit the joint correlation among the signal ensemble. To capture such correlations, we propose a geometric modeling framework in which the image ensemble is treated as a sampling of points from a low-dimensional manifold in the ambient signal space. Building on results that guarantee stable embeddings of manifolds under random measurements, we propose a "manifold lifting" algorithm for recovering the ensemble that can operate even without knowledge of the camera positions. We divide our discussion into two scenarios, the near-field and far-field cases, and describe how the manifold lifting algorithm could be applied to these scenarios. At the end of this paper, we present an in-depth case study of a far-field imaging scenario, where the aim is to reconstruct an ensemble of satellite images taken from different positions with limited but overlapping fields of view. In this case study, we demonstrate the impressive power of random measurements to capture single- and multi-image structure without explicitly searching for it, as the randomized measurement encoding in conjunction with the proposed manifold lifting algorithm can even outperform image-by-image transform coding.

  1. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
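    A minimal sketch in Python of a gradient-based reconstruction with a filtering step in each iteration; the soft-thresholding sparsity step and the Gaussian smoothing filter are illustrative stand-ins for the paper's regularizer and filter.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def filtered_gradient_recon(Phi, y, shape, step, lam=0.01, sigma=0.5, iters=100):
          """Iterative reconstruction of min_x 0.5*||Phi x - y||^2 + lam*||x||_1 with
          an extra spatial filtering step applied to the image estimate each iteration."""
          x = np.zeros(shape)
          for _ in range(iters):
              grad = Phi.T @ (Phi @ x.ravel() - y)       # gradient of the data term
              x = x - step * grad.reshape(shape)         # gradient descent step
              x = soft_threshold(x, step * lam)          # sparsity-promoting step
              x = gaussian_filter(x, sigma)              # filtering step of the algorithm
          return x

      # Tiny example: recover a 16 x 16 image from 128 random measurements.
      rng = np.random.default_rng(0)
      img = np.zeros((16, 16)); img[4:8, 6:10] = 1.0
      Phi = rng.standard_normal((128, 256)) / np.sqrt(128)
      y = Phi @ img.ravel()
      step = 1.0 / np.linalg.norm(Phi, 2) ** 2           # safe step size
      x_hat = filtered_gradient_recon(Phi, y, img.shape, step)
      print(np.linalg.norm(x_hat - img) / np.linalg.norm(img))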

  2. High-resolution three-dimensional imaging with compress sensing

    NASA Astrophysics Data System (ADS)

    Wang, Jingyi; Ke, Jun

    2016-10-01

    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly reduce the amount of data acquired and relax the requirements on the detection device. To apply the compressive sensing idea, a spatial light modulator is used to modulate the pulsed light source, and a photodetector is used to receive the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we shift the SLM by half a pixel at a time, so that the position illuminated by the modulated light changes accordingly. We repeat the shift in four different directions, obtaining four low-resolution depth maps with different details of the same planar scene. If we use all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the data-analysis burden and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.

  3. Block-based adaptive lifting schemes for multiband image compression

    NASA Astrophysics Data System (ADS)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2004-02-01

    In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture the spatial and the spectral redundancies simultaneously. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method with respect to the non-adaptive one.

  4. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
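    A minimal sketch in Python of entropy-driven coder selection: first-order entropy is computed per region and a rule base picks a coding technique; the thresholds and the named coders are hypothetical placeholders for the rule base described above.

      import numpy as np

      def first_order_entropy(region):
          """First-order entropy (bits/pixel) of an 8-bit image region."""
          hist = np.bincount(region.ravel(), minlength=256) / region.size
          hist = hist[hist > 0]
          return float(-np.sum(hist * np.log2(hist)))

      def select_coder(region):
          """Illustrative rule base mapping region entropy to a coding technique;
          thresholds and coder names are hypothetical."""
          h = first_order_entropy(region)
          if h < 3.0:
              return "run-length + Huffman"
          if h < 6.0:
              return "DPCM + arithmetic coding"
          return "high-order arithmetic coding"

      region = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
      print(first_order_entropy(region), select_coder(region))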

  5. Compressive imaging system design using task-specific information.

    PubMed

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A

    2008-09-01

    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.

  6. Evaluation of JPEG and wavelet compression of body CT images for direct digital teleradiologic transmission.

    PubMed

    Kalyanpur, A; Neklesa, V P; Taylor, C R; Daftary, A R; Brink, J A

    2000-12-01

    To determine acceptable levels of JPEG (Joint Photographic Experts Group) and wavelet compression for teleradiologic transmission of body computed tomographic (CT) images. A digital test pattern (Society of Motion Picture and Television Engineers, 512 x 512 matrix) was transmitted after JPEG or wavelet compression by using point-to-point and Web-based teleradiology, respectively. Lossless, 10:1 lossy, and 20:1 lossy ratios were tested. Images were evaluated for high- and low-contrast resolution, sensitivity to small signal differences, and misregistration artifacts. Three independent observers who were blinded to the compression scheme evaluated these image quality measures in 20 clinical cases with similar levels of compression. High-contrast resolution was not diminished with any tested level of JPEG or wavelet compression. With JPEG compression, low-contrast resolution was not lost with 10:1 lossy compression but was lost at 3% modulation with 20:1 lossy compression. With wavelet compression, there was loss of 1% modulation with 10:1 lossy compression and loss of 5% modulation with 20:1 lossy compression. Sensitivity to small signal differences (5% and 95% of the maximal signal) diminished only with 20:1 lossy wavelet compression. With 10:1 lossy compression, misregistration artifacts were mild and were equivalent with JPEG and wavelet compression. Qualitative clinical findings supported these findings. Lossy 10:1 compression is suitable for on-call electronic transmission of body CT images as long as original images are subsequently reviewed.

  7. Fast Second Degree Total Variation Method for Image Compressive Sensing

    PubMed Central

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, an equivalent formulation of the HDTV2 functional is derived: a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using this equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we provide a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed. PMID:26361008
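    A minimal sketch in Python of a generic forward-backward splitting iteration for min_x 0.5*||Ax - y||^2 + R(x); an L1 soft-thresholding prox stands in for the HDTV2 proximal step, and the problem sizes are illustrative.

      import numpy as np

      def fbs(A, y, prox, gamma, iters=300):
          """Forward-backward splitting: a forward (gradient) step on the quadratic
          data term followed by a backward (proximal) step on the regularizer."""
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = x - gamma * (A.T @ (A @ x - y))   # forward step
              x = prox(x, gamma)                    # backward step: prox of gamma*R
          return x

      # L1 prox (soft thresholding) as an illustrative stand-in for the HDTV2 prox.
      def soft(v, gamma, lam=0.05):
          return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200)) / np.sqrt(60)
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
      y = A @ x_true
      gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # step below 1/L for convergence
      x_hat = fbs(A, y, soft, gamma)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))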

  8. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  9. Compressed sensing sparse reconstruction for coherent field imaging

    NASA Astrophysics Data System (ADS)

    Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen

    2016-04-01

    Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as little as 10% of traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).

  10. Block-based image compression with parameter-assistant inpainting.

    PubMed

    Xiong, Zhiwei; Sun, Xiaoyan; Wu, Feng

    2010-06-01

    This correspondence presents an image compression approach that integrates our proposed parameter-assistant inpainting (PAI) to exploit visual redundancy in color images. In this scheme, we study different distributions of image regions and represent them with a model class. Based on that, an input image at the encoder side is divided into featured and non-featured regions at block level. The featured blocks fitting the predefined model class are coded by a few parameters, whereas the non-featured blocks are coded traditionally. At the decoder side, the featured regions are restored through PAI relying on both delivered parameters and surrounding information. Experimental results show that our method outperforms JPEG in featured regions by an average bit-rate saving of 76% at similar perceptual quality levels.

  11. Compound image compression for real-time computer screen image transmission.

    PubMed

    Lin, Tony; Hao, Pengwei

    2005-08-01

    We present a compound image compression algorithm for real-time applications of computer screen image transmission. It is called shape primitive extraction and coding (SPEC). Real-time image transmission requires that the compression algorithm should not only achieve high compression ratio, but also have low complexity and provide excellent visual quality. SPEC first segments a compound image into text/graphics pixels and pictorial pixels, and then compresses the text/graphics pixels with a new lossless coding algorithm and the pictorial pixels with the standard lossy JPEG, respectively. The segmentation first classifies image blocks into picture and text/graphics blocks by thresholding the number of colors of each block, then extracts shape primitives of text/graphics from picture blocks. Dynamic color palette that tracks recent text/graphics colors is used to separate small shape primitives of text/graphics from pictorial pixels. Shape primitives are also extracted from text/graphics blocks. All shape primitives from both block types are losslessly compressed by using a combined shape-based and palette-based coding algorithm. Then, the losslessly coded bitstream is fed into a LZW coder. Experimental results show that the SPEC has very low complexity and provides visually lossless quality while keeping competitive compression ratios.

  12. Compressive sampling in passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Gopalsami, N.; Elmer, T. W.; Liao, S.; Ahern, R.; Heifetz, A.; Raptis, A. C.; Luessi, M.; Babacan, D.; Katsaggelos, A. K.

    2011-05-01

    We present a Hadamard transform based imaging technique and have implemented it on a single-pixel passive millimeter-wave imager in the 146-154 GHz range. The imaging arrangement uses a set of Hadamard transform masks of size p x q at the image plane of a lens, and the transformed image signals are focused and collected by a horn antenna of the imager. The cyclic nature of the Hadamard matrix allows the use of a single extended 2-D Hadamard mask of size (2p-1) x (2q-1) to expose a p x q submask for each acquisition by raster scanning the large mask one pixel at a time. A total of N = pq acquisitions can be made with a complete scan. The original p x q image may be reconstructed by a simple matrix operation. Instead of the full N acquisitions, we can use a subset of the masks for compressive sensing. In this regard, we have developed a relaxation technique that recovers the full Hadamard measurement space from sub-sampled Hadamard acquisitions. We have reconstructed high-fidelity images with 1/9 of the full Hadamard acquisitions, thus reducing the image acquisition time by a factor of 9.
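    A minimal sketch in Python of full Hadamard-mask acquisition and the simple matrix reconstruction; ±1 Hadamard patterns and an 8 x 8 synthetic scene are illustrative assumptions (the instrument uses binary masks and a relaxation step for sub-sampled acquisitions, omitted here).

      import numpy as np
      from scipy.linalg import hadamard

      p = q = 8
      N = p * q
      H = hadamard(N)                                  # +/-1 patterns, one per acquisition

      scene = np.zeros((p, q))
      scene[2:6, 3:5] = 1.0                            # hypothetical p x q scene
      x = scene.ravel()

      y = H @ x                                        # one detector reading per mask
      x_rec = (H.T @ y) / N                            # simple inverse: H^{-1} = H^T / N
      print(np.allclose(x_rec.reshape(p, q), scene))   # True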

  13. Compressive sensing for direct millimeter-wave holographic imaging.

    PubMed

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang

    2015-04-10

    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following the scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue applicability to near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variation, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear array and two-dimensional array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate.

  14. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  15. Is JPEG compression of videomicroscopic images compatible with telediagnosis? Comparison between diagnostic performance and pattern recognition on uncompressed TIFF images and JPEG compressed ones.

    PubMed

    Seidenari, Stefania; Pellacani, Giovanni; Righi, Elena; Di Nardo, Anna

    2004-01-01

    Early melanoma diagnosis is an important goal for dermatologists. Polarized light systems are increasingly employed for dermatoscopic diagnosis of melanocytic lesions. For the purpose of teledermoscopy, whose importance is increasingly growing for consultation and teaching purposes, it is of utmost importance to establish whether, after compression, polarized light images maintain their informativeness. The aim of our study was to check the effects of compression on melanocytic lesion images acquired by means of a digital videomicroscope on the identification of morphological details of the image and on diagnostic accuracy. A total of 170 50-fold-magnified pigmented skin lesion images, acquired in Tagged Image File Format (TIFF) by a digital videomicroscope, were compressed using Joint Photographic Experts Group (JPEG) algorithms (compression factor 30). Two experts in videomicroscopy evaluated both original and compressed images twice by describing single lesion features and expressing a diagnosis. Reproducibility in the assessment of dermoscopic parameters and observer performance were studied by kappa statistics and Receiver Operating Characteristic (ROC) analysis. Both intra- and interobserver reproducibility in the assessment of morphological details were higher when TIFF images were considered, indicating a better image quality. Nonetheless, there was no significant difference in the diagnostic accuracy between uncompressed images and compressed ones, although the intraobserver reproducibility in the diagnostic judgement was higher for uncompressed images. Despite loss in image details, factor 30 compressed videomicroscopic images enable a good diagnostic accuracy.

  16. Objective Quality Assessment and Perceptual Compression of Screen Content Images.

    PubMed

    Wang, Shiqi; Gu, Ke; Zeng, Kai; Wang, Zhou; Lin, Weisi

    2016-05-25

    Screen content image (SCI) has recently emerged as an active topic due to the rapidly increasing demand in many graphically rich services such as wireless displays and virtual desktops. Image quality models play an important role in measuring and optimizing user experience of SCI compression and transmission systems, but are currently lacking. SCIs are often composed of pictorial regions and computer generated textual/graphical content, which exhibit different statistical properties that often lead to different viewer behaviors. Inspired by this, we propose an objective quality assessment approach for SCIs that incorporates both visual field adaptation and information content weighting into structural similarity based local quality assessment. Furthermore, we develop a perceptual screen content coding scheme based on the newly proposed quality assessment measure, targeting at further improving the SCI compression performance. Experimental results show that the proposed quality assessment method not only better predicts the perceptual quality of SCIs, but also demonstrates great potentials in the design of perceptually optimal SCI compression schemes.

  17. Progressive image data compression with adaptive scale-space quantization

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur

    1999-12-01

    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization and coding in a progressive fashion. Efficient coders with an embedded code form and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels and the quantization scheme in the SPIHT algorithm. Additionally, we present the result of the best filter bank selection. The most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was observed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB in comparison to SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, but sometimes the most efficient solution must be found iteratively. The final results are competitive with the most efficient wavelet coders.

  18. Development of a compressive sampling hyperspectral imager prototype

    NASA Astrophysics Data System (ADS)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the chance to sample signals at a lower rate than traditional sampling theory requires. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the contrary, the main CS disadvantage is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2-dim detector arrays of high frame rate. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  19. A novel image compression-encryption hybrid algorithm based on the analysis sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Xu, Biao; Zhou, Nanrun

    2017-06-01

    Recent advances in compressive sensing theory have been invoked for image compression-encryption based on the synthesis sparse model. In this paper we concentrate on an alternative sparse representation model, the analysis sparse model, to propose a novel image compression-encryption hybrid algorithm. The analysis sparse representation of the original image is obtained with an overcomplete fixed dictionary whose atom order is scrambled, so the sparse representation can be considered an encrypted version of the image. Moreover, the sparse representation is compressed to reduce its dimension and simultaneously re-encrypted by compressive sensing. To enhance the security of the algorithm, a pixel-scrambling method is employed to re-encrypt the measurements of the compressive sensing. Various simulation results verify that the proposed image compression-encryption hybrid algorithm provides considerable compression performance with good security.

  20. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  1. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI), where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such a limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant changes in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
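
    A schematic sketch of how a previously estimated (and, in the proposed technique, motion-compensated) support can be folded into the reconstruction, assuming the common support-weighted l1 formulation; the function and parameter names are illustrative, not the authors' code.

    ```python
    import numpy as np

    def ista_known_support(phi, y, known_support, lam=0.05, n_iter=500):
        """ISTA variant for CS with partially known support: coefficients inside the
        known support (e.g. taken from the motion-compensated previous frame) are not
        penalized, while coefficients outside it carry the full l1 weight."""
        n = phi.shape[1]
        weights = np.full(n, lam)
        weights[np.asarray(known_support)] = 0.0
        x_hat = np.zeros(n)
        step = 1.0 / np.linalg.norm(phi, 2) ** 2
        for _ in range(n_iter):
            v = x_hat - step * (phi.T @ (phi @ x_hat - y))
            x_hat = np.sign(v) * np.maximum(np.abs(v) - step * weights, 0.0)
        return x_hat
    ```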

  2. Turning Diffusion-based Image Colorization into Efficient Color Compression.

    PubMed

    Peter, Pascal; Kaufhold, Lilli; Weickert, Joachim

    2016-11-10

    The work of Levin et al. (2004) popularised stroke-based methods that add color to gray value images according to a small amount of user-specified color samples. Even though such reconstructions from sparse data suggest a possible use in compression, only a few attempts have been made so far in this direction. Diffusion-based compression methods pursue a similar idea: they store only a few image pixels and inpaint the missing regions. Despite this close relation and a lack of diffusion-based color codecs, colorization ideas have so far only been integrated into transform-based approaches such as JPEG. We address this missing link with two contributions. First, we show the relation between the discrete colorization of Levin et al. and continuous diffusion-based inpainting in the YCbCr color space, which decomposes the image into a luma (brightness) channel and two chroma (color) channels. Our luma-guided diffusion framework steers the diffusion inpainting in the chroma channels according to the structure in the luma channel. We show that making the luma-guided colorization anisotropic outperforms the method of Levin et al. significantly. Second, we propose a new luma preference codec that invests a large fraction of the bit budget into an accurate representation of the luma channel. This allows a high-quality reconstruction of color data with our colorization technique. Simultaneously we exploit the fact that the human visual system is more sensitive to structural than to color information. Our experiments demonstrate that our new codec outperforms the state of the art in diffusion-based image compression and is competitive with transform-based codecs.

  3. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz.

    PubMed

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang

    2016-11-04

    Existing techniques for simultaneous encryption and compression of images rely on lossy compression. Their reconstruction performance does not meet the accuracy requirements of medical imaging, because most of them are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS, with measurement matrices encrypted by the discrete 3D Lorenz system. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristics of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and reliability of this scheme.

  4. Emerging standards for still image compression: A software implementation and simulation study

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1991-01-01

    A software implementation of an emerging standard for the lossy compression of continuous-tone still images is described. This software program can be used to compress planetary images and other 2-D instrument data. It provides a high-compression image coding capability that preserves image fidelity at compression rates competitive with or superior to most known techniques. This software implementation confirms the usefulness of such data compression and allows its performance to be compared with other schemes used in deep space missions and for database storage.

  5. Application of strong zerotrees to compression of correlated MRI image sets

    NASA Astrophysics Data System (ADS)

    Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.

    2001-08-01

    It is known that gainful interframe compression of magnetic resonance (MR) image sets is quite a difficult problem. Only a few authors have reported performance gains for such compressors compared to separate compression of every MR image in the set (intraframe compression). Known reasons for this situation are the significant noise in MR images and the presence of only low-frequency correlations between images of the set. Recently we suggested a new method of correlated image set compression based on the Karhunen-Loeve (KL) transform and a special EZW compression scheme with strong zerotrees (KLSEZW). The KLSEZW algorithm showed good results in compression of video sequences with low and middle motion even without motion compensation. The paper presents a successful application of the basic method and its modification to the interframe MR image compression problem.

  6. Degradative encryption: An efficient way to protect SPIHT compressed images

    NASA Astrophysics Data System (ADS)

    Xiang, Tao; Qu, Jinyu; Yu, Chenyun; Fu, Xinwen

    2012-11-01

    Degradative encryption, a new selective image encryption paradigm, is proposed to encrypt only a small part of the image data, blurring the detail while keeping the skeleton discernible. The efficiency is further optimized by combining compression and encryption. A format-compliant degradative encryption algorithm based on set partitioning in hierarchical trees (SPIHT) is then proposed, and the scheme is designed to work in progressive mode to obtain a tradeoff between efficiency and security. Extensive experiments are conducted to evaluate the strength and efficiency of the scheme, and it is found that less than 10% of the data needs to be encrypted for a secure degradation. In the security analysis, the scheme is verified to be immune to cryptographic attacks as well as to adversaries utilizing image processing techniques. The scheme can find wide application in online try-and-buy services on mobile devices, searchable multimedia encryption in cloud computing, etc.

  7. Efficient image compression scheme based on differential coding

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree wavelet (EZW) and set partitioning in hierarchical trees (SPIHT) coding, introduced by J.M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, and then several experimentally motivated improvements to the SPIHT algorithm are introduced. 1) To reduce the redundancy among coefficients in the wavelet domain, we propose a differential method applied during coding. 2) Based on the distribution of coefficients within each subband, we adjust the sorting pass and optimize the differential coding to reduce redundant coding in each subband. 3) Coding results obtained at a given threshold show that, with differential coding, the compression rate is higher and the quality of the reconstructed image is greatly improved: at 0.5 bpp (bits per pixel), the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.
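
    For reference, the PSNR figure quoted above is the standard definition sketched below; the variable names are illustrative, and the snippet is not code from the paper.

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB between an original and a reconstructed image."""
        err = original.astype(np.float64) - reconstructed.astype(np.float64)
        mse = np.mean(err ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # At a fixed rate (e.g. 0.5 bpp), psnr(img, rec_differential) - psnr(img, rec_spiht)
    # quantifies the 0.2-0.4 dB gain reported above.
    ```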

  8. High-speed compressive range imaging based on active illumination.

    PubMed

    Sun, Yangyang; Yuan, Xin; Pang, Shuo

    2016-10-03

    We report a compressive imaging method based on active illumination, which reconstructs a 3D scene at a frame rate beyond the acquisition speed limit of the camera. We have built an imaging prototype that projects temporally varying illumination patterns and demonstrated a joint reconstruction algorithm that iteratively retrieves both the range and the high-temporal-frequency information from the 2D low-frame-rate measurements. The reflectance and depth-map videos have been reconstructed at 1000 frames per second (fps) from measurements captured at 200 fps. The range resolution is in agreement with the resolution calculated from triangulation methods based on the same system geometry. We expect such an imaging method could become a simple solution for a wide range of applications, including industrial metrology, 3D printing, and vehicle navigation.

  9. Recommendations for imaging tumor response in neurofibromatosis clinical trials

    PubMed Central

    Ardern-Holmes, Simone L.; Babovic-Vuksanovic, Dusica; Barker, Fred G.; Connor, Steve; Evans, D. Gareth; Fisher, Michael J.; Goutagny, Stephane; Harris, Gordon J.; Jaramillo, Diego; Karajannis, Matthias A.; Korf, Bruce R.; Mautner, Victor; Plotkin, Scott R.; Poussaint, Tina Y.; Robertson, Kent; Shih, Chie-Schin; Widemann, Brigitte C.

    2013-01-01

    Objective: Neurofibromatosis (NF)-related benign tumors such as plexiform neurofibromas (PN) and vestibular schwannomas (VS) can cause substantial morbidity. Clinical trials directed at these tumors have become available. Due to differences in disease manifestations and the natural history of NF-related tumors, response criteria used for solid cancers (1-dimensional/RECIST [Response Evaluation Criteria in Solid Tumors] and bidimensional/World Health Organization) have limited applicability. No standardized response criteria for benign NF tumors exist. The goal of the Tumor Measurement Working Group of the REiNS (Response Evaluation in Neurofibromatosis and Schwannomatosis) committee is to propose consensus guidelines for the evaluation of imaging response in clinical trials for NF tumors. Methods: Currently used imaging endpoints, designs of NF clinical trials, and knowledge of the natural history of NF-related tumors, in particular PN and VS, were reviewed. Consensus recommendations for response evaluation for future studies were developed based on this review and the expertise of group members. Results: MRI with volumetric analysis is recommended to sensitively and reproducibly evaluate changes in tumor size in clinical trials. Volumetric analysis requires adherence to specific imaging recommendations. A 20% volume change was chosen to indicate a decrease or increase in tumor size. Use of these criteria in future trials will enable meaningful comparison of results across studies. Conclusions: The proposed imaging response evaluation guidelines, along with validated clinical outcome measures, will maximize the ability to identify potentially active agents for patients with NF and benign tumors. PMID:24249804

  10. Recommendations for imaging tumor response in neurofibromatosis clinical trials.

    PubMed

    Dombi, Eva; Ardern-Holmes, Simone L; Babovic-Vuksanovic, Dusica; Barker, Fred G; Connor, Steve; Evans, D Gareth; Fisher, Michael J; Goutagny, Stephane; Harris, Gordon J; Jaramillo, Diego; Karajannis, Matthias A; Korf, Bruce R; Mautner, Victor; Plotkin, Scott R; Poussaint, Tina Y; Robertson, Kent; Shih, Chie-Schin; Widemann, Brigitte C

    2013-11-19

    Neurofibromatosis (NF)-related benign tumors such as plexiform neurofibromas (PN) and vestibular schwannomas (VS) can cause substantial morbidity. Clinical trials directed at these tumors have become available. Due to differences in disease manifestations and the natural history of NF-related tumors, response criteria used for solid cancers (1-dimensional/RECIST [Response Evaluation Criteria in Solid Tumors] and bidimensional/World Health Organization) have limited applicability. No standardized response criteria for benign NF tumors exist. The goal of the Tumor Measurement Working Group of the REiNS (Response Evaluation in Neurofibromatosis and Schwannomatosis) committee is to propose consensus guidelines for the evaluation of imaging response in clinical trials for NF tumors. Currently used imaging endpoints, designs of NF clinical trials, and knowledge of the natural history of NF-related tumors, in particular PN and VS, were reviewed. Consensus recommendations for response evaluation for future studies were developed based on this review and the expertise of group members. MRI with volumetric analysis is recommended to sensitively and reproducibly evaluate changes in tumor size in clinical trials. Volumetric analysis requires adherence to specific imaging recommendations. A 20% volume change was chosen to indicate a decrease or increase in tumor size. Use of these criteria in future trials will enable meaningful comparison of results across studies. The proposed imaging response evaluation guidelines, along with validated clinical outcome measures, will maximize the ability to identify potentially active agents for patients with NF and benign tumors.

  11. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging

    DTIC Science & Technology

    2015-03-26

    Thesis fragment (March 2015, Megan E. Lewis): the work addresses discrete uncertainty principles and fast hyperspectral imaging, with applications to medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue and...

  12. Objective index of image fidelity for JPEG2000 compressed body CT images

    SciTech Connect

    Kim, Kil Joong; Lee, Kyoung Ho; Kang, Heung-Sik; Kim, So Yeon; Kim, Young Hoon; Kim, Bohyoung; Seo, Jinwook; Mantiuk, Rafal

    2009-07-15

    Compression ratio (CR) has been the de facto standard index of compression level for medical images. The aim of the study is to evaluate the CR, peak signal-to-noise ratio (PSNR), and a perceptual quality metric (the high-dynamic-range visual difference predictor, HDR-VDP) as objective indices of image fidelity for Joint Photographic Experts Group (JPEG) 2000 compressed body computed tomography (CT) images, from the viewpoint of a visually lossless compression approach. A total of 250 body CT images obtained with five different scan protocols (5-mm-thick abdomen, 0.67-mm-thick abdomen, 5-mm-thick lung, 0.67-mm-thick lung, and 5-mm-thick low-dose lung) were compressed to one of five CRs (reversible, 6:1, 8:1, 10:1, and 15:1). The PSNR and HDR-VDP values were calculated for the 250 pairs of original and compressed images. By alternately displaying an original and its compressed image on the same monitor, five radiologists independently determined whether the pair was distinguishable or indistinguishable. The kappa statistic for the interobserver agreement among the five radiologists' responses was 0.70. According to the radiologists' responses, the number of distinguishable image pairs tended to differ significantly among the five scan protocols at 6:1-10:1 compressions (Fisher-Freeman-Halton exact tests). Spearman's correlation coefficients between each of the CR, PSNR, and HDR-VDP and the number of radiologists who responded as distinguishable were 0.72, -0.77, and 0.85, respectively. Using the radiologists' pooled responses as the reference standards, the areas under the receiver-operating-characteristic curves for the CR, PSNR, and HDR-VDP were 0.87, 0.93, and 0.97, respectively, showing significant differences between the CR and PSNR (p=0.04) or HDR-VDP (p<0.001), and between the PSNR and HDR-VDP (p<0.001). In conclusion, the CR is less suitable than the PSNR or HDR-VDP as an objective index of image fidelity for JPEG2000 compressed body CT images. The HDR-VDP is more

  13. Application-oriented region of interest based image compression using bit-allocation optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Yuanping

    2015-01-01

    Region of interest (ROI) based image compression can offer a high image-compression ratio along with high quality in the important regions of the image. For many applications, stable compression quality is required for both the ROIs and the images. However, image compression alone does not consider information specific to the application and cannot meet this requirement well. This paper proposes an application-oriented ROI-based image-compression method using bit-allocation optimization. Unlike typical methods that define bit-rate parameters empirically, the proposed method adjusts the bit-rate parameters adaptively to both images and ROIs. First, an application-dependent optimization model is constructed. The relationship between the compression parameters and the image content is learned from a training image set. Image redundancy is used to measure the compression capability of the image content. Then, during compression, the global bit rate and the ROI bit rate are adjusted in the images and ROIs, respectively, guided by the application-dependent information in the optimization model. As a result, stable compression quality is assured in the applications. Experiments with two different applications showed that the quality deviation in the reconstructed images decreased, verifying the effectiveness of the proposed method.

  14. Real-time Image Generation for Compressive Light Field Displays

    NASA Astrophysics Data System (ADS)

    Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R.

    2013-02-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  15. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    PubMed

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea

    2014-11-01

    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.

  16. Image reconstruction and compressive sensing in MIMO radar

    NASA Astrophysics Data System (ADS)

    Sun, Bing; Lopez, Juan; Qiao, Zhijun

    2014-05-01

    Multiple-input multiple-output (MIMO) radar utilizes the flexible configuration of transmitting and receiving antennas to construct images of target scenes. Because of the target scenes' sparsity, the compressive sensing (CS) technique can be used to realize a feasible reconstruction of the target scenes from undersampled data. This paper presents the signal model of MIMO radar and derives the corresponding CS measurement matrix, demonstrating the applicability of the CS technique. The basis pursuit method and the total-variation minimization method are adopted for the recovery of different scenes. Numerical simulations are provided to illustrate the validity of the reconstruction for one-dimensional and two-dimensional scenes.

  17. Compressed Sensing (CS) Imaging with Wide FOV and Dynamic Magnification

    DTIC Science & Technology

    2011-03-14

    Report fragment (sponsor: Office of Naval Research; ONR program manager: Dr. Michael Duncan, Program Officer; March 14, 2011). The excerpt gives an expression (Eq. 4) for the number of significant wavelet coefficients at scale k as an integral of a function f(x; a_k, ·), and considers the (m,n)th DFT atom within the band B_k. Cited references include M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magn. Reson. Med.

  18. Image denoising by adaptive Compressed Sensing reconstructions and fusions

    NASA Astrophysics Data System (ADS)

    Meiniel, William; Angelini, Elsa; Olivo-Marin, Jean-Christophe

    2015-09-01

    In this work, Compressed Sensing (CS) is investigated as a denoising tool in bioimaging. The denoising algorithm exploits multiple CS reconstructions, taking advantage of the robustness of CS in the presence of noise via regularized reconstructions and the properties of the Fourier transform of bioimages. Multiple reconstructions at low sampling rates are combined to generate high quality denoised images using several sparsity constraints. We present different combination methods for the CS reconstructions and quantitatively compare the performance of our denoising methods to state-of-the-art ones.

  19. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. This paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
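
    A minimal sketch of the on-board/on-ground split described above, assuming a seeded pseudo-random measurement matrix so that only the measurements and the seed need to be downlinked; the block size and function names are illustrative, not the project's actual code.

    ```python
    import numpy as np

    def onboard_measure(image_block, m, seed):
        """On the satellite: compress a vectorized image block into m random projections.
        Only y and the PRNG seed are transmitted; the matrix itself is never stored."""
        x = image_block.astype(np.float64).ravel()
        phi = np.random.default_rng(seed).standard_normal((m, x.size)) / np.sqrt(m)
        return phi @ x

    def ground_measurement_matrix(n, m, seed):
        """On the ground: regenerate the identical matrix from the seed for reconstruction
        (e.g. with an l1 solver such as the ISTA sketch earlier in this listing)."""
        return np.random.default_rng(seed).standard_normal((m, n)) / np.sqrt(m)
    ```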

  20. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing the purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
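
    A rough sketch of the neighbor-selection idea, assuming k-means feature clusters per user and a symmetric nearest-centroid inter-cluster distance; the distance choice and parameters are illustrative assumptions rather than the authors' exact definitions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def purchase_clusters(purchased_features, k=3):
        """Summarize a user's purchase sequence as k cluster centroids in visual-feature space."""
        feats = np.asarray(purchased_features, dtype=float)
        k = min(k, len(feats))
        return KMeans(n_clusters=k, n_init=10).fit(feats).cluster_centers_

    def inter_cluster_distance(a, b):
        """Symmetric average of nearest-centroid distances between two users' cluster sets."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    def select_neighbors(target, others, n=5):
        """Rank other users by inter-cluster distance to the target user's clusters."""
        ranked = sorted(others.items(), key=lambda kv: inter_cluster_distance(target, kv[1]))
        return [user_id for user_id, _ in ranked[:n]]
    ```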

  1. Resolution enhancement for ISAR imaging via improved statistical compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun

    2016-12-01

    The developing compressed sensing (CS) theory reveals that optimal reconstruction of an unknown signal can be achieved from very limited observations by utilizing signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally composed of a limited number of strong scattering centers, representing strong spatial sparsity. Such prior sparsity intrinsically paves the way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution with small scale on strong signal components and large scale on noise to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates that the proposed method can reconstruct a high-resolution image from only a limited number of pulses even at low SNR, which shows advantages over current super-resolution imaging methods.

  2. Edge-preserving image compression using adaptive lifting wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Qiu, Bingchang

    2015-07-01

    In this paper, a novel 2-D adaptive lifting wavelet transform is presented. The proposed algorithm is designed to further reduce the high-frequency energy of the wavelet transform, improve image compression efficiency, and preserve the edges and texture of the original images more effectively. A new set of candidate directions, covering the surrounding integer pixels and sub-pixels, is designed; hence, our algorithm adapts far better to the orientation features in local image blocks. To obtain computational efficiency together with good coding performance, the complete process of the 2-D adaptive lifting wavelet transform is introduced and implemented. Compared with the traditional lifting-based wavelet transform, adaptive directional lifting, and the direction-adaptive discrete wavelet transform, the new structure reduces the high-frequency wavelet coefficients more effectively, and the texture structures of the reconstructed images are more refined and clear than those of the other methods. The peak signal-to-noise ratio and the subjective quality of the reconstructed images are significantly improved.
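
    The core adaptation step can be pictured as follows: for each block, candidate prediction directions are scored by the high-frequency (detail) energy they leave behind, and the direction minimizing that energy is kept. This is a simplified, assumption-laden sketch (integer column offsets only, an averaging predictor), not the paper's transform.

    ```python
    import numpy as np

    def detail_energy(block, offset):
        """Energy of the detail band when odd rows are predicted from even rows
        shifted by `offset` columns (a crude directional predictor)."""
        even, odd = block[0::2, :].astype(float), block[1::2, :].astype(float)
        rows = min(even.shape[0], odd.shape[0])
        pred = 0.5 * (np.roll(even, offset, axis=1) + np.roll(even, -offset, axis=1))
        return float(np.sum((odd[:rows] - pred[:rows]) ** 2))

    def best_direction(block, offsets=(-2, -1, 0, 1, 2)):
        """Choose the candidate direction that minimizes the high-frequency energy."""
        return min(offsets, key=lambda o: detail_energy(block, o))
    ```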

  3. Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity

    PubMed Central

    Du, Huiqian; Han, Yu; Mei, Wenbo

    2014-01-01

    Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither estimation of the contrast changes nor repeated motion compensation. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704

  4. Terahertz compressive imaging with metamaterial spatial light modulators

    NASA Astrophysics Data System (ADS)

    Watts, Claire M.; Shrekenhamer, David; Montoya, John; Lipworth, Guy; Hunt, John; Sleasman, Timothy; Krishna, Sanjay; Smith, David R.; Padilla, Willie J.

    2014-08-01

    Imaging at long wavelengths, for example at terahertz and millimetre-wave frequencies, is a highly sought-after goal of researchers because of the great potential for applications ranging from security screening and skin cancer detection to all-weather navigation and biodetection. Here, we design, fabricate and demonstrate active metamaterials that function as real-time tunable, spectrally sensitive spatial masks for terahertz imaging with only a single-pixel detector. A modulation technique permits imaging with negative mask values, which is typically difficult to achieve with intensity-based components. We demonstrate compressive techniques allowing the acquisition of high-frame-rate, high-fidelity images. Our system is all solid-state with no moving parts, yields improved signal-to-noise ratios over standard raster-scanning techniques, and uses a source orders of magnitude lower in power than conventional set-ups. The demonstrated imaging system establishes a new path for terahertz imaging that is distinct from existing focal-plane-array-based cameras.

  5. An investigation of image compression on NIIRS rating degradation through automated image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract the line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. The steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD, and we investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.

  6. Impact of JPEG lossy image compression on quantitative digital subtraction radiography.

    PubMed

    Fidler, A; Likar, B; Pernus, F; Skaleric, U

    2002-03-01

    The aim of the study was to evaluate the impact of JPEG lossy image compression on the estimation of alveolar bone gain by quantitative digital subtraction radiography (DSR). Nine dry domestic pig mandible posterior segments were radiographed three times ('Baseline', 'No change', and 'Gain') with standardized projection geometry. Bone gain was simulated by adding artificial bone chips (1, 4, and 15 mg). Images were either compressed before or after registration. No change areas in compressed and subtracted 'No change-Baseline' images and bone gain volumes in compressed and subtracted 'Gain-Baseline' images were calculated and compared to the corresponding measurements performed on original subtracted images. Measurements of no change areas ('No change-Baseline') were only slightly affected by compressions down to JPEG 50 (J50) applied either before or after registration. Simulated gain of alveolar bone ('Gain-Baseline') was underestimated when compression before registration was performed. The underestimation was bigger when small bone chips of 1 mg were measured and when higher compression rates were used. Bone chips of 4 and 15 mg were only slightly underestimated when using J90, J70, and J50 compressions before registration. Lossy JPEG compression does not affect the measurements of no change areas by DSR. Images undergoing subtraction should be registered before compression and if so, J90 compression with a compression ratio of 1:7 can be used to detect and measure 4 mg and larger bone gain.

  7. High-performance JPEG image compression chip set for multimedia applications

    NASA Astrophysics Data System (ADS)

    Razavi, Abbas; Shenberg, Isaac; Seltz, Danny; Fronczak, Dave

    1993-04-01

    By its very nature, multimedia includes images, text and audio stored in digital format. Image compression is an enabling technology essential to overcoming two bottlenecks: the cost of storage and the bus speed limitation. Storing 10 seconds of high-resolution RGB (640 x 480) motion video (30 frames/sec) requires 277 MBytes and a bus speed of 28 MBytes/sec (which cannot be handled by a standard bus). With high-quality JPEG baseline compression, the storage and bus requirements are reduced to 12 MBytes of storage and a bus speed of 1.2 MBytes/sec. Moreover, since consumer video and photography products (e.g., digital still video cameras, camcorders, TV) will increasingly use digital (and therefore compressed) images because of quality, accessibility, and the ease of adding features, compressed images may become the bridge between the multimedia computer and consumer products. The image compression challenge can be met by implementing the discrete cosine transform (DCT)-based image compression algorithm defined by the JPEG baseline standard. Using the JPEG baseline algorithm, an image can be compressed by a factor of about 24:1 without noticeable degradation in image quality. Because motion video is compressed frame by frame (or field by field), system cost is minimized (no frame or field memories and no interframe operations are required) and each frame can be edited independently. Since JPEG is an international standard, the compressed files generated by this solution can be readily interchanged with other users and processed by standard software packages. This paper describes a multimedia image compression board utilizing Zoran's 040 JPEG Image Compression chip set. The board includes digitization, video decoding and compression. While the original video is sent to the display ('video in a window'), it is also compressed and transferred to the computer bus for storage. During playback, the system receives the compressed sequence from the bus and displays it on the screen.
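
    A quick sanity check of the storage and bus-rate figures quoted above (the abstract rounds them slightly):

    ```python
    width, height, bytes_per_pixel, fps, seconds = 640, 480, 3, 30, 10
    raw_rate = width * height * bytes_per_pixel * fps     # 27,648,000 B/s, about 28 MB/s
    raw_clip = raw_rate * seconds                         # 276,480,000 B, about 277 MB
    jpeg_ratio = 24                                       # ~24:1 baseline JPEG compression
    print(raw_rate / 1e6, raw_clip / 1e6, raw_clip / jpeg_ratio / 1e6, raw_rate / jpeg_ratio / 1e6)
    # ~27.6 MB/s raw bus load, ~276 MB raw clip, ~12 MB compressed, ~1.2 MB/s compressed bus load
    ```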

  8. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.

  9. Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression

    NASA Astrophysics Data System (ADS)

    Horng, Ming-Huwi

    Vector quantization is a powerful technique in digital image compression. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm always generate a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook for vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the other two methods, the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images attain higher quality than those generated by the other two methods.
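
    For context, a compact sketch of the plain LBG training loop that both PSO-LBG and HBMO-LBG aim to improve upon; HBMO-LBG replaces this local search with a honey-bee-mating global search over codebooks. The parameters below are illustrative.

    ```python
    import numpy as np

    def lbg_codebook(vectors, codebook_size, n_iter=20, seed=0):
        """Plain Linde-Buzo-Gray training: alternate nearest-codeword assignment
        and centroid update, which converges to a locally optimal codebook."""
        vectors = np.asarray(vectors, dtype=float)
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
        for _ in range(n_iter):
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
            assign = d.argmin(axis=1)
            for k in range(codebook_size):
                members = vectors[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook
    ```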

  10. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  11. Compressive dynamic range imaging via Bayesian shrinkage dictionary learning

    NASA Astrophysics Data System (ADS)

    Yuan, Xin

    2016-12-01

    We apply Bayesian shrinkage dictionary learning to compressive dynamic-range imaging. By attenuating the luminous intensity impinging upon the detector at the pixel level, we demonstrate a conceptual design of an 8-bit camera that samples high-dynamic-range scenes with a single snapshot. Coding strategies for both monochrome and color cameras are proposed. A Bayesian reconstruction algorithm is developed to learn a dictionary in situ on the sampled image, for joint reconstruction and demosaicking. We use global-local shrinkage priors to learn the dictionary and the dictionary coefficients representing the data. Simulation results demonstrate the feasibility of the proposed camera and the superior performance of the Bayesian shrinkage dictionary learning algorithm.

  12. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-10-07

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.

  13. Error-resilient pyramid vector quantization for image compression.

    PubMed

    Hung, A C; Tsern, E K; Meng, T H

    1998-01-01

    Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.

  14. Pairwise KLT-Based Compression for Multispectral Images

    NASA Astrophysics Data System (ADS)

    Nian, Yongjian; Liu, Yu; Ye, Zhen

    2016-12-01

    This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed on the full set of multispectral bands. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, in which the KLT is performed on only two bands at a time. First, the KLT is performed on the first two adjacent bands and two principal components are obtained. Second, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new pair. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation over all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on a global KLT. Moreover, the pairwise KLT structure significantly reduces the complexity compared with a global KLT.
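
    A minimal sketch of the pairwise decorrelation cascade described above, using an eigendecomposition of the 2x2 covariance for each band pair; this is an illustrative reading of the procedure, not the authors' implementation.

    ```python
    import numpy as np

    def klt_pair(band_a, band_b):
        """KLT on one pair of bands: returns the major and minor principal-component planes."""
        x = np.stack([band_a.ravel(), band_b.ravel()]).astype(np.float64)
        x -= x.mean(axis=1, keepdims=True)
        evals, evecs = np.linalg.eigh(np.cov(x))            # 2x2 covariance, ascending eigenvalues
        order = evals.argsort()[::-1]
        pcs = evecs[:, order].T @ x
        return pcs[0].reshape(band_a.shape), pcs[1].reshape(band_a.shape)

    def pairwise_klt_cascade(bands):
        """Pair the running major PC with each remaining band in turn, collecting minor PCs."""
        major, minor = klt_pair(bands[0], bands[1])
        components = [minor]
        for band in bands[2:]:
            major, minor = klt_pair(major, band)
            components.append(minor)
        components.append(major)
        return components
    ```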

  15. High dynamic range coherent imaging using compressed sensing.

    PubMed

    He, Kuan; Sharma, Manoj Kumar; Cossairt, Oliver

    2015-11-30

    In both lensless Fourier transform holography (FTH) and coherent diffraction imaging (CDI), a beamstop is used to block strong intensities which exceed the limited dynamic range of the sensor, causing a loss in low-frequency information, making high quality reconstructions difficult or even impossible. In this paper, we show that an image can be recovered from high-frequencies alone, thereby overcoming the beamstop problem in both FTH and CDI. The only requirement is that the object is sparse in a known basis, a common property of most natural and manmade signals. The reconstruction method relies on compressed sensing (CS) techniques, which ensure signal recovery from incomplete measurements. Specifically, in FTH, we perform compressed sensing (CS) reconstruction of captured holograms and show that this method is applicable not only to standard FTH, but also multiple or extended reference FTH. For CDI, we propose a new phase retrieval procedure, which combines Fienup's hybrid input-output (HIO) method and CS. Both numerical simulations and proof-of-principle experiments are shown to demonstrate the effectiveness and robustness of the proposed CS-based reconstructions in dealing with missing data in both FTH and CDI.

  16. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  17. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  18. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced by the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as assessed against ciphertext-only attacks (COA), chosen-plaintext attacks (CPA), and noise attacks. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  19. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  20. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proven to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution converts the 2D problem into a 1D one via the Kronecker product, which sharply increases the dictionary size and the computational cost. In this paper, we introduce the 2D-SL0 algorithm for the imaging reconstruction. It is shown that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, simulation results are presented to demonstrate the effectiveness and feasibility of our method.
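
    The memory argument behind working directly in 2D can be seen from the separable measurement model Y = A X B^T versus its vectorized equivalent vec(Y) = kron(B, A) vec(X); the sizes below are illustrative assumptions, not figures from the paper.

    ```python
    import numpy as np

    n1 = n2 = 256          # scene size (range x azimuth)
    m1, m2 = 128, 128      # sub-sampled measurements per dimension
    A = np.random.randn(m1, n1)
    B = np.random.randn(m2, n2)
    X = np.zeros((n1, n2)); X[10, 20] = 1.0     # toy sparse scene
    Y = A @ X @ B.T                             # separable 2D measurement; A and B total ~0.5 MB
    # The 1D equivalent kron(B, A) is (m1*m2) x (n1*n2) = 16384 x 65536,
    # i.e. ~1.07e9 entries (~8.6 GB in float64) -- the blow-up that 2D solvers such as 2D-SL0 avoid.
    print(Y.shape)
    ```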

  1. Oriented wavelet transform for image compression and denoising.

    PubMed

    Chappelier, Vivien; Guillemot, Christine

    2006-10-01

    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.

  2. Learning-based compressed sensing for infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches are obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and these sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.

  3. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  4. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  5. Application of region selective embedded zerotree wavelet coder in CT image compression.

    PubMed

    Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping

    2005-01-01

    Compression is necessary in medical image preservation because of the huge data volume involved. Medical images differ from ordinary images in their characteristics; for example, part of the information in a CT image is useless, and storing it wastes resources. A region-selective EZW coder is proposed in which only the useful part of the image is selected and compressed, and the test image yields good results.

  6. Single exposure optically compressed imaging and visualization using random aperture coding

    NASA Astrophysics Data System (ADS)

    Stern, A.; Rivenson, Yair; Javidi, Bahram

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured which can be decoded by proper decoding algorithm.

  7. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  8. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  9. Fluid Flow Characterization of High Turbulent Intensity Compressible Flow Using Particle Image Velocimetry

    DTIC Science & Technology

    2015-08-01

    A high turbulent intensity combustion chamber has been designed in order to... (abstract truncated in the source record). Thesis by Marco Efrain Quiroz-Regalado, Department of Mechanical Engineering; approved by Ahsan R. Choudhuri.

  10. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  11. Evaluation of JPEG and JPEG2000 compression algorithms for dermatological images.

    PubMed

    Gulkesen, K H; Akman, A; Yuce, Y K; Yilmaz, E; Samur, A A; Isleyen, F; Cakcak, D S; Alpsoy, E

    2010-08-01

    Image compression methods are used to reduce the disc space needed to store images and to transmit them efficiently. JPEG is the most frequently used compression algorithm in medical systems. JPEG compression can be performed at various qualities. There are many other compression algorithms; among these, JPEG2000 is an appropriate candidate for future use. To investigate the perceived image quality of JPEG and JPEG2000 at 1:20, 1:30, 1:40 and 1:50 compression rates, photographs of 90 patients were taken in dermatology outpatient clinics. For each patient, a set composed of eight compressed images and one uncompressed image was prepared. Images were shown to dermatologists on two separate 17-inch LCD monitors at the same time, one displaying the compressed image and the other the uncompressed image. Each of the four dermatologists evaluated 720 image pairs in total and reported whether there was any difference in quality between the two images and, if so, which was better. Quality rates for JPEG compressions 1:20, 1:30, 1:40 and 1:50 were 69%, 35%, 10% and 5%, respectively. Quality rates for the corresponding JPEG2000 compressions were 77%, 67%, 56% and 53%, respectively. When the JPEG and JPEG2000 algorithms were compared, the JPEG2000 algorithm was more successful than JPEG at all compression rates. However, loss of image quality is recognizable in some images at all compression rates.

  12. Accelerated diffusion spectrum imaging with compressed sensing using adaptive dictionaries.

    PubMed

    Bilgic, Berkin; Setsompop, Kawin; Cohen-Adad, Julien; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2012-12-01

    Diffusion spectrum imaging offers detailed information on complex distributions of intravoxel fiber orientations at the expense of extremely long imaging times (∼1 h). Recent work by Menzel et al. demonstrated successful recovery of diffusion probability density functions from sub-Nyquist sampled q-space by imposing sparsity constraints on the probability density functions under wavelet and total variation transforms. As the performance of compressed sensing reconstruction depends strongly on the level of sparsity in the selected transform space, a dictionary specifically tailored for diffusion probability density functions can yield higher fidelity results. To our knowledge, this work is the first application of adaptive dictionaries in diffusion spectrum imaging, whereby we reduce the scan time of whole brain diffusion spectrum imaging acquisition from 50 to 17 min while retaining high image quality. In vivo experiments were conducted with the 3T Connectome MRI. The root-mean-square error of the reconstructed "missing" diffusion images was calculated by comparing them to a gold standard dataset (obtained from acquiring 10 averages of diffusion images in these missing directions). The root-mean-square error from the proposed reconstruction method is up to two times lower than that of Menzel et al.'s method and is comparable to that of the fully-sampled 50-minute scan. Comparison of tractography solutions in 18 major white-matter pathways also indicated good agreement between the fully-sampled and 3-fold accelerated reconstructions. Further, we demonstrate that a dictionary trained using probability density functions from a single slice of a particular subject generalizes well to other slices from the same subject, as well as to slices from other subjects. Copyright © 2012 Wiley Periodicals, Inc.

  13. Vertebral Compression Fracture with Intravertebral Vacuum Cleft Sign: Pathogenesis, Image, and Surgical Intervention

    PubMed Central

    Wu, Ai-Min; Ni, Wen-Fei

    2013-01-01

    The intravertebral vacuum cleft (IVC) sign in vertebral compression fracture patients has attracted much attention. Its pathogenesis, imaging characteristics, and the efficacy of surgical intervention have been disputed. Many pathogenesis theories have been proposed, and the imaging characteristics of the IVC are distinct from those of malignancy and infection. Percutaneous vertebroplasty (PVP) and percutaneous kyphoplasty (PKP) have been the main therapeutic methods for these patients in recent years. The avascular necrosis theory is the most widely supported; PVP can relieve back pain, restore vertebral body height and correct the kyphotic angulation (KA), and is recommended for these patients. PKP seems to be more effective for the correction of KA and results in less cement leakage. The Kümmell's disease with IVC sign reported by modern authors is not completely consistent with the syndrome described by Dr. Hermann Kümmell. PMID:23741556

  14. Fourier-domain beamforming: the path to compressed ultrasound imaging.

    PubMed

    Chernyakova, Tanya; Eldar, Yonina

    2014-08-01

    Sonography techniques use multiple transducer elements for tissue visualization. Signals received at each element are sampled before digital beamforming. The sampling rates required to perform high-resolution digital beamforming are significantly higher than the Nyquist rate of the signal and result in a considerable amount of data that must be stored and processed. A recently developed technique, compressed beamforming, based on the finite rate of innovation model, compressed sensing (CS), and Xampling ideas, allows a reduction in the number of samples needed to reconstruct an image comprised of strong reflectors. A drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here, we build on previous work and extend it to a general concept of beamforming in frequency. This allows exploitation of the low bandwidth of the ultrasound signal and bypassing of the oversampling dictated by digital implementation of beamforming in time. By using beamforming in frequency, the same image quality is obtained from far fewer samples. We next present a CS technique that allows for further rate reduction, using only a portion of the beamformed signal's bandwidth. We demonstrate our methods on in vivo cardiac data and show that reductions up to 1/28 of the standard beamforming rates are possible. Finally, we present an implementation on an ultrasound machine using sub-Nyquist sampling and processing. Our results prove that the concept of sub-Nyquist processing is feasible for medical ultrasound, leading to the potential of considerable reduction in future ultrasound machines' size, power consumption, and cost.

  15. Spatially Regularized Compressed Sensing for High Angular Resolution Diffusion Imaging

    PubMed Central

    Rathi, Yogesh; Dolui, Sudipto

    2013-01-01

    Despite the relative recency of its inception, the theory of compressive sampling (also known as compressed sensing, CS) has already revolutionized multiple areas of applied sciences, a particularly important instance of which is medical imaging. Specifically, the theory has provided a different perspective on the important problem of optimal sampling in magnetic resonance imaging (MRI), with an ever-increasing body of work reporting stable and accurate reconstruction of MRI scans from a number of spectral measurements which would have been deemed unacceptably small as recently as five years ago. In this paper, the theory of CS is employed to alleviate the problem of long acquisition times, which is known to be a major impediment to the clinical application of high angular resolution diffusion imaging (HARDI). Specifically, we demonstrate that a substantial reduction in data acquisition times is possible through minimization of the number of diffusion encoding gradients required for reliable reconstruction of HARDI scans. The success of such a minimization is primarily due to the availability of the spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes an original way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on an efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology. PMID:21536524

  16. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers.

    PubMed

    López, Yuri Álvarez; Lorenzo, José Ángel Martínez

    2017-01-15

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  17. Adaptive wavelet transform algorithm for lossy image compression

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio

    2004-11-01

    A new locally adaptive wavelet transform algorithm based on a modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet filter coefficients in the case of (~N, N) lifting. By changing the wavelet filter order and the control parameters, one can obtain the desired filter frequency response. Hard switching between different wavelet lifting filter outputs is performed according to a local data activity estimate. The proposed adaptive transform possesses good energy compaction. The algorithm was tested on different images, and the simulation results show that the visual and quantitative quality of the restored images is high. Distortions in the vicinity of details with high spatial activity are smaller than with the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.
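
    The sketch below shows a single non-adaptive lifting step in the same predict/update spirit (an integer 5/3-style pair with circular boundary handling); the adaptive switching between filter orders described in the abstract is omitted, and the sample row is an arbitrary assumption.

```python
# Minimal sketch: one level of an integer-to-integer lifting wavelet transform.
import numpy as np

def lifting_53_forward(x):
    """One-level 1-D integer lifting transform (even-length input, circular borders)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus prediction from even neighbours.
    d = odd - np.floor((even + np.roll(even, -1)) / 2).astype(np.int64)
    # Update step: approximation = even sample plus correction from details.
    a = even + np.floor((d + np.roll(d, 1) + 2) / 4).astype(np.int64)
    return a, d

def lifting_53_inverse(a, d):
    # Undo the update, then the predict step, in reverse order.
    even = a - np.floor((d + np.roll(d, 1) + 2) / 4).astype(np.int64)
    odd = d + np.floor((even + np.roll(even, -1)) / 2).astype(np.int64)
    x = np.empty(2 * len(a), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

row = np.array([10, 12, 15, 14, 9, 7, 8, 11])
a, d = lifting_53_forward(row)
assert np.array_equal(lifting_53_inverse(a, d), row)   # perfect reconstruction
print(a, d)
```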

  18. Potential of compressed sensing in quantitative MR imaging of cancer

    PubMed Central

    Smith, David S.; Li, Xia; Abramson, Richard G.; Chad Quarles, C.; Yankeelov, Thomas E.

    2013-01-01

    Classic signal processing theory dictates that, in order to faithfully reconstruct a band-limited signal (e.g., an image), the sampling rate must be at least twice the maximum frequency contained within the signal, i.e., the Nyquist rate. Recent developments in applied mathematics, however, have shown that it is often possible to reconstruct signals sampled below the Nyquist rate. This new method of compressed sensing (CS) requires that the signal have a concise and extremely sparse representation in some mathematical basis. Magnetic resonance imaging (MRI) is particularly well suited for CS approaches, owing to the flexibility of data collection in the spatial frequency (Fourier) domain available in most MRI protocols. With custom CS acquisition and reconstruction strategies, one can quickly obtain a small subset of the full data and then iteratively reconstruct images that are consistent with the acquired data and sparse by some measure. Successful use of CS results in a substantial decrease in the time required to collect an individual image. This extra time can then be harnessed to increase spatial resolution, temporal resolution, signal-to-noise, or any combination of the three. In this article, we first review the salient features of CS theory and then discuss the specific barriers confronting CS before it can be readily incorporated into clinical quantitative MRI studies of cancer. We finally illustrate applications of the technique by describing examples of CS in dynamic contrast-enhanced MRI and dynamic susceptibility contrast MRI. PMID:24434808
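
    The sketch below is a minimal, generic illustration of the CS idea summarized above: a signal that is sparse in the image domain is recovered from a random subset of its Fourier coefficients by iterative soft-thresholding. The sizes, sparsity level, regularization weight, and iteration count are assumptions chosen for illustration.

```python
# Minimal sketch: compressed-sensing recovery from undersampled Fourier data via ISTA.
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 256, 8, 64                     # signal length, nonzeros, measurements (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

rows = rng.choice(n, m, replace=False)   # indices of the acquired Fourier samples
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
A = F[rows, :]                           # undersampled Fourier operator
y = A @ x_true                           # far fewer samples than the Nyquist count

# Iterative soft-thresholding (ISTA) for  min ||Ax - y||^2 + lam * ||x||_1
lam, step = 0.01, 1.0
x = np.zeros(n, dtype=complex)
for _ in range(500):
    x = x + step * (A.conj().T @ (y - A @ x))            # gradient step
    mag = np.abs(x)
    x = x * np.maximum(1.0 - lam * step / np.maximum(mag, 1e-12), 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```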

  19. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

    Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI, for instance, the use of a low resolution coded aperture implemented using a digital micromirror device (DMD), which induces the grouping of pixels into superpixels in the detector, is decisive for the final resolution. The mismatch arises from the differences in pitch size between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and the detector resolution and, therefore, reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique.

  20. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    PubMed Central

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel

    2017-01-01

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841

  1. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    PubMed

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul

    2015-02-01

    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demand a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution in this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and the combination of Run Length and Huffman coding, to increase compression ratio. The experimental results achieved show that the proposed method is able to improve the compression ratio by 400 % as compared to that of traditional methods.

  2. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.

    PubMed

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R

    2015-11-01

    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.

  3. Prior image constrained compressed sensing: Implementation and performance evaluation

    PubMed Central

    Lauzier, Pascal Thériault; Tang, Jie; Chen, Guang-Hong

    2012-01-01

    Purpose: Prior image constrained compressed sensing (PICCS) is an image reconstruction framework which incorporates an often available prior image into the compressed sensing objective function. The images are reconstructed using an optimization procedure. In this paper, several alternative unconstrained minimization methods are used to implement PICCS. The purpose is to study and compare the performance of each implementation, as well as to evaluate the performance of the PICCS objective function with respect to image quality. Methods: Six different minimization methods are investigated with respect to convergence speed and reconstruction accuracy. These minimization methods include the steepest descent (SD) method and the conjugate gradient (CG) method. These algorithms require a line search to be performed. Thus, for each minimization algorithm, two line searching algorithms are evaluated: a backtracking (BT) line search and a fast Newton-Raphson (NR) line search. The relative root mean square error is used to evaluate the reconstruction accuracy. The algorithm that offers the best convergence speed is used to study the performance of PICCS with respect to the prior image parameter α and the data consistency parameter λ. PICCS is studied in terms of reconstruction accuracy, low-contrast spatial resolution, and noise characteristics. A numerical phantom was simulated and an animal model was scanned using a multirow detector computed tomography (CT) scanner to yield the projection datasets used in this study. Results: For λ within a broad range, the CG method with Fletcher-Reeves formula and NR line search offers the fastest convergence for an equal level of reconstruction accuracy. Using this minimization method, the reconstruction accuracy of PICCS was studied with respect to variations in α and λ. When the number of view angles is varied between 107, 80, 64, 40, 20, and 16, the relative root mean square error reaches a minimum value for α ≈ 0.5. For

  4. Medical image compression using cubic spline interpolation for low bit-rate telemedicine applications

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chen, Shi-Huang

    2006-03-01

    In this paper, a new medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and thereby achieve image compression. It has been shown in the literature that the CSI can be combined with the JPEG or JPEG2000 algorithm to develop a modified JPEG or JPEG2000 codec, which obtains a higher compression ratio and better reconstructed image quality than the standard JPEG and JPEG2000 codecs. This paper further applies the modified JPEG codec to medical image compression. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 25-30% with similar visual quality. The system can reduce the load on telecommunication networks and is well suited to low bit-rate telemedicine applications.
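
    A minimal sketch of the subsample-then-JPEG idea is given below, using cubic interpolation from scipy.ndimage.zoom in place of the paper's specific CSI filters; the file path "mr_slice.png", the subsampling factor, and the JPEG quality setting are hypothetical assumptions.

```python
# Minimal sketch: cubic-spline subsampling followed by JPEG coding, then restoration.
import io
import numpy as np
from PIL import Image
from scipy.ndimage import zoom

# "mr_slice.png" is a hypothetical 8-bit grayscale medical image path.
original = np.asarray(Image.open("mr_slice.png").convert("L"), dtype=np.float64)

small = zoom(original, 0.5, order=3)                       # cubic-spline subsampling
buf = io.BytesIO()
Image.fromarray(np.clip(small, 0, 255).astype(np.uint8)).save(buf, format="JPEG", quality=75)

buf.seek(0)
decoded = np.asarray(Image.open(buf), dtype=np.float64)
factors = (original.shape[0] / decoded.shape[0], original.shape[1] / decoded.shape[1])
restored = zoom(decoded, factors, order=3)                 # cubic-spline upsampling

h = min(restored.shape[0], original.shape[0])
w = min(restored.shape[1], original.shape[1])
mse = np.mean((restored[:h, :w] - original[:h, :w]) ** 2)
print("compressed bytes:", buf.getbuffer().nbytes, "PSNR:", 10 * np.log10(255**2 / mse))
```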

  5. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    PubMed

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative for understanding cellular responses to mechanical stimuli. However, photobleaching is a constant problem for accurate and reliable live-cell fluorescence imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior to conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on the morphology of in situ articular cartilage chondrocytes across repeat laser exposures. Additionally, we compared the effectiveness of three commonly used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction for subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeat scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different volume changes. Therefore, different conclusions on cell volume changes may be drawn during tissue compression, depending on the image thresholding methods used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes fewer photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and compression-induced fluorescence increase may lead to interpretations in

  6. Fast recovery of compressed multi-contrast magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Güngör, Alper; Kopanoğlu, Emre; Çukur, Tolga; Güven, H. Emre

    2017-02-01

    In many settings, multiple Magnetic Resonance Imaging (MRI) scans are performed with different contrast characteristics at a single patient visit. Unfortunately, MRI data-acquisition is inherently slow creating a persistent need to accelerate scans. Multi-contrast reconstruction deals with the joint reconstruction of different contrasts simultaneously. Previous approaches suggest solving a regularized optimization problem using group sparsity and/or color total variation, using composite-splitting denoising and FISTA. Yet, there is significant room for improvement in existing methods regarding computation time, ease of parameter selection, and robustness in reconstructed image quality. Selection of sparsifying transformations is critical in applications of compressed sensing. Here we propose using non-convex p-norm group sparsity (with p < 1), and apply color total variation (CTV). Our method is readily applicable to magnitude images rather than each of the real and imaginary parts separately. We use the constrained form of the problem, which allows an easier choice of data-fidelity error-bound (based on noise power determined from a noise-only scan without any RF excitation). We solve the problem using an adaptation of Alternating Direction Method of Multipliers (ADMM), which provides faster convergence in terms of CPU-time. We demonstrated the effectiveness of the method on two MR image sets (numerical brain phantom images and SRI24 atlas data) in terms of CPU-time and image quality. We show that a non-convex group sparsity function that uses the p-norm instead of the convex counterpart accelerates convergence and improves the peak-Signal-to-Noise-Ratio (pSNR), especially for highly undersampled data.

  7. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  8. A novel joint data-hiding and compression scheme based on SMVQ and image inpainting.

    PubMed

    Chuan Qin; Chin-Chen Chang; Yi-Ping Chiu

    2014-03-01

    In this paper, we propose a novel joint data-hiding and compression scheme for digital images using side match vector quantization (SMVQ) and image inpainting. The two functions of data hiding and image compression can be integrated seamlessly into one single module. On the sender side, except for the blocks in the leftmost column and topmost row of the image, each of the other residual blocks in raster-scanning order can be embedded with secret data and compressed simultaneously by SMVQ or image inpainting, chosen adaptively according to the current embedding bit. Vector quantization is also utilized for some complex blocks to control the visual distortion and error diffusion caused by the progressive compression. After segmenting the image compressed codes into a series of sections by the indicator bits, the receiver can achieve the extraction of secret bits and image decompression successfully according to the index values in the segmented sections. Experimental results demonstrate the effectiveness of the proposed scheme.

  9. Hardware Implementation of a Lossless Image Compression Algorithm Using a Field Programmable Gate Array

    NASA Astrophysics Data System (ADS)

    Klimesh, M.; Stanton, V.; Watola, D.

    2000-10-01

    We describe a hardware implementation of a state-of-the-art lossless image compression algorithm. The algorithm is based on the LOCO-I (low complexity lossless compression for images) algorithm developed by Weinberger, Seroussi, and Sapiro, with modifications to lower the implementation complexity. In this setup, the compression itself is performed entirely in hardware using a field programmable gate array and a small amount of random access memory. The compression speed achieved is 1.33 Mpixels/second. Our algorithm yields about 15 percent better compression than the Rice algorithm.
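
    As background for the predictive stage mentioned above, the sketch below computes the residuals of the median edge detector (MED) predictor used by LOCO-I, leaving out the context modeling and entropy coding stages; the boundary handling and sample array are simplifying assumptions, and this is not a transcription of the modified FPGA algorithm.

```python
# Minimal sketch: median-edge-detector (MED) prediction residuals, LOCO-I style.
import numpy as np

def med_predict(img):
    """Return MED prediction residuals for an 8-bit grayscale image (borders use 0)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            a = img[i, j - 1] if j > 0 else 0                 # left neighbour
            b = img[i - 1, j] if i > 0 else 0                 # above neighbour
            c = img[i - 1, j - 1] if i > 0 and j > 0 else 0   # upper-left neighbour
            if c >= max(a, b):
                p = min(a, b)           # horizontal or vertical edge detected
            elif c <= min(a, b):
                p = max(a, b)
            else:
                p = a + b - c           # smooth region: planar prediction
            pred[i, j] = p
    return img - pred                   # residuals that would feed the entropy coder

residuals = med_predict(np.arange(64, dtype=np.uint8).reshape(8, 8))
print(residuals)
```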

  10. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.

  11. Image compression impact on quantitative angiogenesis analysis of ovarian epithelial neoplasms.

    PubMed

    Nicolosi, Jacqueline S; Yoshida, Adriana O; Sarian, Luís O Z; Silva, Cleide A M; Andrade, Liliana A L A; Derchain, Sophie F M; Vassallo, José; Schenka, André Almeida

    2012-01-01

    This study aims to investigate the impact of digital image compression on manual and semiautomatic quantification of angiogenesis in ovarian epithelial neoplasms (including benign, borderline, and malignant specimens). We examined 405 digital images (obtained from a previously validated computer-assisted analysis system), which were equally divided into 5 groups: images captured in Tagged Image File Format (TIFF), low and high compression Joint Photographic Experts Group (JPEG) formats, and low and high compression JPEG images converted from the TIFF files. Microvessel density counts and CD34 endothelial areas manually and semiautomatically determined from TIFF images were compared with those from the other 4 groups. In most cases, the correlations between TIFF and JPEG images were very high (intraclass correlation coefficients >0.8), especially for low compression JPEG images obtained by capture, regardless of the variable considered. The only exception was the use of high compression JPEG files for semiautomatic microvessel density counts, which resulted in intraclass correlation coefficients of <0.7. Nonetheless, even then, interconversion between TIFF and JPEG values could be successfully achieved using prediction models established by linear regression. Image compression does not seem to significantly compromise the accuracy of angiogenesis quantitation in ovarian epithelial tumors, although low compression JPEG images should always be preferred over high compression ones.

  12. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
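
    The flavor of the first optimization stage can be illustrated with the classical high-rate bit-allocation result, in which each sub-band's distortion is modeled as sigma^2 * 2^(-2b) and the Lagrangian/KKT solution assigns more bits to higher-variance sub-bands. The sub-band variances and bit budget below are assumptions, not values from the paper.

```python
# Minimal sketch: Lagrangian bit allocation across sub-bands under a total-bit budget.
import numpy as np

def allocate_bits(variances, total_bits):
    """Closed-form high-rate allocation, re-solved until all bit counts are nonnegative (KKT)."""
    variances = np.asarray(variances, dtype=float)
    active = np.ones(len(variances), dtype=bool)
    while True:
        v = variances[active]
        avg = total_bits / active.sum()                      # bits per active sub-band
        geo_mean = np.exp(np.mean(np.log(v)))                # geometric mean of variances
        bits = np.zeros_like(variances)
        bits[active] = avg + 0.5 * np.log2(v / geo_mean)     # water-filling-style formula
        if (bits >= 0).all():
            return bits
        active &= bits > 0                                   # drop sub-bands pushed below zero

# Example: four sub-bands with decreasing variance share a budget of 12 bits.
print(allocate_bits([100.0, 25.0, 4.0, 1.0], total_bits=12))
```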

  13. Image compression with QM-AYA adaptive binary arithmetic coder

    NASA Astrophysics Data System (ADS)

    Cheng, Joe-Ming; Langdon, Glen G., Jr.

    1993-01-01

    The Q-coder has been reported in the literature, and is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to more rapidly estimate the statistics in the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder, with a different state table, that offers balanced improvements to the QM probability estimation for the less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index change tables for Q-coder type adaptation is discussed.

  14. Television image compression and small animal remote monitoring

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Jackson, Robert W.

    1990-01-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  15. Stable and Robust Sampling Strategies for Compressive Imaging.

    PubMed

    Krahmer, Felix; Ward, Rachel

    2014-02-01

    In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence, the so-called local coherence, measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from sensing basis to sparsity basis, as opposed to bounded maximal coherence, as long as the sampling strategy is adapted accordingly.
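
    A minimal sketch of the variable-density idea follows: Fourier sampling locations are drawn with probability decaying as an inverse square power of the frequency magnitude, so low frequencies are sampled more densely. The grid size, density floor, and sample budget are illustrative assumptions.

```python
# Minimal sketch: inverse-square power-law variable-density Fourier sampling mask.
import numpy as np

rng = np.random.default_rng(2)
n, budget = 128, 2500                         # image side length and number of samples kept

fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
radius = np.sqrt(fx**2 + fy**2)               # normalized frequency magnitude
pdf = 1.0 / (1e-3 + radius) ** 2              # inverse square power-law density
pdf /= pdf.sum()

chosen = rng.choice(n * n, size=budget, replace=False, p=pdf.ravel())
mask = np.zeros(n * n, dtype=bool)
mask[chosen] = True
mask = mask.reshape(n, n)                     # True where a Fourier sample is acquired

print(mask.sum(), "samples;",
      "low-frequency sampling fraction:", round(float(mask[radius < 0.1].mean()), 3),
      "vs overall fraction:", round(budget / n**2, 3))
```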

  16. Television image compression and small animal remote monitoring

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Jackson, Robert W.

    1990-04-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  17. Image compression with directional lifting on separated sections

    NASA Astrophysics Data System (ADS)

    Zhu, Jieying; Wang, Nengchao

    2007-11-01

    A novel image compression scheme is presented in which directional sections are separated and transformed differently from the rest of the image. The discrete directions of anisotropic pixels are calculated and then grouped into compact directional sections. One-dimensional (1-D) adaptive directional lifting is applied continuously along the orientations of the directional sections, rather than applying the 1-D wavelet transform alternately in two dimensions over the whole image. For the remaining sections, 2-D adaptive lifting filters are applied according to pixel positions. The single embedded coding stream can be truncated exactly at any bit rate. Experiments show that large coefficients along directional sections are significantly reduced by the proposed transform, which makes the energy more compact than with the traditional wavelet transform. Although rate-distortion (R-D) optimization is not exploited, the PSNR is still comparable to that of JPEG-2000 with 9/7 filters at high bit rates. At low bit rates, the visual quality is better than that of JPEG-2000, since along directional sections both blurring and ringing artifacts are avoided and edges are well preserved.

  18. Adaptive wavelet transform algorithm for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

    A new locally adaptive wavelet transform algorithm is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. It is based on the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet coefficients in the case of (N~, N) lifting. Hard switching between (2, 4) and (4, 4) lifting filter outputs is performed according to an estimate of the local data activity. When the data activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed; otherwise, in flat areas, the (2, 4) decomposition coefficients are calculated. The calculations are simple enough to permit implementation of the algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect reconstruction of the processed data and good energy compaction. The algorithm was tested on different images and can be used for lossless image/signal compression.

  19. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a large number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on their edge properties. In the second, an imperialist competitive algorithm (ICA) is applied according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations to the point of being 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

  20. Integer cosine transform chip design for image compression

    NASA Astrophysics Data System (ADS)

    Ruiz, Gustavo A.; Michell, Juan A.; Buron, Angel M.; Solana, Jose M.; Manzano, Miguel A.; Diaz, J.

    2003-04-01

    The Discrete Cosine Transform (DCT) is the most widely used transform for image compression. The Integer Cosine Transform denoted ICT(10, 9, 6, 2, 3, 1) has been shown to be a promising alternative to the DCT due to its implementation simplicity, similar performance and compatibility with the DCT. This paper describes the design and implementation of an 8×8 2-D ICT processor for image compression that meets the numerical characteristics of IEEE Std. 1180-1990. The processor uses a low-latency data flow that minimizes internal memory and a parallel pipelined architecture, based on a strength-reduced ICT(10, 9, 6, 2, 3, 1) algorithm, in order to attain high throughput and continuous data flow. A prototype of the 8×8 ICT processor has been implemented using a standard cell design methodology and a 0.35-μm CMOS CSD 3M/2P 3.3 V process on a 10 mm2 die. Pipeline circuit techniques have been used to attain the maximum frequency of operation allowed by the technology, giving a critical path of 1.8 ns, which should be increased by 20% to allow for line delays, placing the estimated operating frequency at 500 MHz. The circuit includes 12,446 cells, 6,757 of which are flip-flops. Two clock signals have been distributed, an external one (fs) and an internal one (fs/2). The high number of flip-flops forced the use of a clock-skew minimization strategy, combining large buffers on the periphery with wide metal lines (clock trunks) to distribute the signals.
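
    For reference, the sketch below builds the order-8 integer matrix commonly associated with ICT(10, 9, 6, 2, 3, 1) (Cham's construction, with the cosine values of the DCT-II basis pattern replaced by small integers) and checks that its rows are mutually orthogonal; it is offered as an assumption about the transform's structure, not a transcription of the chip's data path.

```python
# Minimal sketch: order-8 integer cosine transform matrix and a separable 2-D forward ICT.
import numpy as np

a, b, c, d, e, f = 10, 9, 6, 2, 3, 1
T = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],
    [a,  b,  c,  d, -d, -c, -b, -a],
    [e,  f, -f, -e, -e, -f,  f,  e],
    [b, -d, -a, -c,  c,  a,  d, -b],
    [1, -1, -1,  1,  1, -1, -1,  1],
    [c, -a,  d,  b, -b, -d,  a, -c],
    [f, -e,  e, -f, -f,  e, -e,  f],
    [d, -c,  b, -a,  a, -b,  c, -d],
], dtype=np.int64)

# Rows are mutually orthogonal in pure integer arithmetic (ab = ac + bd + cd holds for
# 10, 9, 6, 2), so a diagonal scaling makes T an orthonormal, DCT-compatible transform.
gram = T @ T.T
assert np.count_nonzero(gram - np.diag(np.diag(gram))) == 0

block = np.arange(64, dtype=np.int64).reshape(8, 8)   # illustrative 8x8 pixel block
coeffs = T @ block @ T.T                              # separable 2-D forward ICT
print(np.diag(gram), coeffs[0, 0])
```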

  1. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-09-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
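
    The quantize-with-dither step can be sketched as below: the pixel values are divided by a step size tied to the noise, a uniform dither is added before rounding, and the same dither is subtracted on restoration. The noise level and step factor are assumptions, not the fpack defaults.

```python
# Minimal sketch: quantizing a floating-point image with subtractive dithering.
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(1000.0, 5.0, size=(64, 64))       # synthetic noisy float image (assumed)

step = 5.0 / 4.0                                      # quantization step ~ noise sigma / q
dither = rng.uniform(-0.5, 0.5, size=image.shape)     # dither; reproducible from a seed in practice

quantized = np.round(image / step + dither).astype(np.int32)   # integers to compress losslessly
restored = (quantized - dither) * step                          # approximate restoration

print("rms quantization error:", round(float(np.sqrt(np.mean((restored - image) ** 2))), 3))
```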

  2. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.

  3. Radon transform imaging: low-cost video compressive imaging at extreme resolutions

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Wang, Jian; Gupta, Mohit

    2016-05-01

    Most compressive imaging architectures rely on programmable light modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery and we can leverage the vast literature in compressive magnetic resonance imaging (MRI) to good effect. The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line-detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line-detectors including electronic shutters and pixels with large aspect ratios to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.
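
    The measurement-and-recovery loop can be emulated in software as sketched below: a test image is projected at a reduced set of angles with scikit-image's Radon transform and reconstructed with filtered back-projection (plain FBP stands in here for the paper's sparse tomographic recovery). The phantom, image size, and number of angles are assumptions.

```python
# Minimal sketch: Radon-transform measurements of a phantom and FBP reconstruction.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 30, endpoint=False)   # few angles = few "measurements"

sinogram = radon(image, theta=angles)                   # what the line detector would record
recon = iradon(sinogram, theta=angles, filter_name="ramp")

print("reconstruction RMSE:", round(float(np.sqrt(np.mean((recon - image) ** 2))), 4))
```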

  4. Recommending images of user interests from the biomedical literature

    NASA Astrophysics Data System (ADS)

    Clukey, Steven; Xu, Songhua

    2013-03-01

    Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of their interests. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time and energy consuming. A better system would be one that can automatically deliver relevant content to a researcher without having the end user manually manifest one's search intent and interests via search queries. Such a computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to the person's interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast amount of the biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.

  5. Universal interactive image data acquisition and compression technology (UNIDAC) and its dual-use applications

    NASA Astrophysics Data System (ADS)

    Novik, Dmitry A.; Addis, John E., II; Sims, Lea V.

    1995-06-01

    The limited channel capacity of telecommunication and data networks necessitates image data compression for teleimaging systems. The UNIDAC technology is based on the consolidation of lossy and lossless compression controlled interactively by a remote image analyst. It incorporates the positive features of separate lossy (high compression ratio) and lossless (errorless image quality) image data compression without their associated weaknesses. The high compression ratio achieved by the UNIDAC technology rests on the elimination of positional statistical redundancy in addition to spatial statistical redundancy, and on the sequential nature of visual analysis. Positional statistical redundancy reflects the fact that the data essential for image analysis is contained not in the whole image but in parts of it, i.e. the window (area) of interest. The image analyst uses professional knowledge to locate the areas of interest within the lossy compressed image, and the selected positional information for the window of interest is transmitted back to the image source. The lossless compressed/decompressed residual image data is then used to update the image in the window of interest to its original lossless, errorless image quality. The potential capabilities of the UNIDAC technology are illustrated by its application to teleimaging systems such as teleradiology, telepathology, telesurveillance, and telereconnaissance.

  6. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography cannot distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image quality.

  7. Recommendations

    ERIC Educational Resources Information Center

    Brazelton, G. Blue; Renn, Kristen A.; Stewart, Dafina-Lazarus

    2015-01-01

    In this chapter, the editors provide a summary of the information shared in this sourcebook about the success of students who have minoritized identities of sexuality or gender and offer recommendations for policy, practice, and further research.

  9. Bit-plane-channelized hotelling observer for predicting task performance using lossy-compressed images

    NASA Astrophysics Data System (ADS)

    Schmanske, Brian M.; Loew, Murray H.

    2003-05-01

    A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
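
    The observer model at the core of this technique can be sketched compactly. The example below is a generic channelized Hotelling computation on synthetic data, with random channel templates standing in for the subband bit-plane channels of the paper; it shows how channel responses yield a detectability index.

```python
# Illustrative channelized Hotelling observer on synthetic noise-limited images.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan, n_img = 64 * 64, 10, 200

signal = np.zeros(n_pix)
signal[:50] = 0.5                                       # hypothetical low-contrast signal
absent = rng.normal(size=(n_img, n_pix))                # signal-absent backgrounds
present = absent + signal                               # signal-present images

U = rng.normal(size=(n_pix, n_chan))                    # channel templates (columns)
v_a, v_p = absent @ U, present @ U                      # channel responses per image

S = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
dv = v_p.mean(axis=0) - v_a.mean(axis=0)
w = np.linalg.solve(S, dv)                              # Hotelling template in channel space
d_prime = np.sqrt(dv @ w)                               # channelized Hotelling detectability
print(f"d' = {d_prime:.2f}")
```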

  10. Research on application for integer wavelet transform for lossless compression of medical image

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Li, Quan; Long, Quan

    2003-09-01

    This paper proposes an approach based on using the lifting scheme to construct an integer wavelet transform, whose purpose is to realize lossless compression of images. Research on its application to medical images, a software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that this method improves the compression ratio and resolution.
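
    A lifting-based integer wavelet transform of the kind referred to above can be illustrated with the reversible 5/3 transform used in JPEG 2000. The sketch below is a generic 1-D example, not the authors' specific filter; the assertion at the end demonstrates the exact integer invertibility that makes lossless compression possible.

```python
# Reversible (integer) 5/3 lifting transform on a 1-D signal of even length.
import numpy as np

def legall53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left even + right even) / 2)
    right = np.append(even[1:], even[-1])          # symmetric extension at the end
    d = odd - ((even + right) >> 1)
    # Update step: approximation = even + floor((left detail + right detail + 2) / 4)
    left = np.insert(d[:-1], 0, d[0])              # symmetric extension at the start
    s = even + ((left + d + 2) >> 2)
    return s, d

def legall53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)               # undo the update step exactly
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)                # undo the predict step exactly
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, 64)
s, d = legall53_forward(x)
assert np.array_equal(legall53_inverse(s, d), x)   # lossless: perfect reconstruction
```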

  11. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.

    PubMed

    Chen, Jing; Wang, Yongtian; Wu, Hanxiao

    2012-10-29

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase-coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which work directly on the compressively sampled images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and to perform foreground detection. Each motion target in the compressive sampling domain is sparsely represented over a compressive feature dictionary spanned by target templates and noise templates. An ℓ1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the ℓ1 tracker without any optimization.
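
    Solving for the sparse coefficients of the template dictionary is typically done with an ℓ1 solver. The sketch below is a generic iterative soft-thresholding (ISTA) loop on synthetic data; the dictionary, signal, and regularization weight are illustrative stand-ins, not the authors' tracker.

```python
# Generic ISTA solver for  min_c 0.5*||D c - y||^2 + lam*||c||_1  on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 50))              # dictionary: target + noise templates
c_true = np.zeros(50)
c_true[[3, 17, 40]] = [1.0, -0.7, 0.5]      # sparse ground-truth coefficients
y = D @ c_true + 0.01 * rng.normal(size=100)

lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
c = np.zeros(50)
for _ in range(500):
    grad = D.T @ (D @ c - y)
    z = c - step * grad
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("recovered support:", np.flatnonzero(np.abs(c) > 0.1))
```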

  12. Clipping service: ATR-based SAR image compression

    NASA Astrophysics Data System (ADS)

    Rodkey, David L.; Welby, Stephen P.; Hostetler, Larry D.

    1996-06-01

    Future wide-area surveillance systems such as the Tier II+ and Tier III- unmanned aerial vehicles (UAVs) will be gathering vast amounts of high-resolution SAR data for transmission to ground stations and subsequent analysis by image interpreters to provide critical and timely information to field commanders. This extremely high data rate presents two problems. First, the wide-bandwidth data link channels needed to transmit this imagery to a ground station are both expensive and difficult to obtain. Second, the volume of data generated by the system will quickly saturate any human-based analysis system without some degree of computer assistance. The ARPA-sponsored Clipping Service program seeks to apply automatic target recognition (ATR) technology to perform 'intelligent' data compression on this imagery in a way that provides a product on the ground that preserves the essential information for further processing, either by the military analyst or by a ground-based ATR system. An ATR system on board the UAV would examine the imagery data stream in real time, determining regions of interest. Imagery from those regions would be transmitted to the ground in a manner that preserves most or all of the information contained in the original image. The remainder of the imagery would be transmitted to the ground with lesser fidelity. This paper presents a system analysis deriving the operational requirements for the Clipping Service system and examines candidate architectures.

  13. Venous compression syndromes: clinical features, imaging findings and management

    PubMed Central

    Liu, R; Oliveira, G R; Ganguli, S; Kalva, S

    2013-01-01

    Extrinsic venous compression is caused by compression of the veins in tight anatomic spaces by adjacent structures, and is seen in a number of locations. Venous compression syndromes, including Paget–Schroetter syndrome, Nutcracker syndrome, May–Thurner syndrome and popliteal venous compression, will be discussed. These syndromes are usually seen in young, otherwise healthy individuals, and can lead to significant overall morbidity. Aside from clinical findings and physical examination, diagnosis can be made with ultrasound, CT or MR venography, or conventional venography. Symptoms and the haemodynamic significance of the compression determine the ideal treatment method. PMID:23908347

  14. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  15. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-10-01

    The paper presents an improved version of our new method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for such a purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and this contribution decreases most quickly among all possible bases. Therefore, we lossily compress every KL basis function with Embedded Zerotree Wavelet (EZW) coding, with essentially different losses depending on each function's contribution to the images. The paper presents a new fast, low-memory algorithm for constructing the KL basis for compression of correlated image ensembles, which enables our OICKL system to work on common hardware. We also present a procedure for determining the optimal losses of the KL basis functions caused by compression. It uses a modified EZW coder that produces the whole PSNR (bit rate) curve during a single compression pass.

  16. Methods for efficient compressing and archiving of medical digital motion images

    NASA Astrophysics Data System (ADS)

    Okura, Yasuhiko; Inamura, Kiyonari; Matsumura, Yasushi; Inada, Hiroshi

    2000-05-01

    Efficient storage of medical motion images such as digital cine angiography requires an optimized compression ratio for motion images and efficient mass-storage media. Many options exist for compressing motion images; MPEG-2 is one of the de facto standards among motion-image compression techniques. In order to find the optimized compression ratio using MPEG-2, both subjective and objective evaluations were carried out, based on grading the severity of coronary vessel stenosis. From these results, we found that the optimized compression ratio using MPEG-2 is 1:80. When DVD-RAM media are employed to store medical motion images, the storage cost is slightly higher than with CD-R media.

  17. Rapid MR spectroscopic imaging of lactate using compressed sensing

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.

    2015-03-01

    Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. An MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of k-space 'on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to 1X (fully sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in Matlab™, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
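
    The variable-density pseudo-random under-sampling mentioned in item (2) can be mimicked by drawing phase-encode lines with a probability that decays away from the k-space centre. The sketch below is an illustrative mask generator only; the density law and acceleration factor are assumptions, not the authors' sampling schedule.

```python
# Variable-density pseudo-random undersampling mask for a 1-D phase-encode axis.
import numpy as np

rng = np.random.default_rng(7)
n_pe, accel = 64, 4                                    # phase encodes, target acceleration
k = np.abs(np.arange(n_pe) - n_pe // 2) / (n_pe // 2)  # normalized distance from k-space centre
pdf = (1.0 - k) ** 2 + 0.05                            # sample the centre densely, edges sparsely
pdf *= (n_pe / accel) / pdf.sum()                      # scale so the expected count is n_pe/accel
pdf = np.clip(pdf, 0.0, 1.0)

mask = rng.random(n_pe) < pdf                          # True = acquire this phase-encode line
print(f"acquired {mask.sum()} of {n_pe} lines (~{n_pe / mask.sum():.1f}x acceleration)")
```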

  18. Interlabial masses in little girls: review and imaging recommendations

    SciTech Connect

    Nussbaum, A.R.; Lebowitz, R.L.

    1983-07-01

    When an interlabial mass is seen on physical examination in a little girl, there is often confusion about its etiology, its implications, and what should be done next. Five common interlabial masses, which superficially are strikingly similar, include a prolapsed ectopic ureterocele, a prolapsed urethra, a paraurethral cyst, hydro(metro)colpos, and rhabdomyosarcoma of the vagina (botryoid sarcoma). A prolapsed ectopic ureterocele occurs in white girls as a smooth mass which protrudes from the urethral meatus so that urine exits circumferentially. A prolapsed urethra occurs in black girls and resembles a donut with the urethral meatus in the center. A paraurethral cyst is smaller and displaces the meatus, so that the urinary stream is eccentric. Hydro(metro)colpos from hymenal imperforation presents as a smooth mass that fills the vaginal introitus, as opposed to the introital grapelike cluster of masses of botryoid sarcoma. Recommendations for efficient imaging are presented.

  19. Method for low-light-level image compression based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa

    2001-10-01

    Low-light-level (LLL) image communication has received more and more attention in the night vision field with the growing importance of image communication. LLL image compression is the key to LLL image wireless transmission. The LLL image, which differs from the common visible-light image, has its own special characteristics. In this paper we propose a wavelet-based still-image compression algorithm suitable for LLL images. Because the information in the LLL image is significant, near-lossless data compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm. We encode the lowest-frequency subband data using DPCM (Differential Pulse Code Modulation), so that all the information in the lowest-frequency subband is kept. Considering the HVS (Human Visual System) characteristics and the LLL image characteristics, we first detect the edge contours in the high-frequency subband images using a template and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are set to avoid redundant scanning and repeated encoding of significant wavelet coefficients in the above coding. The experimental results show that the decoded image quality is good and the encoding time is shorter than that of the original EZW algorithm.
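
    The DPCM step applied to the lowest-frequency subband can be written in a few lines. The sketch below shows generic previous-sample DPCM with an exact lossless round trip; it is not the paper's full coder, and the stand-in LL coefficients are synthetic.

```python
# Previous-sample DPCM of a (flattened) lowest-frequency subband: lossless round trip.
import numpy as np

def dpcm_encode(x):
    x = np.asarray(x, dtype=np.int64)
    residual = np.empty_like(x)
    residual[0] = x[0]                 # first sample is sent verbatim
    residual[1:] = x[1:] - x[:-1]      # small differences entropy-code well
    return residual

def dpcm_decode(residual):
    return np.cumsum(residual)         # the running sum undoes the differencing exactly

ll = np.random.default_rng(3).integers(0, 1024, size=256)   # stand-in LL coefficients
assert np.array_equal(dpcm_decode(dpcm_encode(ll)), ll)
```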

  20. Medical image processing using novel wavelet filters based on atomic functions: optimal medical image compression.

    PubMed

    Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

    2011-01-01

    An analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. In this way we are able to determine which type of wavelet filter works better for compression of such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic wavelets W9/7 (used in standard JPEG2000), Daubechies8 and Symlet8, as well as for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup(2)(t), and eup(t). The comparison results show significantly better performance of the novel wavelets, which is justified by experiments and by the study of the key properties.

  1. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
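
    A factored spectral representation of the kind described can be obtained with a truncated SVD of the unfolded image. The sketch below is a generic numpy illustration on a synthetic low-rank cube; it is not the patented block algorithm, and the sizes and rank are arbitrary.

```python
# Spectral compression of a multivariate image via a truncated factored representation.
import numpy as np

rng = np.random.default_rng(5)
rows, cols, channels, k = 64, 64, 128, 8
# Synthetic cube: a few spectral "endmembers" mixed spatially, plus a little noise.
cube = rng.random((rows * cols, k)) @ rng.random((k, channels))
cube += 0.01 * rng.normal(size=cube.shape)

U, s, Vt = np.linalg.svd(cube, full_matrices=False)
scores, loadings = U[:, :k] * s[:k], Vt[:k]            # factored (spatial x spectral) form
approx = scores @ loadings                             # analysis can run on the factors alone

ratio = cube.size / (scores.size + loadings.size)
err = np.linalg.norm(cube - approx) / np.linalg.norm(cube)
print(f"compression ~{ratio:.1f}x, relative error {err:.3e}")
```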

  2. Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.

    PubMed

    Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen

    2016-07-07

    Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which significantly degrades the visual quality. Traditional denoising methods, which usually assume that the noise is independent and identically distributed, are not suitable for content-dependent compression noise. In this paper, we propose a unified framework for content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate the framework of compression noise reduction based upon low-rank decomposition. Compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to the compression noise level and the singular values. We analyze the relationship of image statistical characteristics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from statistics in both domains jointly with the quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only obviously improves the quality of compressed images as a post-processing step, but is also helpful for computer vision tasks as a pre-processing method.
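
    The core low-rank step, soft-thresholding the singular values of a group of similar patches, can be sketched as follows. The threshold here is a fixed illustrative value rather than the adaptive, noise-level-dependent thresholds estimated in the paper.

```python
# Low-rank denoising of one group of similar patches via singular-value soft-thresholding.
import numpy as np

def low_rank_denoise(patch_group, tau):
    """patch_group: (n_patches, patch_pixels) matrix of vectorized similar patches."""
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)            # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(2)
clean = np.outer(rng.random(32), rng.random(64))        # rank-1 group of "similar patches"
noisy = clean + 0.05 * rng.normal(size=clean.shape)     # stand-in for compression noise
denoised = low_rank_denoise(noisy, tau=1.0)
print("noise energy before/after:",
      np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```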

  3. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  4. High dynamic range image compression by optimizing tone mapped image quality index.

    PubMed

    Ma, Kede; Yeganeh, Hojatollah; Zeng, Kai; Wang, Zhou

    2015-10-01

    Tone mapping operators (TMOs) aim to compress high dynamic range (HDR) images to low dynamic range (LDR) ones so as to visualize HDR images on standard displays. Most existing TMOs were demonstrated on specific examples without being thoroughly evaluated using well-designed and subject-validated image quality assessment models. A recently proposed tone mapped image quality index (TMQI) made one of the first attempts at objective quality assessment of tone mapped images. Here, we propose a substantially different approach to TMO design. Instead of using any predefined systematic computational structure for tone mapping (such as analytic image transformations and/or explicit contrast/edge enhancement), we directly navigate in the space of all images, searching for the image that optimizes an improved TMQI. In particular, we first improve the two building blocks in TMQI (the structural fidelity and statistical naturalness components), leading to a TMQI-II metric. We then propose an iterative algorithm that alternately improves the structural fidelity and statistical naturalness of the resulting image. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs. Meanwhile, these results also validate the superiority of TMQI-II over TMQI.

  5. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  6. Lossless compression of medical images using Burrows-Wheeler Transformation with Inversion Coder.

    PubMed

    Preston, Collin; Arnavut, Ziya; Koc, Basar

    2015-08-01

    Medical imaging is a quickly growing industry in which highly efficient lossless compression algorithms are needed to reduce storage space and transmission rates for large, high-resolution medical images. Because lossy compression risks losing vital information, medical imaging cannot utilize it, and lossless compression is imperative. While several authors have investigated lossless compression of medical images, the Burrows-Wheeler Transformation with an Inversion Coder (BWIC) has not been examined. Our investigation shows that BWIC runs in linear time and yields better compression rates than well-known image coders, such as JPEG-LS and JPEG-2000.

  7. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-01

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression levels.

  8. JPEG2000 compressed domain image retrieval using context labels of significance coding and wavelet autocorrelogram

    NASA Astrophysics Data System (ADS)

    Angkura, Navin; Aramvith, Supavadee; Siddhichai, Supakorn

    2007-09-01

    JPEG has been a widely recognized image compression standard for many years. Nevertheless, it faces its own limitations, as compressed image quality degrades significantly at lower bit rates. This limitation has been addressed in JPEG2000, which also tends to replace JPEG, especially in storage and retrieval applications. To efficiently and practically index and retrieve compressed-domain images from a database, several image features can be extracted directly in the compressed domain without having to fully decompress the JPEG2000 images. JPEG2000 utilizes the wavelet transform, which is widely used to analyze and describe the texture patterns of an image. Another advantage of the wavelet transform is that one can analyze textures at multiple resolutions and classify directional texture pattern information into each directional subband: the HL subband carries horizontal frequency information, the LH subband vertical frequency information, and the HH subband diagonal frequency information. Nevertheless, many wavelet-based image retrieval approaches make poor use of the directional subband information obtained by wavelet transforms for efficient classification of directional texture patterns in retrieved images. This paper proposes a novel image retrieval technique in the JPEG2000 compressed domain that uses the image significance map to compute an image context in order to construct the image index. Experimental results indicate that the proposed method can effectively differentiate and categorize images with different directional texture information. In addition, an integration of the proposed features with the wavelet autocorrelogram also showed improvement in retrieval performance, measured using ANMRR (Average Normalized Modified Retrieval Rank), compared to other known methods.

  9. Clinical performance of contrast enhanced abdominal pediatric MRI with fast combined parallel imaging compressed sensing reconstruction.

    PubMed

    Zhang, Tao; Chowdhury, Shilpy; Lustig, Michael; Barth, Richard A; Alley, Marcus T; Grafendorfer, Thomas; Calderon, Paul D; Robb, Fraser J L; Pauly, John M; Vasanawala, Shreyas S

    2014-07-01

    To deploy clinically a combined parallel imaging and compressed sensing method with coil compression that achieves rapid image reconstruction, and to assess its clinical performance in contrast-enhanced abdominal pediatric MRI. With Institutional Review Board approval and informed patient consent/assent, 29 consecutive pediatric patients were recruited. Dynamic contrast-enhanced MRI was acquired on a 3 Tesla scanner using a dedicated 32-channel pediatric coil and a three-dimensional SPGR sequence, with pseudo-random undersampling at a high acceleration (R = 7.2). Undersampled data were reconstructed with three methods: a traditional parallel imaging method and a combined parallel imaging compressed sensing method with and without coil compression. The three sets of images were evaluated independently and blindly by two radiologists at one sitting, for overall image quality and delineation of anatomical structures. Wilcoxon tests were performed to test the hypothesis that there was no significant difference in the evaluations, and interobserver agreement was analyzed. Fast reconstruction with coil compression did not deteriorate image quality. The mean score of structural delineation of the fast reconstruction was 4.1 on a 5-point scale, significantly better (P < 0.05) than traditional parallel imaging (mean score 3.1). Fair to substantial interobserver agreement was reached in structural delineation assessment. A fast combined parallel imaging compressed sensing method is feasible in a pediatric clinical setting. Preliminary results suggest it may improve structural delineation over parallel imaging. © 2013 Wiley Periodicals, Inc.

  10. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  11. Prediction of coefficients for lossless compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Ruedin, Ana M. C.; Acevedo, Daniel G.

    2005-08-01

    We present a lossless compressor for multispectral Landsat images that exploits interband and intraband correlations. The compressor operates on blocks of 256 x 256 pixels and performs two kinds of prediction. For bands 1, 2, 3, 4, 5, 6.2 and 7, the compressor performs an integer-to-integer wavelet transform, which is applied to each block separately. Wavelet coefficients that have not yet been encoded are predicted by means of a linear combination of already coded coefficients that belong to the same orientation and spatial location in the same band, together with coefficients at the same location in other spectral bands. A fast block classification is performed in order to use the best weights for each landscape. The prediction errors, or differences, are finally coded with an entropy-based coder. For band 6.1 we do not use wavelet transforms; instead, a median edge detector is applied to predict each pixel from the neighbouring pixels and the equalized pixel from band 6.2. This technique better exploits the great similarity between the histograms of bands 6.1 and 6.2. The prediction differences are finally coded with a context-based entropy coder. The two kinds of prediction reduce both spatial and spectral correlations, increasing the compression rates. Our compressor has shown itself to be superior to the lossless compressors Winzip, LOCO-I, PNG and JPEG2000.
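
    The median edge detector used for band 6.1 is the MED predictor familiar from LOCO-I/JPEG-LS. The sketch below shows the generic intra-band predictor only; the cross-band term using the equalized band 6.2 pixel is not reproduced.

```python
# Median edge detector (MED) predictor as used in LOCO-I / JPEG-LS.
import numpy as np

def med_predict(a, b, c):
    """Predict pixel x from its left (a), upper (b) and upper-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)       # an edge above or to the left of x
    if c <= min(a, b):
        return max(a, b)
    return a + b - c           # smooth region: planar prediction

img = np.array([[10, 12, 13],
                [11, 50, 52],
                [12, 51, 90]])
pred = med_predict(img[2, 1], img[1, 2], img[1, 1])   # a=51, b=52, c=50
print("prediction:", pred, "residual:", img[2, 2] - pred)
```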

  12. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  13. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation

  14. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  15. Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches

    NASA Astrophysics Data System (ADS)

    Clunie, David A.

    2000-05-01

    Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A, JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

  16. Image compression in morphometry studies requiring 21 CFR Part 11 compliance: procedure is key with TIFFs and various JPEG compression strengths.

    PubMed

    Tengowski, Mark W

    2004-01-01

    This study aims to compare the integrity and reproducibility of measurements created from uncompressed and compressed digital images in order to implement compliance with 21 CFR Part 11 for image analysis studies executed using 21 CFR Part 58 compliant capture systems. Images of a 400-mesh electron microscope grid and H&E stained rat liver tissue were captured on an upright microscope with a digital camera using commercially available analysis software. Digital images were stored as either uncompressed TIFFs or in one of five different levels of JPEG compression. The grid images were analyzed with automatic detection of bright objects, while the liver images were segmented using color cube-based morphometry techniques, both with commercially available image analysis software. When comparing the feature-extracted measurements from the TIFF uncompressed to the JPEG compressed images, the data suggest that JPEG compression does not alter the accuracy or reliability of reproducing individual data point measurements in all but the highest compression levels. There is, however, discordance if the initial measure was obtained with a TIFF format and subsequently saved as one of the JPEG levels, suggesting that the use of compression must precede feature extraction. It is a common practice in software packages to work with TIFF uncompressed images. However, this study suggests that the use of JPEG compression as part of the analysis work flow was an acceptable practice for these images and features. Investigators applying image file compression to other organ images will need to validate the utility of image compression in their work flow. A procedure to digitally acquire and JPEG compress images prior to image analysis has the potential to reduce file archiving demands without compromising reproducibility of data.

  17. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate and requires less recognition time in the compressed domain.

  18. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with JPEG (Joint Photographic Experts Group) compression. The results indicate that the proposed compression algorithm is feasible and effective at achieving a higher compression ratio while ensuring encoding and image quality, and can fully meet the needs of color image storage and transmission in daily life.

  19. Comparison of image compression viability for lossy and lossless JPEG and Wavelet data reduction in coronary angiography.

    PubMed

    Brennecke, R; Bürgel, U; Rippin, G; Post, F; Rupprecht, H J; Meyer, J

    2001-02-01

    Lossless or lossy compression of coronary angiogram data can reduce the enormous amounts of data generated by coronary angiographic imaging. The recent International Study of Angiographic Data Compression (ISAC) assessed the clinical viability of lossy Joint Photographic Expert Group (JPEG) compression but was unable to resolve two related questions: (A) the performance of lossless modes of compression in coronary angiography and (B) the performance of newer lossy wavelet algorithms. This present study seeks to supply some of this information. The performance of several lossless image compression methods was measured in the same set of images as used in the ISAC study. For the assessment of the relative image quality of lossy JPEG and wavelet compression, the observers ranked the perceived image quality of computer-generated coronary angiograms compressed with wavelet compression relative to the same images with JPEG compression. This ranking allowed the matching of compression ratios for wavelet compression with the clinically viable compression ratios for the JPEG method as obtained in the ISAC study. The best lossless compression scheme (LOCO-I) offered a mean compression ratio (CR) of 3.80:1. The quality of images compressed with the lossy wavelet-based method at CR = 10:1 and 20:1 was comparable to JPEG compression at CR = 6:1 and 10:1, respectively. The study has shown that lossless compression can exceed the CR of 2:1 usually quoted. For lossy compression, the range of clinically viable compression ratios can probably be extended by 50 to 100% when applying wavelet compression algorithms as compared to JPEG compression. These results can motivate a larger clinical study.

  20. Depth-dependent swimbladder compression in herring Clupea harengus observed using magnetic resonance imaging.

    PubMed

    Fässler, S M M; Fernandes, P G; Semple, S I K; Brierley, A S

    2009-01-01

    Changes in swimbladder morphology in an Atlantic herring Clupea harengus with pressure were examined by magnetic resonance imaging of a dead fish in a purpose-built pressure chamber. Swimbladder volume changed with pressure according to Boyle's Law, but compression in the lateral aspect was greater than in the dorsal aspect. This uneven compression has a reduced effect on acoustic backscattering than symmetrical compression and would elicit less pronounced effects of depth on acoustic biomass estimates of C. harengus.

  1. Passive forgery detection using discrete cosine transform coefficient analysis in JPEG compressed images

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Shian; Tsay, Jyh-Jong

    2016-05-01

    Passive forgery detection aims to detect traces of image tampering without the need for prior information. With the increasing demand for image content protection, passive detection methods able to identify image tampering areas are increasingly needed. However, most current passive approaches either work only for image-level JPEG compression detection and cannot localize region-level forgery, or suffer from high-false detection rates in localizing altered regions. This paper proposes an effective approach based on discrete cosine transform coefficient analysis for the detection and localization of altered regions of JPEG compressed images. This approach can also work with altered JPEG images resaved in JPEG compressed format with different quality factors. Experiments with various tampering methods such as copy-and-paste, image completion, and composite tampering, show that the proposed approach is able to effectively detect and localize altered areas and is not sensitive to image contents such as edges and textures.

  2. Best wavelet packet basis for joint image deblurring-denoising and compression

    NASA Astrophysics Data System (ADS)

    Dherete, Pierre; Durand, Sylvain; Froment, Jacques; Rouge, Bernard

    2003-01-01

    We propose a unique mathematical framework to deblur, denoise and compress natural images. Images are decomposed in a wavelet packet basis adapted both to the deblurring filter and to the denoising process. Effective denoising is performed by thresholding small wavelet packet coefficients, while deblurring is obtained by multiplying the coefficients with a deconvolution kernel. This representation is compressed by quantizing the remaining coefficients and by coding the values using a context-based entropy coder. We present examples of such treatments on a satellite image chain. The results show a significant improvement compared to separate treatments with an up-to-date compression approach.
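
    The threshold-then-code part of this framework can be sketched with an ordinary wavelet decomposition standing in for the adapted wavelet packet basis, assuming PyWavelets is available; the deconvolution weighting and the context-based entropy coder are omitted.

```python
# Simplified denoise-by-thresholding in a wavelet domain (PyWavelets assumed); an
# ordinary wavelet basis stands in for the adapted wavelet packet basis of the paper.
import numpy as np
import pywt

rng = np.random.default_rng(4)
image = np.kron(rng.random((16, 16)), np.ones((8, 8)))     # piecewise-constant test image
noisy = image + 0.05 * rng.normal(size=image.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=3)
thr = 0.1
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(band, thr, mode="soft") for band in detail)  # kill small coefficients
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")
print("RMSE noisy vs denoised:",
      np.sqrt(np.mean((noisy - image) ** 2)),
      np.sqrt(np.mean((denoised[:128, :128] - image) ** 2)))
```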

  3. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  4. The development of an underwater pulsed compressive line sensing imaging system

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Gong, Sue; Hou, Weilin; Dalgleish, Fraser R.; Caimi, Frank M.; Vuorenkoski, Anni K.

    2017-05-01

    Compressive Line Sensing (CLS) imaging system is a compressive sensing (CS) based imaging system with the goal of developing a compact and resource efficient imaging system for the degraded visual environment. In the CLS system, each line segment is sensed independently; however, the correlation among the adjacent lines (sources) is exploited via the joint sparsity in the distributed compressing sensing model during signal reconstruction. Several different CLS prototypes have been developed. This paper discusses the development of a pulsed CLS system. Initial experimental results using this system in a turbid water environment are presented.

  5. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.

  6. A perceptual evaluation of JPEG 2000 image compression for digital mammography: contrast-detail characteristics.

    PubMed

    Suryanarayanan, Sankararaman; Karellas, Andrew; Vedantham, Srinivasan; Waldrop, Sandra M; D'Orsi, Carl J

    2004-03-01

    In this investigation the effect of JPEG 2000 compression on the contrast-detail (CD) characteristics of digital mammography images was studied using an alternative forced choice (AFC) technique. Images of a contrast-detail phantom, acquired using a clinical full-field digital mammography system, were compressed using a commercially available software product (JPEG 2000). Data compression was achieved at ratios of 1:1, 10:1, 20:1, and 30:1 and the images were reviewed by seven observers on a high-resolution display. Psychophysical detection characteristics were first computed by fitting perception data using a maximum-likelihood technique from which CD curves were derived at 50%, 62.5%, and 75% threshold levels. Statistical analysis indicated no significant difference in the perception of mean disk thickness up to 20:1 compression except for disk diameter of 1 mm. All other compression combinations exhibited significant degradation in CD characteristics.

  7. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  8. Optimization of block size for DCT-based medical image compression.

    PubMed

    Singh, S; Kumar, V; Verma, H K

    2007-01-01

    In view of the increasing importance of medical imaging in healthcare and the large amount of image data to be transmitted and stored, there is a growing need for an efficient medical image compression method that preserves the critical diagnostic information at higher compression. The discrete cosine transform (DCT) is a popular transform used in many practical image/video compression systems because of its high compression performance and good computational efficiency. Because the computational burden of a full-frame DCT would be heavy, the image is usually divided into non-overlapping sub-images, or blocks, for processing. This paper aims to identify the optimum block size for the compression of CT, ultrasound, and X-ray images. Three conflicting requirements are considered, namely processing time, compression ratio, and the quality of the reconstructed image. The quantitative comparison of various block sizes has been carried out on the basis of a benefit-to-cost ratio (BCR) and a reconstruction quality score (RQS). Experimental results are presented that verify the optimality of the 16 x 16 block size.
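
    The trade-off the paper explores can be illustrated with a minimal sketch (not the authors' code, and without their BCR or RQS metrics): a block DCT with uniform coefficient quantization is timed at several block sizes, and the reconstruction PSNR is reported for each.

    ```python
    import time
    import numpy as np
    from scipy.fft import dctn, idctn

    def block_dct_roundtrip(img: np.ndarray, block: int, q_step: float = 8.0):
        """DCT, uniform quantization, and inverse DCT on non-overlapping blocks."""
        h, w = img.shape
        out = np.zeros_like(img, dtype=np.float64)
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = img[y:y+block, x:x+block].astype(np.float64)
                coef = dctn(tile, norm="ortho")
                coef = np.round(coef / q_step) * q_step  # lossy quantization step
                out[y:y+block, x:x+block] = idctn(coef, norm="ortho")
        return out

    img = np.random.randint(0, 256, (256, 256)).astype(np.float64)  # synthetic image
    for block in (8, 16, 32, 64):
        t0 = time.perf_counter()
        rec = block_dct_roundtrip(img, block)
        dt = time.perf_counter() - t0
        mse = np.mean((img - rec) ** 2)
        psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
        print(f"block {block:3d}: {dt*1000:6.1f} ms, PSNR {psnr:5.1f} dB")
    ```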

  9. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoder consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for training, and then subdivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.
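
    The prediction-plus-residual pipeline can be illustrated with a minimal sketch; the published neural-network predictor is not reproduced here, so a simple causal-neighbor (median) predictor and zlib stand in for the prediction and entropy-encoding blocks.

    ```python
    import numpy as np
    import zlib  # stand-in for the entropy-encoding block

    def median_predict(img: np.ndarray) -> np.ndarray:
        """Causal-neighbor (median) predictor standing in for the neural-network predictor."""
        img = img.astype(np.int32)
        a = np.roll(img, 1, axis=1)                        # left neighbor
        b = np.roll(img, 1, axis=0)                        # upper neighbor
        c = np.roll(np.roll(img, 1, axis=0), 1, axis=1)    # upper-left neighbor
        pred = np.clip(a + b - c, np.minimum(a, b), np.maximum(a, b))
        pred[0, :], pred[:, 0] = 0, 0                      # no causal context on first row/column
        return pred

    yy, xx = np.mgrid[0:128, 0:128]
    img = ((yy + xx) // 2).astype(np.uint8)                # smooth synthetic "image"
    residual = img.astype(np.int32) - median_predict(img)
    packed = zlib.compress(residual.astype(np.int16).tobytes(), level=9)
    ratio = img.nbytes / len(packed)
    print(f"compression rate ~ {ratio:.2f}:1 (predictor + residual restores the image exactly)")
    ```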

  10. Using a Visual Discrimination Model for the Detection of Compression Artifacts in Virtual Pathology Images

    PubMed Central

    Johnson, Jeffrey P.; Yan, Michelle; Roehrig, Hans; Graham, Anna R.; Weinstein, Ronald S.

    2013-01-01

    A major issue in telepathology is the extremely large and growing size of digitized “virtual” slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. “Visually lossless” compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5–12 times the data reduction of reversible methods. PMID:20875970

  11. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
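
    The visual discrimination model itself cannot be reproduced from the abstract, but the comparison it is set against can be sketched: compute PSNR and SSIM between original and degraded tiles and examine how much the scores spread across image content. Here synthetic tiles and additive noise stand in for virtual-slide regions and JPEG 2000 compression.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(1)

    def distort(tile: np.ndarray) -> np.ndarray:
        """Stand-in for JPEG 2000 compression at a fixed near-threshold bit rate."""
        noisy = tile.astype(np.float64) + rng.normal(0, 3.0, tile.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    tiles = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(8)]
    psnrs, ssims = [], []
    for tile in tiles:
        rec = distort(tile)
        psnrs.append(peak_signal_noise_ratio(tile, rec, data_range=255))
        ssims.append(structural_similarity(tile, rec, data_range=255))

    # The study's point: a metric that tracks loss visibility should vary little across tiles
    print(f"PSNR spread: {np.std(psnrs):.2f} dB, SSIM spread: {np.std(ssims):.4f}")
    ```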

  12. Image data compression using a new floating-point digital signal processor.

    PubMed

    Siegel, E L; Templeton, A W; Hensley, K L; McFadden, M A; Baxter, K G; Murphey, M D; Cronin, P E; Gesell, R G; Dwyer, S J

    1991-08-01

    A new dual-ported, floating-point digital signal processor has been evaluated for compressing 512 x 512 and 1,024 x 1,024 digital radiographic images using a full-frame, two-dimensional discrete cosine transform (2D-DCT). The floating-point digital signal processor operates at 49.5 million floating-point operations per second (MFLOPS). The level of compression can be changed by varying four parameters in the lossy compression algorithm. Throughput times were measured for both 2D-DCT compression and decompression. For a 1,024 x 1,024 x 10-bit image with a compression ratio of 316:1, the total throughput time (compression plus decompression) was 75.73 seconds. For a digital fluorography image of 1,024 x 1,024 x 8 bits and a compression ratio of 26:1, the total throughput time was 63.23 seconds. For a computed tomography image of 512 x 512 x 12 bits and a compression ratio of 10:1, the throughput time was 19.65 seconds.
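
    On current hardware, the analogous throughput measurement can be sketched as follows (an illustration only, unrelated to the DSP implementation): a full-frame 2D-DCT with crude coefficient truncation standing in for the lossy coding step, timed separately for compression and decompression.

    ```python
    import time
    import numpy as np
    from scipy.fft import dctn, idctn

    def time_roundtrip(shape, keep_fraction=0.1):
        """Time a full-frame 2D-DCT 'compression' (coefficient truncation) and its inverse."""
        img = np.random.randint(0, 4096, shape).astype(np.float64)  # e.g. 12-bit CT data

        t0 = time.perf_counter()
        coef = dctn(img, norm="ortho")
        thresh = np.quantile(np.abs(coef), 1.0 - keep_fraction)
        coef[np.abs(coef) < thresh] = 0.0  # crude stand-in for quantization and coding
        t_compress = time.perf_counter() - t0

        t0 = time.perf_counter()
        _ = idctn(coef, norm="ortho")
        t_decompress = time.perf_counter() - t0
        return t_compress, t_decompress

    for shape in ((512, 512), (1024, 1024)):
        tc, td = time_roundtrip(shape)
        print(f"{shape[0]}x{shape[1]}: compress {tc*1000:.1f} ms, decompress {td*1000:.1f} ms")
    ```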