Science.gov

Sample records for image compression recommendation

  1. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  2. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
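
    The following sketch illustrates the general structure described above (a 2-D discrete wavelet transform followed by progressive bit-plane coding). It is not the CCSDS 122.0 algorithm itself; it assumes the PyWavelets package, an arbitrary biorthogonal wavelet, and a simplified bit-plane scan chosen only for brevity.

    ```python
    # Minimal sketch of the general idea behind the CCSDS recommendation:
    # a 2-D DWT followed by bit-plane coding of the transformed data.
    # NOT the CCSDS 122.0 algorithm; PyWavelets and the wavelet choice are
    # assumptions made only for illustration.
    import numpy as np
    import pywt

    def dwt_bitplane_encode(image, levels=3, kept_bitplanes=6):
        """Transform an image and emit bit-planes from most to least significant."""
        coeffs = pywt.wavedec2(image.astype(np.float64), "bior4.4", level=levels)
        flat, slices = pywt.coeffs_to_array(coeffs)           # one 2-D array of coefficients
        q = np.rint(flat).astype(np.int64)                    # integer approximation
        signs = np.signbit(q)
        mag = np.abs(q)
        top = int(mag.max()).bit_length()                     # most significant bit-plane
        planes = []
        for b in range(top - 1, max(top - 1 - kept_bitplanes, -1), -1):
            planes.append(((mag >> b) & 1).astype(np.uint8))  # progressive refinement data
        return planes, signs, slices, top

    # Truncating the list of emitted planes trades fidelity for data volume,
    # which loosely mimics the rate/quality control described above.
    img = np.random.default_rng(0).integers(0, 255, (64, 64))
    planes, signs, slices, top = dwt_bitplane_encode(img)
    print(len(planes), "bit-planes emitted, top plane index", top - 1)
    ```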

  3. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
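
    For reference, a minimal sketch of the NMSE figure of merit used above is given below. The exact normalization in the dissertation is not stated here, so normalizing by the energy of the original image is assumed, and the test data are synthetic.

    ```python
    # Minimal sketch of a normalized mean-square-error (NMSE) measure between an
    # original image and its reconstruction. Normalizing by the energy of the
    # original image is an assumed convention for illustration only.
    import numpy as np

    def nmse(original, reconstructed):
        original = original.astype(np.float64)
        reconstructed = reconstructed.astype(np.float64)
        diff = original - reconstructed                 # the "difference image"
        return np.sum(diff ** 2) / np.sum(original ** 2)

    rng = np.random.default_rng(1)
    x = rng.integers(0, 4096, (512, 512)).astype(np.float64)   # stand-in for a 12-bit CT slice
    x_hat = x + rng.normal(0.0, 5.0, x.shape)                  # stand-in for a lossy reconstruction
    print(f"NMSE = {nmse(x, x_hat):.2e}")
    ```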

  4. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal with an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  5. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946
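
    A simplified numerical sketch of the double random phase encoding step is given below. The paper implements DRPE optically in a Mach-Zehnder interferometer together with single-pixel compressive holography; this sketch only reproduces the common digital model of DRPE (random phase masks in the input and Fourier planes) and is not the authors' optical system.

    ```python
    # Simplified numerical model of double random phase encoding (DRPE):
    # two random phase masks, one in the input plane and one in the Fourier
    # plane, turn the image into a noise-like complex field. Illustration only.
    import numpy as np

    rng = np.random.default_rng(42)

    def drpe_encrypt(image, phase1, phase2):
        field = image * np.exp(1j * 2 * np.pi * phase1)                   # input-plane mask
        spectrum = np.fft.fft2(field) * np.exp(1j * 2 * np.pi * phase2)   # Fourier-plane mask
        return np.fft.ifft2(spectrum)                                     # noise-like ciphertext

    def drpe_decrypt(cipher, phase1, phase2):
        spectrum = np.fft.fft2(cipher) * np.exp(-1j * 2 * np.pi * phase2)
        return np.fft.ifft2(spectrum) * np.exp(-1j * 2 * np.pi * phase1)

    img = rng.random((128, 128))
    p1, p2 = rng.random((128, 128)), rng.random((128, 128))   # the secret keys
    cipher = drpe_encrypt(img, p1, p2)
    recovered = drpe_decrypt(cipher, p1, p2).real
    print("max reconstruction error:", np.abs(recovered - img).max())
    ```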

  6. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  7. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
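
    A minimal sketch of the fill step is shown below: non-edge pixels are obtained by solving Laplace's equation with the edge pixels held fixed. The patent specifies a multi-grid solver; plain Jacobi relaxation on a single grid is substituted here purely for brevity, and the edge mask is a toy pattern.

    ```python
    # Sketch of the "fill" step: non-edge pixels are relaxed toward the solution
    # of Laplace's equation while edge pixels stay fixed as boundary conditions.
    # Plain Jacobi iteration replaces the patent's multi-grid solver for brevity.
    import numpy as np

    def laplace_fill(values, edge_mask, iters=500):
        """values: image array; edge_mask: True where pixels are edge pixels."""
        filled = np.where(edge_mask, values.astype(np.float64), 0.0)
        for _ in range(iters):
            neighbors = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                         np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4.0
            filled = np.where(edge_mask, values, neighbors)   # keep edge pixels fixed
        return filled

    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, (64, 64)).astype(np.float64)
    edges = np.zeros_like(img, dtype=bool)
    edges[::8, :] = True                          # toy "edge" pattern for illustration
    filled_edge_array = laplace_fill(img, edges)
    difference_array = img - filled_edge_array    # the residual that is coded separately
    print("residual energy:", float(np.sum(difference_array ** 2)))
    ```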

  8. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  9. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
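
    The sketch below illustrates only the first stage described above, separating the color filter array mosaic into its R, G and B sample planes. An RGGB Bayer layout is assumed, and the interpolation/down-sampling, JPEG coding and residual entropy-coding stages are omitted.

    ```python
    # Sketch of splitting a color-filter-array mosaic into its R, G and B
    # sample planes. An RGGB Bayer layout is an assumption for illustration.
    import numpy as np

    def split_bayer_rggb(mosaic):
        r  = mosaic[0::2, 0::2]                    # red samples
        g1 = mosaic[0::2, 1::2]                    # green samples on red rows
        g2 = mosaic[1::2, 0::2]                    # green samples on blue rows
        b  = mosaic[1::2, 1::2]                    # blue samples
        return r, (g1, g2), b

    mosaic = np.random.default_rng(3).integers(0, 4096, (480, 640), dtype=np.uint16)
    r, (g1, g2), b = split_bayer_rggb(mosaic)
    print(r.shape, g1.shape, g2.shape, b.shape)    # each plane is quarter-size
    ```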

  10. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

Due to increasing amounts of data in radiology, methods for image compression appear both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume in comparison with reversible compression algorithms but is, however, accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss due to rounding errors and reduction of high-frequency information occurs, this results in relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this is within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as it would comply with § 28 of the Roentgen Act (RöV). PMID:23456043

  11. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme for the creation of nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions from compressed data at ratios of 1:20.

  12. Compressive optical imaging systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao

    Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an l1-regularized minimization problem. In this work, we review the theoretical background of the CS theory and present two hardware implementation schemes for CSISs, including a single pixel detector based scheme and an array detector based scheme. The first implementation scheme is suitable for acquiring Two-Dimensional (2D) spatial information of the imaging scene. We demonstrate the feasibility of this implementation scheme by developing a single pixel camera, a multispectral imaging system, and an optical sectioning microscope for fluorescence microscopy. The array detector based scheme is suitable for hyperspectral imaging applications, wherein both the spatial and spectral information of the imaging scene are of interest. We demonstrate the feasibility of this scheme by developing a Digital Micromirror Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement processes on the Three-Dimensional (3D) spatial/spectral information of the imaging scene. Tens of spectral images can be reconstructed from the DMD-SSI system simultaneously without any mechanical or temporal scanning processes.
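
    The sketch below shows the kind of l1-regularized reconstruction referred to above on a synthetic 1-D sparse signal. The solver (plain iterative soft thresholding) and all problem sizes are illustrative choices, not the reconstruction method used in the dissertation.

    ```python
    # Minimal sketch of recovering a sparse signal from incoherent CS measurements
    # by l1-regularized least squares, solved with plain ISTA (iterative soft
    # thresholding). Sizes and parameters are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(7)
    n, m, k = 256, 80, 8                              # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
    Phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))     # random measurement patterns
    y = Phi @ x_true                                  # compressive measurements

    lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(n)
    for _ in range(2000):                             # ISTA iterations
        grad = Phi.T @ (Phi @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold

    print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```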

  13. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective method for adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in JAVA.

  14. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: the pixels have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 intra-frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  15. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  16. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  17. Lossy Compression of ACS images

    NASA Astrophysics Data System (ADS)

    Cox, Colin

    2004-01-01

    A method of compressing images stored as floating point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this which offer significant compression ratios are lossy and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.

  18. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  19. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

Image compression frequently supports reduced storage requirements in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  20. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm-specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  1. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

Recent lossless still image compression formats are powerful tools for the compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the properties of the coded image. Astronomical data form a special class of images; besides general image properties, they also have some specific characteristics which are unique. If a new coder is able to correctly use the knowledge of these special properties, it should achieve superior performance on this specific class of images, at least in terms of the compression ratio. In this work, a novel lossless astronomical image data compression method is presented. The achievable compression ratio of this new coder is compared to the theoretical lossless compression limit and also to the recent compression standards of astronomy and general multimedia.

  2. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  3. Hyperspectral imaging using compressed sensing

    NASA Astrophysics Data System (ADS)

    Ramirez I., Gabriel Eduardo; Manian, Vidya B.

    2012-06-01

Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require much additional computational power, and it permits physical implementation at the sensor with spatial light modulators such as the Texas Instruments (TI) digital micromirror device (DMD). The DMD can be used as a random measurement matrix: reflecting the image off the DMD is equivalent to taking an inner product between the image's individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that the signal's recovery or reconstruction from the measurements does require a higher level of computation. This makes the prospect of working with the compressed version of the signal in implementations such as detection or classification much more efficient. If an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyperspectral image applications are focused precisely on these areas, and would greatly benefit from a compression technique like CS that could help reduce the light sensor down to a single pixel, lowering costs associated with the cameras while reducing the large amounts of data generated by all the bands. The present paper shows an implementation of CS using a single-pixel hyperspectral sensor, and compares the reconstructed images to those obtained through the use of a regular sensor.
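
    The measurement model described above can be sketched as follows: each DMD pattern acts as a row of a random binary measurement matrix, and one detector reading per pattern and per spectral band yields the compressed data. The array sizes and the 0/1 pattern model are illustrative assumptions, not the paper's configuration.

    ```python
    # Sketch of single-pixel CS measurement with a DMD: each binary mirror
    # pattern selects a subset of pixels toward the detector, so one reading
    # is the inner product of the scene with that pattern. Per-band application
    # gives compressed data for a hyperspectral cube. Sizes are illustrative.
    import numpy as np

    rng = np.random.default_rng(11)
    rows, cols, bands = 64, 64, 32
    cube = rng.random((rows, cols, bands))                   # toy hyperspectral scene
    n_pixels, n_meas = rows * cols, 1024                     # 4:1 compression in this toy case

    patterns = rng.integers(0, 2, (n_meas, n_pixels))        # DMD mirror states (0/1)
    measurements = patterns @ cube.reshape(n_pixels, bands)  # one column of readings per band
    print("raw cube size:", cube.size, "-> measurements:", measurements.size)
    ```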

  4. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

The data in medical images are very large, and therefore compression is essential for the storage and/or transmission of these images. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach, an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as the lossy compressor, while Huffman coding is used for lossless compression. The quality of the images is evaluated by comparison with standard compression techniques available. PMID:17281110
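
    The two-stage idea (lossy compression plus lossless coding of the error image) can be sketched as below. The paper's NNVQ and Huffman stages are replaced by a coarse scalar quantizer and zlib purely for illustration; the point of the sketch is that the lossy layer plus the coded residual reconstruct the original exactly.

    ```python
    # Sketch of "lossy + lossless residual" coding. A coarse scalar quantizer
    # stands in for the paper's neural-network VQ, and zlib stands in for the
    # Huffman coder; both substitutions are assumptions for illustration only.
    import numpy as np
    import zlib

    x = np.arange(256)
    image = (np.add.outer(x, x) // 2).astype(np.uint8)   # smooth synthetic test image

    step = 16
    lossy = (image // step) * step                  # stand-in lossy stage (coarse quantization)
    residual = image.astype(np.int16) - lossy       # error image, small-magnitude values

    lossy_bytes = zlib.compress((image // step).astype(np.uint8).tobytes(), 9)
    residual_bytes = zlib.compress(residual.astype(np.int8).tobytes(), 9)
    print("original bytes:", image.nbytes,
          "compressed bytes:", len(lossy_bytes) + len(residual_bytes))

    # Exact reconstruction: the lossy layer plus the residual recovers every pixel.
    restored = lossy + np.frombuffer(zlib.decompress(residual_bytes),
                                     dtype=np.int8).reshape(image.shape)
    assert np.array_equal(restored.astype(np.uint8), image)
    ```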

  5. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which can achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  6. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

The compression technique calculates an activity estimator for each segment of an image line. The estimator is used in conjunction with the allowable bits per line, N, to determine the number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to an adaptive variable-length coder, which selects the optimum transmission code. The method increases the capacity of broadcast and cable television transmissions and helps reduce the size of the storage medium for video and digital audio recordings.

  7. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  8. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor employs novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable comparable statistics to detection based on uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating the plume detection on compressed hyperspectral images is presented.

  9. Correlation and image compression for limited-bandwidth CCD.

    SciTech Connect

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  10. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  11. Compressive imaging in scattering media.

    PubMed

    Durán, V; Soldevila, F; Irles, E; Clemente, P; Tajahuerce, E; Andrés, P; Lancis, J

    2015-06-01

One challenge that has long held the attention of scientists is that of clearly seeing objects hidden by turbid media, such as smoke, fog or biological tissue, which has major implications in fields such as remote sensing or early diagnosis of diseases. Here, we combine structured incoherent illumination and bucket detection for imaging an absorbing object completely embedded in a scattering medium. A sequence of low-intensity microstructured light patterns is launched onto the object, whose image is accurately reconstructed through the light fluctuations measured by a single-pixel detector. Our technique is noninvasive, does not require coherent sources, raster scanning or time-gated detection, and benefits from the compressive sensing strategy. As a proof of concept, we experimentally retrieve the image of a transilluminated target both sandwiched between two holographic diffusers and embedded in a 6 mm-thick sample of chicken breast. PMID:26072804

  12. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  13. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet. Therefore, it is necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. In this paper, some analysis of the DCT is given. First, the principle of the DCT is presented, since it is widely used to realize image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated using Matlab, and the quality of the compressed picture is analyzed. The DCT is not the only algorithm for realizing image compression, and further algorithms can be expected to yield compressed images of high quality. The technology of image compression will be widely used in networks and communications in the future.
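
    A minimal sketch of the blockwise DCT pipeline discussed above is given below in Python rather than Matlab: 8x8 blocks are transformed, uniformly quantized and inverse transformed, and the count of nonzero coefficients stands in for the Huffman coding stage. SciPy's DCT routines and the uniform quantization step size are assumptions of the sketch.

    ```python
    # Sketch of DCT-based image compression: 8x8 blocks, 2-D DCT, uniform
    # quantization, inverse DCT. SciPy provides the DCT; the single step size
    # replaces a full perceptual quantization matrix for brevity.
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_block(block, q=20):
        coeffs = dctn(block.astype(np.float64) - 128.0, norm="ortho")
        return np.rint(coeffs / q).astype(np.int32)           # uniform quantization

    def decompress_block(qcoeffs, q=20):
        return idctn(qcoeffs * float(q), norm="ortho") + 128.0

    x = np.arange(64)
    image = ((np.add.outer(x, x) * 2) % 256).astype(np.uint8)  # synthetic 64x64 test image

    nonzero = 0
    recon = np.empty_like(image, dtype=np.float64)
    for i in range(0, 64, 8):
        for j in range(0, 64, 8):
            q = compress_block(image[i:i+8, j:j+8])
            nonzero += np.count_nonzero(q)                     # proxy for coded data volume
            recon[i:i+8, j:j+8] = decompress_block(q)

    mse = np.mean((recon - image) ** 2)
    print(f"nonzero coefficients: {nonzero}/{image.size}, MSE: {mse:.2f}")
    ```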

  14. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  15. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. For the improvement of compression ratios noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.

  16. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have the common property that they use past data to encode future data. This is done either via taking differences, context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  17. Spectral image compression for data communications

    NASA Astrophysics Data System (ADS)

    Hauta-Kasari, Markku; Lehtonen, Juha; Parkkinen, Jussi P. S.; Jaeaeskelaeinen, Timo

    2000-12-01

We report a technique for spectral image compression to be used in the field of data communications. The spectral domain of the images is represented by a low-dimensional component image set, which is used to obtain an efficient compression of the high-dimensional spectral data. The component images are compressed using a technique similar to the way JPEG- and MPEG-type compressions subsample the chrominance channels. The spectral compression is based on Principal Component Analysis (PCA) combined with the color image transmission coding technique of 'chromatic channel subsampling' of the component images. The component images are subsampled using 4:2:2-, 4:2:0-, and 4:1:1-based compressions. In addition, we extended the test to larger block sizes and a larger number of component images than in the original JPEG and MPEG standards. A total of 50 natural spectral images were used as test material in our experiments. Several error measures of the compression are reported. The same compressions are done using Independent Component Analysis and the results are compared with PCA. These methods give a good compression ratio while keeping the visual quality of color still good. Quantitative comparisons between the original and reconstructed spectral images are presented.
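
    The PCA stage described above can be sketched as follows: the spectra are projected onto a few principal components, so each pixel is stored as a short coefficient vector plus a shared spectral basis. The chromatic-channel-style subsampling of the component images and the ICA comparison are not reproduced, and the toy cube dimensions are arbitrary.

    ```python
    # Sketch of PCA-based spectral compression: project each pixel's spectrum
    # onto a small number of principal components and keep only the component
    # images plus the shared basis. All sizes are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(9)
    rows, cols, bands, n_components = 32, 32, 61, 8

    cube = rng.random((rows, cols, bands))                     # toy spectral image
    spectra = cube.reshape(-1, bands)
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = vt[:n_components]                                  # principal spectral components

    component_images = (spectra - mean) @ basis.T              # low-dimensional representation
    reconstructed = (component_images @ basis + mean).reshape(rows, cols, bands)

    ratio = spectra.size / (component_images.size + basis.size + mean.size)
    print(f"compression ratio ~ {ratio:.1f}:1, "
          f"RMS error {np.sqrt(np.mean((reconstructed - cube) ** 2)):.4f}")
    ```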

  18. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  19. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.

  20. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  1. An efficient medical image compression scheme.

    PubMed

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method for images can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression. PMID:17280962
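
    A minimal sketch of the two-stage structure (DPCM decorrelation followed by entropy coding) appears below. A previous-pixel predictor is assumed, and the zeroth-order entropy of the residual is used as a stand-in for the paper's Huffman stage.

    ```python
    # Sketch of DPCM decorrelation followed by entropy coding. A previous-pixel
    # predictor and a zeroth-order entropy estimate are assumptions standing in
    # for the paper's predictor and Huffman coder.
    import numpy as np

    def dpcm_residual(image):
        img = image.astype(np.int32)
        pred = np.empty_like(img)
        pred[:, 0] = 0                      # first column predicted as 0
        pred[:, 1:] = img[:, :-1]           # predict each pixel from its left neighbour
        return img - pred

    def entropy_bits_per_pixel(values):
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    x = np.arange(256)
    image = (np.add.outer(x, x) // 2).astype(np.uint8)       # smooth synthetic test image
    residual = dpcm_residual(image)
    print("raw entropy      :", round(entropy_bits_per_pixel(image), 2), "bits/pixel")
    print("residual entropy :", round(entropy_bits_per_pixel(residual), 2), "bits/pixel")
    ```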

  2. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  3. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  4. Segmentation-based CT image compression

    NASA Astrophysics Data System (ADS)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

The existing image compression standards, like JPEG and JPEG 2000, compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, it can give rise to a more efficient coding scheme. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts. They have distinctly different statistics. Hence coding them separately will result in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the 1st order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding has an overhead of one table per segment, but the overhead is minimal. Lossless compression of the image based on segmentation resulted in a reduction of the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If it is allowed to delete the diagnostically irrelevant portions, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.

  5. Compressive Hyperspectral Imaging With Side Information

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Tsai, Tsung-Han; Zhu, Ruoyu; Llull, Patrick; Brady, David; Carin, Lawrence

    2015-09-01

A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.
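
    The coding-and-superposition forward model described above can be sketched as follows: each spectral band is modulated by a binary coded aperture, sheared by a wavelength-dependent shift standing in for the dispersive element, and the bands are summed into one 2-D measurement. The dictionary-learning inversion with RGB side information is not reproduced, and all sizes are illustrative.

    ```python
    # Sketch of a coded-and-superposed spectral measurement: code each band,
    # shift it with wavelength, and sum the bands into a single 2-D image.
    # Sizes, the binary code, and the unit shift per band are assumptions.
    import numpy as np

    rng = np.random.default_rng(13)
    rows, cols, bands = 64, 64, 16
    cube = rng.random((rows, cols, bands))                        # toy hyperspectral datacube
    code = rng.integers(0, 2, (rows, cols)).astype(np.float64)    # binary coded aperture

    measurement = np.zeros((rows, cols + bands - 1))
    for k in range(bands):
        coded_band = cube[:, :, k] * code                # per-band coding
        measurement[:, k:k + cols] += coded_band         # wavelength-dependent shift + superposition

    print("datacube voxels:", cube.size, "-> 2-D measurement pixels:", measurement.size)
    ```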

  6. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices. PMID:24848945

  7. Lossy compression in nuclear medicine images.

    PubMed Central

    Rebelo, M. S.; Furuie, S. S.; Munhoz, A. C.; Moura, L.; Melo, C. P.

    1993-01-01

    The goal of image compression is to reduce the amount of data needed to represent images. In medical applications, it is not desirable to lose any information, and thus lossless compression methods are often used. However, medical imaging systems have intrinsic noise associated with them. The application of a lossy technique, which acts as a low-pass filter, reduces the amount of data at a higher rate without any noticeable loss in the information contained in the images. We have compressed nuclear medicine images using the discrete cosine transform algorithm. The decompressed images were considered reliable for visual inspection. Furthermore, a parameter computed from these images showed no discernible change from the results obtained using the original uncompressed images. PMID:8130593
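
    A toy version of DCT-based lossy compression (transform, discard small coefficients, inverse transform) can be written in a few lines with SciPy. This is only a sketch of the general idea; the noise model, retention fraction, and synthetic image are assumptions and not the study's actual codec.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic noisy image standing in for a nuclear medicine acquisition
# (Poisson counting noise on a smooth activity distribution; all values assumed).
rng = np.random.default_rng(2)
activity = np.outer(np.hanning(128), np.hanning(128)) * 200.0
noisy = rng.poisson(activity).astype(np.float64)

# Forward 2-D DCT, then keep only the largest 5% of coefficients; discarding the
# small, mostly high-frequency coefficients acts much like a low-pass filter.
coeffs = dctn(noisy, norm="ortho")
keep = 0.05
threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
truncated = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

# Inverse DCT gives the "decompressed" image; report the resulting distortion.
restored = idctn(truncated, norm="ortho")
rmse = float(np.sqrt(np.mean((restored - noisy) ** 2)))
print(f"kept {keep:.0%} of DCT coefficients, RMSE vs. the noisy input: {rmse:.2f}")
```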

  8. Reversible intraframe compression of medical images.

    PubMed

    Roos, P; Viergever, M A; van Dijke, M A; Peters, J H

    1988-01-01

    The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. PMID:18230486

  9. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  10. Wavelet compression efficiency investigation for medical images

    NASA Astrophysics Data System (ADS)

    Moryc, Marcin; Dziech, Wiera

    2006-03-01

    Medical images are acquired or stored digitally. These images can be very large in size and number, and compression can increase the speed of transmission and reduce the cost of storage. In this paper, the approximation of medical images using a transform method based on wavelet functions is investigated. The tested clinical images are taken from multiple anatomical regions and modalities (computed tomography (CT), magnetic resonance (MR), ultrasound, mammography, and X-ray images). To compress the medical images, a threshold criterion has been applied. The mean square error (MSE) is used as a measure of approximation quality. Plots of the MSE versus compression percentage and the approximated images are included for comparison of approximation efficiency.
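
    The threshold criterion described above can be sketched with PyWavelets: keep only the largest-magnitude wavelet coefficients and measure the MSE of the resulting approximation. The wavelet, decomposition depth, retention fractions, and synthetic test image are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_approximation(image, keep_fraction, wavelet="db2", level=3):
    """Apply a threshold criterion: keep only the largest-magnitude fraction
    of the wavelet coefficients and reconstruct the approximated image."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= threshold, arr, 0.0)
    restored = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet=wavelet)
    return restored[: image.shape[0], : image.shape[1]]

# Synthetic smooth test image standing in for a clinical slice (an assumption).
rng = np.random.default_rng(3)
image = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), axis=0), axis=1)

# MSE as a function of the fraction of retained coefficients.
for keep in (0.20, 0.10, 0.05, 0.02):
    approx = wavelet_approximation(image, keep)
    mse = float(np.mean((approx - image) ** 2))
    print(f"keep {keep:5.0%} of coefficients -> MSE {mse:.4f}")
```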

  11. Robust retrieval from compressed medical image archives

    NASA Astrophysics Data System (ADS)

    Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin

    2005-04-01

    This paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding content-based biomedical image retrieval. The proposed method for treating compressed medical archives follows the JPEG compression standard and exploits an algorithm based on spatial analysis of the amplitudes and locations of the image's cosine spectrum coefficients. Experiments on a modality-specific archive of osteoarticular images show the robustness of the method based on the measured spectral spatial statistics. The features, which are based on the cosine spectrum coefficient values, could satisfy queries across different modalities (MRI, US, etc.) that emphasize texture and edge properties. In particular, it has been shown that there is a wealth of information in the AC coefficients of the DCT transform, which can be utilized to support fast content-based image retrieval. The computational cost of the proposed signature generation algorithm is low. The influence of conventional and state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has also been studied. We found no significant differences in retrieval efficiency for non-compressed and JPEG2000-compressed images, even at the lowest bit rate tested.

  12. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data. PMID:16948299

  13. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data need to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  14. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  15. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  16. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods-JPEG, JPEG2000, and HEVC. PMID:27214878

  17. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and for two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
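
    The core idea of differencing adjacent picture elements before entropy coding can be illustrated with a few lines of NumPy. This is only a sketch of the intuition; the paper's actual double delta coding, background skipping, and extension code are not reproduced, and the synthetic scan line and its statistics are assumptions.

```python
import numpy as np

def entropy(values):
    """First-order entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic, slowly varying scan line standing in for one row of a satellite picture.
rng = np.random.default_rng(4)
row = (np.cumsum(rng.integers(-2, 3, size=4096)) + 128).astype(np.int32)

delta = np.diff(row, prepend=row[:1])             # first differences
double_delta = np.diff(delta, prepend=delta[:1])  # differences of the differences

# Which representation wins depends on the source statistics; correlated data
# usually yield lower-entropy difference sequences than the raw samples.
print(f"raw samples  : {entropy(row):.2f} bits/sample")
print(f"delta        : {entropy(delta):.2f} bits/sample")
print(f"double delta : {entropy(double_delta):.2f} bits/sample")
```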

  18. Information preserving image compression for archiving NMR images.

    PubMed

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y

    1991-01-01

    This paper presents a result on information-preserving compression of NMR images for archiving purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, Lynch-Davisson coding with a block size of 64, applied to prediction error sequences in the Gray code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. Predictive coding with a third-order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is a further, albeit small, step toward improving information-preserving image compression for medical applications. PMID:1913579
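
    The Gray-code bit-plane representation mentioned above is easy to reproduce. The sketch below converts a synthetic 12-bit image to its reflected binary (Gray) code and extracts the bit planes; the Lynch-Davisson and linear predictive coders themselves are not implemented, and the test image is an assumption.

```python
import numpy as np

def to_gray_code(values):
    """Convert unsigned integers to their reflected binary (Gray) code."""
    return values ^ (values >> 1)

def bit_planes(values, bits=12):
    """Stack of bit planes, most significant plane first."""
    return np.stack([(values >> b) & 1 for b in range(bits - 1, -1, -1)])

# Synthetic 12-bit "NMR" image (an assumption; not the paper's data).
rng = np.random.default_rng(5)
image = np.clip(np.cumsum(rng.integers(-8, 9, size=(64, 64)), axis=1) + 2048,
                0, 4095).astype(np.uint16)

gray = to_gray_code(image)
planes = bit_planes(gray, bits=12)

# In a Gray-coded image, adjacent intensity levels differ in a single bit, which
# tends to make the individual bit planes smoother and easier to compress.
for i, plane in enumerate(planes):
    transitions = np.count_nonzero(np.diff(plane.astype(np.int8), axis=1))
    print(f"plane {i:2d} (MSB first): {transitions:5d} horizontal transitions")
```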

  19. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage, so it is necessary to find an effective method to compress them. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, an integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-performance Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the DSP implementation of the mixed algorithm runs much faster than the same algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  20. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection would take 3 hours. Hence compression is needed, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that appears better founded in theory. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
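
    A loose sketch of the transform/quantization/entropy-coding pipeline behind WSQ is shown below, using a generic 9/7-tap biorthogonal wavelet from PyWavelets, uniform scalar quantization, and an entropy estimate in place of the Huffman coder. The synthetic ridge pattern, decomposition depth, and quantization step are assumptions; this is not the WSQ codec.

```python
import numpy as np
import pywt

# Synthetic fingerprint-like ridge pattern (an assumption; not FBI data).
rng = np.random.default_rng(6)
yy, xx = np.mgrid[:256, :256]
image = 128 + 100 * np.sin(0.3 * xx + 5 * np.sin(0.02 * yy)) + rng.normal(0, 5, (256, 256))

# Subband decomposition. WSQ prescribes a specific 9/7 filter bank and a 64-subband
# split; a generic multi-level DWT with the 9/7-tap 'bior4.4' wavelet stands in here.
coeffs = pywt.wavedec2(image, wavelet="bior4.4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

# Uniform scalar quantization of the transform coefficients.
step = 8.0
quantized = np.round(arr / step).astype(np.int32)

# WSQ then Huffman-codes the quantized indices; a first-order entropy estimate
# of the index stream stands in for that stage.
_, counts = np.unique(quantized, return_counts=True)
p = counts / counts.sum()
rate = float(-(p * np.log2(p)).sum())
print(f"estimated rate: {rate:.2f} bits per quantized coefficient")

# Dequantize and invert the transform to inspect the loss introduced.
restored = pywt.waverec2(
    pywt.array_to_coeffs(quantized * step, slices, output_format="wavedec2"),
    wavelet="bior4.4")[:256, :256]
print(f"RMSE after quantization: {np.sqrt(np.mean((restored - image) ** 2)):.2f}")
```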

  1. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  2. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  3. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.

  4. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: (1) a universal fixed-to-variable codebook; and (2) a row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm achieves a compression ratio of nearly 87%, which exceeds published results, including JBIG2.
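
    One plausible reading of the row and column elimination step is shown below: all-white rows and columns of a binary textual image are removed and recorded so that the page can be restored exactly. The paper's universal fixed-to-variable codebook is not reproduced, and the toy glyph layout and white-equals-zero convention are assumptions.

```python
import numpy as np

def eliminate_blank_rows_cols(binary_image):
    """Drop all-white rows and columns, returning the reduced image and the masks
    needed to restore the original layout (white = 0 here, by assumption)."""
    row_mask = binary_image.any(axis=1)
    col_mask = binary_image.any(axis=0)
    reduced = binary_image[np.ix_(row_mask, col_mask)]
    return reduced, row_mask, col_mask

def restore(reduced, row_mask, col_mask):
    """Reinsert the eliminated blank rows and columns."""
    out = np.zeros((row_mask.size, col_mask.size), dtype=reduced.dtype)
    out[np.ix_(row_mask, col_mask)] = reduced
    return out

# Toy binary "textual image": sparse glyph-like marks on a white page.
page = np.zeros((120, 200), dtype=np.uint8)
page[30:40, 20:180:7] = 1
page[70:80, 50:150:5] = 1

reduced, rows, cols = eliminate_blank_rows_cols(page)
print("original:", page.shape, "-> reduced:", reduced.shape)
assert np.array_equal(restore(reduced, rows, cols), page)  # lossless round trip
```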

  5. Recommended frequency of ABPI review for patients wearing compression hosiery.

    PubMed

    Furlong, Winnie

    2015-11-11

    This paper is a sequel to the article 'How often should patients in compression have ABPI recorded?' (Furlong, 2013). Monitoring ankle brachial pressure index (ABPI) is essential, especially in those patients wearing compression hosiery, as it can change over time (Simon et al, 1994; Pankhurst, 2004), particularly in the presence of peripheral arterial disease (PAD). Leg ulceration caused by venous disease requires graduated compression (Wounds UK, 2002; Anderson, 2008). Once healed, compression hosiery is required to help prevent ulcer recurrence (Vandongen and Stacey, 2000). The Royal College of Nursing (RCN, 2006) guidelines suggest 3-monthly reviews, including ABPI, with no further guidance. Wounds UK (2002) suggests that patients who have ABPI<0.9, diabetes, reduced mobility or symptoms of claudication should have at least 3/12 Doppler, and that those in compression hosiery without complications who are able to report should have vascular assessment yearly. PMID:26559232

  6. Compressive sensing image sensors-hardware implementation.

    PubMed

    Dadkhah, Mohammadreza; Deen, M Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  7. Compressive line sensing underwater imaging system

    NASA Astrophysics Data System (ADS)

    Ouyang, B.; Dalgleish, F. R.; Vuorenkoski, A. K.; Caimi, F. M.; Britton, W.

    2013-05-01

    Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. In recent years, the FAU/HBOI OVOL has conducted extensive research on CS-based active electro-optical imaging systems in scattering media such as the underwater environment. The unique features of such systems in comparison with traditional underwater electro-optical imaging systems are discussed. Building upon previous work on a frame-based CS underwater laser imager concept, which is better suited to hover-capable platforms such as the Hovering Autonomous Underwater Vehicle (HAUV), this paper proposes a compressive line sensing underwater imaging (CLSUI) system that is more compatible with conventional underwater platforms, where images are formed in whiskbroom fashion. Simulation results are discussed.

  8. Optical Data Compression in Time Stretch Imaging

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Jalali, Bahram

    2015-01-01

    Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems. PMID:25906244

  9. Directly Estimating Endmembers for Compressive Hyperspectral Images

    PubMed Central

    Xu, Hongwei; Fu, Ning; Qiao, Liyan; Peng, Xiyuan

    2015-01-01

    The large volume of hyperspectral images (HSI) generated creates huge challenges for transmission and storage, making data compression more and more important. Compressive Sensing (CS) is an effective data compression technology that shows that when a signal is sparse in some basis, only a small number of measurements are needed for exact signal recovery. Distributed CS (DCS) takes advantage of both intra- and inter- signal correlations to reduce the number of measurements needed for multichannel-signal recovery. HSI can be observed by the DCS framework to reduce the volume of data significantly. The traditional method for estimating endmembers (spectral information) first recovers the images from the compressive HSI and then estimates endmembers via the recovered images. The recovery step takes considerable time and introduces errors into the estimation step. In this paper, we propose a novel method, by designing a type of coherent measurement matrix, to estimate endmembers directly from the compressively observed HSI data via convex geometry (CG) approaches without recovering the images. Numerical simulations show that the proposed method outperforms the traditional method with better estimation speed and better (or comparable) accuracy in both noisy and noiseless cases. PMID:25905699

  10. Spatial versus spectral compression ratio in compressive sensing of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    August, Yitzhak; Vachman, Chaim; Stern, Adrian

    2013-05-01

    Compressive hyperspectral imaging is based on the fact that hyperspectral data is highly redundant. However, there is no symmetry between the compressibility of the spatial and spectral domains, and that should be taken into account for optimal compressive hyperspectral imaging system design. Here we present a study of the influence of the ratio between the compression in the spatial and spectral domains on the performance of a 3D separable compressive hyperspectral imaging method we recently developed.

  11. Compressive Deconvolution in Medical Ultrasound Imaging.

    PubMed

    Chen, Zhouye; Basarab, Adrian; Kouame, Denis

    2016-03-01

    Interest in compressive sampling for ultrasound imaging has recently been evaluated extensively by several research teams. Depending on the application setup, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast, and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data. PMID:26513780

  12. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  13. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  14. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
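
    The effect of colormap sorting on index correlation can be demonstrated with a toy example. Below, a smooth gray-scale scene is stored through a scrambled palette, and sorting the palette by luminance restores small index differences between neighboring pixels; the palette size, the luminance sort key, and the statistic printed are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Underlying scene: a spatially smooth intensity image quantized to 64 levels.
smooth = np.clip(np.cumsum(rng.integers(-1, 2, size=(64, 64)), axis=1) + 32, 0, 63)

# Color-mapped storage: the 64 gray levels sit in the palette in scrambled order,
# so numerically adjacent indices can point to very different colors.
gray_of_index = rng.permutation(64)                  # palette entry j holds gray level gray_of_index[j]
palette = np.stack([gray_of_index * 4] * 3, axis=1)  # 64 RGB (gray) palette entries
index_of_gray = np.empty(64, dtype=np.int64)
index_of_gray[gray_of_index] = np.arange(64)
indices = index_of_gray[smooth]                      # the pixel array actually stored

def mean_abs_jump(idx):
    """Average |difference| between horizontally adjacent stored index values."""
    return float(np.mean(np.abs(np.diff(idx, axis=1))))

# Sort the palette (here by luminance) and remap the index image so that
# numerically close indices point to similar colors again.
order = np.argsort(palette @ np.array([0.299, 0.587, 0.114]))
rank = np.empty_like(order)
rank[order] = np.arange(order.size)
remapped = rank[indices]

print("adjacent-pixel index jump, scrambled colormap:", mean_abs_jump(indices))
print("adjacent-pixel index jump, sorted colormap   :", mean_abs_jump(remapped))
```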

  15. Entangled-photon compressive ghost imaging

    SciTech Connect

    Zerom, Petros; Chan, Kam Wai Clifford; Howell, John C.; Boyd, Robert W.

    2011-12-15

    We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.

  16. Quad Tree Structures for Image Compression Applications.

    ERIC Educational Resources Information Center

    Markas, Tassos; Reif, John

    1992-01-01

    Presents a class of distortion controlled vector quantizers that are capable of compressing images so they comply with certain distortion requirements. Highlights include tree-structured vector quantizers; multiresolution vector quantization; error coding vector quantizer; error coding multiresolution algorithm; and Huffman coding of the quad-tree…

  17. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2014-05-01

    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the accuracy of target center measurement by the intensity-weighted centroid method. Six images of a white sheet with 30 by 20 black filled circles were utilized in the experiment. The images were acquired by a Canon EOS 20D digital camera. The image data were compressed using two compression parameter sets (a downsampling ratio, a quantization table, and a Huffman code table) utilized in the EOS 20D. The experimental results clearly indicate that lossy JPEG compression of an image with chromatic aberrations can have a significant effect on the accuracy of target center measurement by the intensity-weighted centroid method. The maximum displacements of the red, green, and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since displacements caused by lossy JPEG compression cannot be corrected afterwards, the author recommends that lossy JPEG compression not be applied before recording an image in a digital camera when highly precise image measurement is to be performed using color images acquired by a non-metric digital camera.
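
    An intensity-weighted centroid of a dark target on a bright background can be computed as in the sketch below; the particular weighting convention and the synthetic target are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def intensity_weighted_centroid(patch):
    """Centroid of a dark-target-on-bright-background patch, weighting each
    pixel by how much darker it is than the patch maximum."""
    weights = patch.max() - patch.astype(np.float64)
    yy, xx = np.mgrid[: patch.shape[0], : patch.shape[1]]
    total = weights.sum()
    return float((yy * weights).sum() / total), float((xx * weights).sum() / total)

# Synthetic dark circular target on a bright background (an assumption).
yy, xx = np.mgrid[:41, :41]
true_center = (20.3, 19.7)
radius2 = (yy - true_center[0]) ** 2 + (xx - true_center[1]) ** 2
patch = np.where(radius2 < 64, 30, 240).astype(np.uint8)

cy, cx = intensity_weighted_centroid(patch)
print(f"estimated centre: ({cy:.2f}, {cx:.2f}), true centre: {true_center}")
```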

  18. Multi-spectral compressive snapshot imaging using RGB image sensors.

    PubMed

    Rueda, Hoover; Lau, Daniel; Arce, Gonzalo R

    2015-05-01

    Compressive sensing is a powerful sensing and reconstruction framework for recovering high-dimensional signals from only a handful of observations, and for spectral imaging it offers a novel approach to multispectral acquisition. Specifically, the coded aperture snapshot spectral imager (CASSI) system has been demonstrated to produce multi-spectral data cubes from a single snapshot taken by a monochrome image sensor. In this paper, we expand the theoretical framework of CASSI to include the spectral sensitivity of the image sensor pixels to account for color, and then investigate the impact on image quality of using either a traditional color image sensor that spatially multiplexes red, green, and blue light filters or a novel Foveon image sensor which stacks red, green, and blue pixels on top of one another. PMID:25969307

  19. Adaptive prediction trees for image compression.

    PubMed

    Robinson, John A

    2006-08-01

    This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671

  20. Realization of hybrid compressive imaging strategies.

    PubMed

    Li, Yun; Sankaranarayanan, Aswin C; Xu, Lina; Baraniuk, Richard; Kelly, Kevin F

    2014-08-01

    The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed. PMID:25121526

  1. A recommender system for medical imaging diagnostic.

    PubMed

    Monteiro, Eriksson; Valente, Frederico; Costa, Carlos; Oliveira, José Luís

    2015-01-01

    The large volume of data captured daily in healthcare institutions is opening new and great perspectives about the best ways to use it towards improving clinical practice. In this paper we present a context-based recommender system to support medical imaging diagnostic. The system relies on data mining and context-based retrieval techniques to automatically lookup for relevant information that may help physicians in the diagnostic decision. PMID:25991188

  2. Chest tuberculosis: Radiological review and imaging recommendations

    PubMed Central

    Bhalla, Ashu Seith; Goyal, Ankur; Guleria, Randeep; Gupta, Arun Kumar

    2015-01-01

    Chest tuberculosis (CTB) is a widespread problem, especially in our country where it is one of the leading causes of mortality. The article reviews the imaging findings in CTB on various modalities. We also attempt to categorize the findings into those definitive for active TB, indeterminate for disease activity, and those indicating healed TB. Though various radiological modalities are widely used in evaluation of such patients, no imaging guidelines exist for the use of these modalities in diagnosis and follow-up. Consequently, imaging is not optimally utilized and patients are often unnecessarily subjected to repeated CT examinations, which is undesirable. Based on the available literature and our experience, we propose certain recommendations delineating the role of imaging in the diagnosis and follow-up of such patients. The authors recognize that this is an evolving field and there may be future revisions depending on emergence of new evidence. PMID:26288514

  3. Multi-wavelength compressive computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Welsh, Stephen S.; Edgar, Matthew P.; Jonathan, Phillip; Sun, Baoqing; Padgett, Miles J.

    2013-03-01

    The field of ghost imaging encompasses systems which can retrieve the spatial information of an object through correlated measurements of a projected light field, having spatial resolution, and the associated reflected or transmitted light intensity measured by a photodetector. By employing a digital light projector in a computational ghost imaging system with multiple spectrally filtered photodetectors we obtain high-quality multi-wavelength reconstructions of real macroscopic objects. We compare different reconstruction algorithms and reveal the use of compressive sensing techniques for achieving sub-Nyquist performance. Furthermore, we demonstrate the use of this technology in non-visible and fluorescence imaging applications.

  4. Compressive Hyperspectral Imaging via Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.

    2016-03-01

    We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not suitable to AMP and makes it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to solve for the divergence issue of AMP. Our approach is applied in nature, and the numerical experiments show that AMP-3D-Wiener outperforms existing widely-used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.

  5. Microseismic source imaging in a compressed domain

    NASA Astrophysics Data System (ADS)

    Vera Rodriguez, Ismael; Sacchi, Mauricio D.

    2014-08-01

    Microseismic monitoring is an essential tool for the characterization of hydraulic fractures. Fast estimation of the parameters that define a microseismic event is relevant to understanding and controlling fracture development. The amount of data contained in the microseismic records, however, poses a challenge for fast continuous detection and evaluation of the microseismic source parameters. Work inspired by the emerging field of Compressive Sensing has shown that it is possible to evaluate source parameters in a compressed domain, thereby reducing processing time. This technique performs well in scenarios where the amplitudes of the signal are above the noise level, as is often the case in microseismic monitoring using downhole tools. This paper extends the idea of compressed-domain processing to scenarios of microseismic monitoring using surface arrays, where the signal amplitudes are commonly at the same level as, or below, the noise amplitudes. To achieve this, we resort to the use of an imaging operator, which has previously been found to produce better results in detection and location of microseismic events from surface arrays. The operator in our method is formed by full-waveform elastodynamic Green's functions that are band-limited by a source time function and represented in the frequency domain. Where full-waveform Green's functions are not available, ray tracing can also be used to compute the required Green's functions. Additionally, we introduce the concept of the compressed inverse, which derives directly from the compression of the migration operator using a random matrix. The described methodology reduces processing time at the cost of introducing distortions into the results. However, the amount of distortion can be managed by controlling the level of compression applied to the operator. Numerical experiments using synthetic and real data demonstrate the reductions in processing time that can be achieved and exemplify the process of selecting the

  6. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  7. Complementary compressive imaging for the telescopic system.

    PubMed

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie

    2014-01-01

    Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device, while the light reflected in the other direction is not collected. In fact, the sampling in these two reflection orientations is correlated, in view of which we propose, for the first time to our knowledge, a sampling concept of complementary compressive imaging. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and of the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. This technique offers a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and it needs fewer measurements in total than any single-arm sampling; it can thus be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in the remote-sensing area. PMID:25060569
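
    The benefit of using both reflection arms can be imitated with a toy, fully determined linear model (no compressive solver and no real optics): random on/off patterns play the role of the micromirror device, and the difference of the two bucket signals is inverted with least squares. All sizes, noise levels, and names below are assumptions.

```python
import numpy as np

# Toy single-pixel-camera model with complementary DMD patterns (all values assumed).
rng = np.random.default_rng(10)
scene = np.zeros((16, 16))
scene[4:12, 6:10] = 1.0
x = scene.ravel()
n = x.size

m = n                                                        # fully determined, for simplicity
patterns = rng.integers(0, 2, size=(m, n)).astype(np.float64)  # DMD on/off patterns

noise = rng.normal(0.0, 0.05, size=(2, m))
bucket_plus = patterns @ x + noise[0]            # light reflected toward detector 1
bucket_minus = (1.0 - patterns) @ x + noise[1]   # light sent to the normally unused arm

# Single-arm reconstruction versus complementary (differential) reconstruction.
single = np.linalg.lstsq(patterns, bucket_plus, rcond=None)[0]
diff = np.linalg.lstsq(2.0 * patterns - 1.0, bucket_plus - bucket_minus, rcond=None)[0]

for name, rec in (("single arm", single), ("complementary", diff)):
    rmse = np.sqrt(np.mean((rec - x) ** 2))
    print(f"{name:13s} RMSE: {rmse:.4f}")
```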

  8. Patch-primitive driven compressive ghost imaging.

    PubMed

    Hu, Xuemei; Suo, Jinli; Yue, Tao; Bian, Liheng; Dai, Qionghai

    2015-05-01

    Ghost imaging has developed rapidly for about two decades and attracted wide attention from different research fields. However, the practical applications of ghost imaging are still largely limited by its low reconstruction quality and the large number of measurements required. Inspired by the fact that natural image patches usually exhibit simple structures, and that these structures share common primitives, we propose a patch-primitive driven reconstruction approach to raise the quality of ghost imaging. Specifically, we resort to a statistical learning strategy by representing each image patch with sparse coefficients upon an over-complete dictionary. The dictionary is composed of various primitives learned from a large number of image patches from a natural image database. By introducing a linear mapping between non-overlapping image patches and the whole image, we incorporate the above local prior into the convex optimization framework of compressive ghost imaging. Experiments demonstrate that our method obtains better reconstructions from the same number of measurements, and thus reduces the number of measurements required to achieve satisfactory imaging quality. PMID:25969205

  9. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  10. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
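
    The quantization idea described at the end (discarding noise bits by storing scaled integers before lossless coding) can be sketched as follows, using zlib's DEFLATE in place of the Rice coder used by fpack; the synthetic image, noise level, and quantization step are assumptions.

```python
import zlib
import numpy as np

# Synthetic floating-point "astronomical" image: smooth background plus noise.
rng = np.random.default_rng(9)
background = np.outer(np.hanning(512), np.hanning(512)) * 1000.0
image = (background + rng.normal(0.0, 2.0, background.shape)).astype(np.float32)

def deflate_size(array):
    """Size in bytes after a generic lossless (DEFLATE) compression pass."""
    return len(zlib.compress(array.tobytes(), level=6))

# Raw floating-point pixels: the noise bits make them nearly incompressible.
raw = deflate_size(image)

# Quantize to scaled integers with a step well below the noise sigma, then compress.
quantization_step = 0.5   # an assumption: a fraction of the noise standard deviation
quantized = np.round(image / quantization_step).astype(np.int32)
quant = deflate_size(quantized)

print(f"float32 pixels  : {image.nbytes} -> {raw} bytes ({image.nbytes / raw:.2f}:1)")
print(f"scaled integers : {quantized.nbytes} -> {quant} bytes ({quantized.nbytes / quant:.2f}:1)")
```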

  11. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  12. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high-frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of VOI transformations and found significant masking of artifacts when the window is widened. PMID:25804842
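
    The window-widening effect mentioned above can be made concrete with a tiny numerical example of a linear VOI (window center/width) transform; the window settings and the 5 HU artifact amplitude below are assumptions.

```python
import numpy as np

def apply_window(hu_values, center, width):
    """Map CT numbers (HU) to 8-bit display values with a linear VOI window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.clip(hu_values, lo, hi) - lo) / (hi - lo)
    return np.round(scaled * 255.0).astype(np.uint8)

# Two pixels that differ by a small, 5 HU compression artifact (an assumption).
pixel_pair = np.array([40.0, 45.0])

narrow = apply_window(pixel_pair, center=40, width=80)    # narrow, brain-like window
wide = apply_window(pixel_pair, center=40, width=2000)    # wide, bone-like window
print("narrow window display values:", narrow)  # the artifact spans many display levels
print("wide window display values  :", wide)    # the artifact is masked (same value)
```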

  13. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  14. On the use of standards for microarray lossless image compression.

    PubMed

    Pinho, Armando J; Paiva, António R C; Neves, António J R

    2006-03-01

    The interest in methods that are able to efficiently compress microarray images is relatively new. This is not surprising, since the appearance and fast growth of the technology responsible for producing these images are also quite recent. In this paper, we present a set of compression results obtained with 49 publicly available images, using three image coding standards: lossless JPEG2000, JBIG, and JPEG-LS. We concluded that the compression technology behind JBIG seems to be the one that offers the best combination of compression efficiency and flexibility for microarray image compression. PMID:16532784

  15. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
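
    The record above does not spell out the predictor, so the following Python sketch only illustrates the general idea of low-complexity predictive lossless compression of multispectral data: each band is predicted from the previous band with a single least-squares gain, and the small residuals are handed to a generic lossless coder. The predictor, the synthetic test cube, and the use of zlib are illustrative assumptions, not the cited algorithm.

        # Sketch of inter-band predictive lossless compression (illustrative only).
        import zlib
        import numpy as np

        def compress_cube(cube):
            """cube: int16 array of shape (bands, rows, cols)."""
            residuals = np.empty_like(cube)
            residuals[0] = cube[0]                         # first band stored as-is
            for b in range(1, cube.shape[0]):
                ref = cube[b - 1].astype(np.float64)
                cur = cube[b].astype(np.float64)
                gain = (ref * cur).sum() / max((ref * ref).sum(), 1.0)  # least-squares gain
                # The per-band gains would also have to be stored for exact decoding.
                residuals[b] = np.round(cur - gain * ref).astype(np.int16)
            return zlib.compress(residuals.astype(np.int16).tobytes(), 9)

        # Correlated synthetic bands: each band drifts slightly from the previous one.
        bands = (np.cumsum(np.random.randint(-3, 4, (8, 128, 128)), axis=0) + 1000).astype(np.int16)
        raw = zlib.compress(bands.tobytes(), 9)
        print(len(raw), len(compress_cube(bands)))         # residual coding is usually smaller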

  16. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  17. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
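
    The two patent records above describe, among other steps, converting the scanned image and the filled-edge image to two-color form with separate thresholds and then combining them. A short numpy sketch of just that thresholding-and-combining step is given below; the threshold values, the stand-in arrays, and the combination rule (black wherever either image is black) are illustrative assumptions.

        # Illustrative sketch of the two-threshold binarization and combination step.
        import numpy as np

        def to_two_color(gray, threshold):
            """Pixels darker than the threshold become black (0); the rest become white (255)."""
            return np.where(gray < threshold, 0, 255).astype(np.uint8)

        scanned = np.random.randint(0, 256, (512, 512)).astype(np.uint8)       # stand-in scan
        filled_edges = np.random.randint(0, 256, (512, 512)).astype(np.uint8)  # stand-in filled-edge file

        first = to_two_color(scanned, threshold=96)         # first two-color image (user information)
        second = to_two_color(filled_edges, threshold=160)  # second two-color image
        combined = np.minimum(first, second)                # black wherever either image is black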

  18. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  19. Centralized and interactive compression of multiview images

    NASA Astrophysics Data System (ADS)

    Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan

    2011-09-01

    In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers each related to a constant depth in the scene. The first algorithm is a centralized scheme where each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to efficiently deal with occlusions and disparity variations for different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an unknown order a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles which reduces the inter-view redundancy and facilitates random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.

  20. Remote sensing image compression assessment based on multilevel distortions

    NASA Astrophysics Data System (ADS)

    Jiang, Hongxu; Yang, Kai; Liu, Tingshan; Zhang, Yongfei

    2014-01-01

    The measurement of visual quality is of fundamental importance to remote sensing image compression, especially for image quality assessment and compression algorithm optimization. We exploit the distortion features of optical remote sensing image compression and propose a full-reference image quality metric based on multilevel distortions (MLD), which assesses image quality by calculating distortions of three levels (such as pixel-level, contexture-level, and content-level) between original images and compressed images. Based on this, a multiscale MLD (MMLD) algorithm is designed and it outperforms the other current methods in our testing. In order to validate the performance of our algorithm, a special remote sensing image compression distortion (RICD) database is constructed, involving 250 remote sensing images compressed with different algorithms and various distortions. Experimental results on RICD and Laboratory for Image and Video Engineering databases show that the proposed MMLD algorithm has better consistency with subjective perception values than current state-of-the-art methods in remote sensing image compression assessment, and the objective assessment results can show the distortion features and visual quality of compressed image well. It is suitable to be the evaluation criteria for optical remote sensing image compression.

  1. Reconfigurable machine for applications in image and video compression

    NASA Astrophysics Data System (ADS)

    Hartenstein, Reiner W.; Becker, Juergen; Kress, Rainier; Reinig, Helmut; Schmidt, Karin

    1995-02-01

    This paper presents a reconfigurable machine for applications in image or video compression. The machine can be used stand alone or as a universal accelerator co-processor for desktop computers for image processing. It is well suited for image compression algorithms such as JPEG for still pictures or for encoding MPEG movies. It provides a much cheaper and more flexible hardware platform than special image compression ASICs and it can substantially accelerate desktop computing.

  2. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
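
    For readers unfamiliar with the two code families mentioned above, here is a minimal Python sketch of fixed-parameter Golomb and exponential-Golomb encoders for nonnegative integers. The adaptive, on-the-fly parameter selection that is the heart of the method is omitted; the parameters m and k below are fixed, illustrative choices.

        # Minimal Golomb and exponential-Golomb encoders (bit strings for clarity).

        def golomb(n, m):
            """Golomb code with parameter m: unary quotient + truncated-binary remainder."""
            q, r = divmod(n, m)
            prefix = '1' * q + '0'                     # unary part
            b = m.bit_length() - 1                     # floor(log2(m))
            if m == (1 << b):                          # power of two: plain binary remainder (Rice code)
                return prefix + format(r, 'b').zfill(b) if b else prefix
            cutoff = (1 << (b + 1)) - m
            if r < cutoff:
                return prefix + format(r, 'b').zfill(b)
            return prefix + format(r + cutoff, 'b').zfill(b + 1)

        def exp_golomb(n, k=0):
            """Order-k exponential-Golomb code."""
            value = n + (1 << k)
            prefix_len = value.bit_length() - 1 - k
            return '0' * prefix_len + format(value, 'b')

        print([golomb(n, 4) for n in range(6)])        # Rice code, m = 4
        print([exp_golomb(n, 1) for n in range(6)])    # order-1 exponential-Golomb code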

  3. Research on compressive fusion for remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A compressive fusion of remote sensing images is presented based on block compressed sensing (BCS) and the non-subsampled contourlet transform (NSCT). Since BCS requires small memory space and enables fast computation, the images, which contain large amounts of data, can first be compressively sampled into block images with a structured random matrix. Further, the compressive measurements are decomposed with the NSCT and their coefficients are fused by a rule of linear weighting. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, with blocking artifacts taken into consideration. A field test of remote sensing image fusion shows the validity of the proposed method.

  4. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change. Analysis based on illegally changed images could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image. The image perceptual degradation directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. The lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery. PMID:26429361
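
    As background for the lossless watermark compression discussed above, the following is a compact, textbook LZW encoder/decoder for byte strings in Python. It is a generic illustration of the dictionary-based technique, not the authors' implementation, and the sample payload is invented.

        # Textbook LZW compression and decompression for byte sequences.

        def lzw_compress(data: bytes):
            table = {bytes([i]): i for i in range(256)}
            w, out = b'', []
            for byte in data:
                wc = w + bytes([byte])
                if wc in table:
                    w = wc
                else:
                    out.append(table[w])
                    table[wc] = len(table)              # add the new phrase to the dictionary
                    w = bytes([byte])
            if w:
                out.append(table[w])
            return out                                  # list of integer codes

        def lzw_decompress(codes):
            table = {i: bytes([i]) for i in range(256)}
            w = table[codes[0]]
            out = [w]
            for code in codes[1:]:
                entry = table[code] if code in table else w + w[:1]   # special KwKwK case
                out.append(entry)
                table[len(table)] = w + entry[:1]
                w = entry
            return b''.join(out)

        payload = b'ROI-pixels ' * 50 + b'watermark-key'              # invented sample watermark
        codes = lzw_compress(payload)
        assert lzw_decompress(codes) == payload                       # compression is lossless
        print(len(payload), 'bytes ->', len(codes), 'codes')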

  5. Compressed sensing in imaging mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-12-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data are typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from these data. We present an integrative approach that performs peak picking in spectra and denoising of m/z-images simultaneously, whereas state-of-the-art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with numerical reconstruction results for an IMS dataset of a rat brain coronal section.

  6. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.

  7. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time-consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high-quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD

  8. Design of real-time remote sensing image compression system

    NASA Astrophysics Data System (ADS)

    Wu, Wenbo; Lei, Ning; Wang, Kun; Wang, Qingyuan; Li, Tao

    2013-08-01

    This paper focuses on the issue of CCD remote sensing image compression. Compared with other images, CCD remote sensing image data are characterized by high speed and a high number of quantization bits. A high-speed CCD image compression system is proposed based on the ADV212 chip. The system is mainly composed of three devices: an FPGA, an SRAM, and the ADV212. In this system, the SRAM plays the role of a data buffer, the ADV212 focuses on data compression, and the FPGA is used for image storage and interface bus control. Finally, a system platform is designed to test the compression performance. Test results show that the proposed scheme can satisfy the real-time processing requirement and that there is no obvious difference between the source image and the compressed image in terms of image quality.

  9. Coherent radar imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Volz, Ryan; Mathews, John D.

    2015-12-01

    High-resolution radar images in the horizontal spatial domain generally require a large number of different baselines that usually come with considerable cost. In this paper, aspects of compressed sensing (CS) are introduced to coherent radar imaging. We propose a single CS-based formalism that enables full three-dimensional (3-D) imaging in the range, Doppler frequency, and horizontal spatial (represented by the direction cosines) domains. This new method can not only reduce the system costs and decrease the needed number of baselines by enabling spatially sparse sampling but also achieve high resolution in the range, Doppler frequency, and horizontal space dimensions. Using an assumption of point targets, a 3-D radar signal model for imaging has been derived. By comparing numerical simulations with the fast Fourier transform and maximum entropy methods at different signal-to-noise ratios, we demonstrate that the CS method can provide better performance in resolution and detectability given comparatively few available measurements relative to the number required by the Nyquist-Shannon sampling criterion. These techniques are being applied to radar meteor observations.

  10. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.

  11. Oncologic image compression using both wavelet and masking techniques.

    PubMed

    Yin, F F; Gao, Q

    1997-12-01

    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown. PMID:9434988
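
    A hedged sketch of the masking idea described above: wavelet coefficients whose spatial support falls outside the irradiated-field mask are zeroed before coding, which sharply reduces the number of significant coefficients. The circular mask, the Haar wavelet, and the use of the PyWavelets package are illustrative assumptions, not the authors' choices.

        # Sketch: zero wavelet coefficients whose support lies outside an ROI mask.
        import numpy as np
        import pywt

        n = 256
        image = np.random.rand(n, n).astype(np.float32)                 # stand-in portal image
        yy, xx = np.mgrid[0:n, 0:n]
        roi = (yy - n // 2) ** 2 + (xx - n // 2) ** 2 < (n // 3) ** 2   # circular "irradiated field"

        def shrink_mask(mask, shape):
            """Block-reduce the full-resolution mask to a subband's resolution."""
            fy, fx = mask.shape[0] // shape[0], mask.shape[1] // shape[1]
            return mask.reshape(shape[0], fy, shape[1], fx).any(axis=(1, 3))

        coeffs = pywt.wavedec2(image, 'haar', mode='periodization', level=3)
        coeffs[0] = coeffs[0] * shrink_mask(roi, coeffs[0].shape)        # approximation subband
        masked = [coeffs[0]]
        for level in coeffs[1:]:
            masked.append(tuple(band * shrink_mask(roi, band.shape) for band in level))

        kept = np.count_nonzero(masked[0]) + sum(np.count_nonzero(c) for lvl in masked[1:] for c in lvl)
        print('significant coefficients kept:', kept, 'of', image.size)
        reconstructed = pywt.waverec2(masked, 'haar', mode='periodization')   # ROI is preserved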

  12. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
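
    A short Python sketch of the point made above, namely that Lempel-Ziv-family coding improves markedly when the image is first converted to a differential (DPCM) image. Here zlib's DEFLATE (an LZ77/Huffman combination) stands in for the LZW coder used in the article, and the smooth synthetic test image is an assumption.

        # DPCM (left-neighbour prediction) followed by a Lempel-Ziv-family coder.
        import zlib
        import numpy as np

        cols = 256
        x = np.linspace(0, 4 * np.pi, cols)
        image = (127 + 100 * np.outer(np.cos(x), np.sin(x))).astype(np.uint8)   # smooth test image

        dpcm = image.astype(np.int16)
        dpcm[:, 1:] -= image[:, :-1].astype(np.int16)           # residual = pixel - left neighbour
        dpcm_bytes = (dpcm & 0xFF).astype(np.uint8).tobytes()   # wrap residuals back into 8 bits

        raw_size = len(zlib.compress(image.tobytes(), 9))
        dpcm_size = len(zlib.compress(dpcm_bytes, 9))
        print(raw_size, dpcm_size)    # the differential image usually compresses far better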

  13. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  14. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  15. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2011-10-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV), and L1 norm regularization. This has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542

  16. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2010-01-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV), and L1 norm regularization. This has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224
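
    The two records above minimize a least-squares data term plus TV and wavelet-L1 regularizers by splitting and averaging the subproblem solutions. The Python sketch below is deliberately narrower: it solves only a wavelet-L1 regularized reconstruction with plain iterative soft-thresholding (ISTA) for a simulated undersampled Fourier (k-space) measurement. The sampling mask, phantom, wavelet, and parameter values are illustrative assumptions, and the TV term and the averaging step of the cited method are omitted.

        # Hedged sketch: wavelet-L1 CS-MRI reconstruction via ISTA (TV term omitted).
        import numpy as np
        import pywt

        n = 256
        truth = np.zeros((n, n))
        truth[64:192, 64:192] = 1.0
        truth[96:160, 96:160] = 0.5                       # simple piecewise-constant phantom

        rng = np.random.default_rng(0)
        mask = rng.random((n, n)) < 0.35                  # random k-space undersampling pattern

        def A(x):                                         # undersampled Fourier operator
            return mask * np.fft.fft2(x, norm='ortho')

        def At(y):                                        # its adjoint
            return np.real(np.fft.ifft2(mask * y, norm='ortho'))

        y = A(truth)                                      # simulated measurements
        x, lam = At(y), 0.01
        for _ in range(50):                               # ISTA iterations (step size 1 is safe here)
            x = x - At(A(x) - y)                          # gradient step on the data term
            coeffs = pywt.wavedec2(x, 'db4', mode='periodization', level=4)
            shrunk = [coeffs[0]]                          # leave the approximation band untouched
            for level in coeffs[1:]:
                shrunk.append(tuple(pywt.threshold(b, lam, mode='soft') for b in level))
            x = pywt.waverec2(shrunk, 'db4', mode='periodization')

        print('relative error:', np.linalg.norm(x - truth) / np.linalg.norm(truth))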

  17. Lossless compression of medical images using Hilbert scan

    NASA Astrophysics Data System (ADS)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, the pixel intensities of a medical image are first decorrelated with differential pulse-code modulation (DPCM); the error image is then rearranged using the Hilbert scan; finally, we implement five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that the case which applies DPCM followed by the Hilbert scan and then the arithmetic coding scheme gives the best compression result, and also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
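
    The Hilbert reordering step above is easy to reproduce. The sketch below uses the classic iterative distance-to-coordinate mapping for a Hilbert curve and applies it to a stand-in DPCM error image; it is a generic illustration, not the authors' code.

        # Reorder an error image along a Hilbert curve before entropy coding.
        import numpy as np

        def hilbert_xy(order, d):
            """Map distance d along a Hilbert curve covering a 2**order square to (x, y)."""
            x = y = 0
            t = d
            s = 1
            n = 1 << order
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                                # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        order = 8                                          # 256 x 256 image
        n = 1 << order
        scan = [hilbert_xy(order, d) for d in range(n * n)]

        error_image = np.random.randint(-8, 9, (n, n))     # stand-in DPCM residuals
        hilbert_sequence = np.array([error_image[y, x] for x, y in scan])
        # Spatially adjacent residuals stay adjacent in the 1-D stream, which tends to
        # help run-length and arithmetic coders.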

  18. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

    In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply perceptually lossless criteria to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to an unnoticeable level. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  19. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.

  20. Texture-based medical image retrieval in compressed domain using compressive sensing.

    PubMed

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A

    2014-01-01

    Content-based image retrieval has gained considerable attention in today's scenario as a useful tool in many applications; texture is one of them. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields that has been affected most, since image databases have grown very large and retrieving the relevant image has become a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples of unknowns, does not require a close match between the sampling pattern and the characteristic image structure, increases acquisition speed, and enhances image quality. PMID:24589833

  1. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  2. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
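
    A hedged Pillow-based sketch of the pipeline described in this record: decimate, compress with JPEG, decompress, interpolate back to the original size, and sharpen edges. The 2x decimation factor, JPEG quality, unsharp-mask settings, and stand-in image are illustrative assumptions rather than values from the patent.

        # Sketch of the decimate / JPEG / interpolate / sharpen pipeline (parameters illustrative).
        import io
        import numpy as np
        from PIL import Image, ImageFilter

        full = Image.fromarray((np.random.rand(512, 512) * 255).astype('uint8'))   # stand-in image

        reduced = full.resize((256, 256), Image.LANCZOS)       # decimate in two dimensions

        buffer = io.BytesIO()
        reduced.save(buffer, format='JPEG', quality=60)        # predefined compression algorithm
        compressed_bytes = buffer.getvalue()                   # what would be transmitted

        received = Image.open(io.BytesIO(compressed_bytes))
        restored = received.resize((512, 512), Image.BILINEAR) # interpolate back to original size
        sharpened = restored.filter(ImageFilter.UnsharpMask(radius=2, percent=120))  # sharpen contours

        print(len(compressed_bytes), 'bytes transmitted for a 512 x 512 reconstruction')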

  3. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  4. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  5. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  6. Compressing industrial computed tomography images by means of contour coding

    NASA Astrophysics Data System (ADS)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. As higher resolution and precision are required, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piece-wise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction and then compression, which is detrimental to compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are reduced to only one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can obtain a good compression ratio while keeping satisfactory quality of the compressed images.
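
    The merged extraction-and-coding method itself is not reproduced here, but the Freeman encoding idea it builds on is simple to show. The sketch below converts an ordered contour path into 8-directional Freeman chain codes, which a Huffman coder could then compress; the direction numbering and the toy contour are illustrative.

        # Freeman chain coding of an ordered contour (8-connected directions).
        # Directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (row axis points down).
        DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                      (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

        def freeman_chain(contour):
            """contour: ordered list of (row, col) pixels along a boundary."""
            return [DIRECTIONS[(r1 - r0, c1 - c0)]
                    for (r0, c0), (r1, c1) in zip(contour, contour[1:])]

        square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
        print(freeman_chain(square))     # [0, 0, 6, 6, 4, 4, 2, 2]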

  7. Wavelet for Ultrasonic Flaw Enhancement and Image Compression

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Tsukada, K.; Li, L. Q.; Hanasaki, K.

    2003-03-01

    Ultrasonic imaging has been widely used in Non-destructive Testing (NDT) and medical applications. However, the image is always degraded by blur and noise. Besides, the pressure on both storage and transmission gives rise to the need for image compression. We apply the 2-D Discrete Wavelet Transform (DWT) to C-scan 2-D images to realize flaw enhancement and image compression, taking advantage of the scale and orientation selectivity of the DWT. Wavelet coefficient thresholding and scalar quantization are employed, respectively. Furthermore, we unify flaw enhancement and image compression in a single process. The reconstructed image from the compressed data gives a clearer interpretation of the flaws at a much smaller bit rate.

  8. Parallel image compression circuit for high-speed cameras

    NASA Astrophysics Data System (ADS)

    Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru

    2005-02-01

    In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 x 4-point 2-dimensional DCT using a DA method, zigzag scanning of 4 blocks of the 2-D DCT coefficients, and one-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with that of the JPEG algorithm. It is found that the proposed compression circuits require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of the reconstructed image using the proposed encoding method is better than that of JPEG in the low-compression-ratio region.

  9. Techniques for region coding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2004-01-01

    Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.

  10. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
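
    The segmentation-based codec investigated in this record is not reproduced here; the sketch below only shows how the reported quality measures (PSNR and SSIM) can be computed for an original/decoded pair, using scikit-image and Pillow as assumed tools. MS-SSIM and IW-SSIM require additional packages and are omitted.

        # Compute PSNR and SSIM between an original slice and its lossy reconstruction.
        import io
        import numpy as np
        from PIL import Image
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        original = (np.random.rand(256, 256) * 255).astype(np.uint8)       # stand-in CT slice

        buffer = io.BytesIO()
        Image.fromarray(original).save(buffer, format='JPEG', quality=30)  # lossy compression
        decoded = np.asarray(Image.open(io.BytesIO(buffer.getvalue())))

        print('PSNR:', peak_signal_noise_ratio(original, decoded, data_range=255))
        print('SSIM:', structural_similarity(original, decoded, data_range=255))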

  11. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  12. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important and a growing concern to many government and commercial sectors. Image Forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with Digital Watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to Accounting Forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rates of the image can also be derived and verified with the help of a divergence factor, which measures the deviation between the observed probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than that of uncompressed DWT coefficients. This result
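
    A small Python sketch of the core test described above: compute the first-digit distribution of an image's DWT detail coefficients and compare it with Benford's Law through a simple divergence value. The wavelet, the synthetic test image, and the divergence definition (a plain absolute difference) are illustrative assumptions.

        # Compare the first-digit distribution of DWT coefficients with Benford's Law.
        import numpy as np
        import pywt

        x = np.linspace(0, 8 * np.pi, 512)
        image = 128 + 60 * np.outer(np.sin(x), np.cos(0.7 * x))     # smooth stand-in image

        coeffs = pywt.wavedec2(image, 'db1', level=3)
        details = np.concatenate([band.ravel() for level in coeffs[1:] for band in level])
        details = np.abs(details)
        details = details[details > 1e-6]                           # ignore (near-)zero coefficients

        exponents = np.floor(np.log10(details))
        first_digits = np.clip(np.floor(details / 10.0 ** exponents), 1, 9).astype(int)

        observed = np.array([(first_digits == d).mean() for d in range(1, 10)])
        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
        print('observed  :', np.round(observed, 3))
        print('Benford   :', np.round(benford, 3))
        print('divergence:', round(float(np.abs(observed - benford).sum()), 4))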

  13. Image compression and transmission based on LAN

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Li, Yufeng; Zhang, Zhijiang

    2004-11-01

    In this work, an embedded system is designed that implements MPEG-2 LAN transmission of a CVBS or S-video signal. The hardware consists of three parts. The first is digitization of the analog CVBS or S-video (Y/C) inputs from TV or VTR sources. The second is MPEG-2 compression coding, primarily performed by a single-chip MPEG-2 audio/video encoder. Its output is MPEG-2 system PS/TS. The third part includes data stream packing, LAN access, and system control, based on an ARM microcontroller. It packs the encoded stream into Ethernet data frames and accesses the LAN, and it accepts Ethernet data packets bearing control information from the network and decodes the corresponding commands to control digitization, coding, and other operations. In order to increase the network transmission rate to conform to the MPEG-2 data stream, an efficient TCP/IP network protocol stack is constructed directly from the network hardware provided by the embedded system, instead of using an ordinary operating system for embedded systems. In the design of the network protocol stack, to obtain a high LAN transmission rate on a low-end ARM, a special transmission channel is opened for the MPEG-2 stream. The designed system has been tested on an experimental LAN. The experiment shows a maximum LAN transmission rate of up to 12.7 Mbps with good sound and image quality, and satisfactory system reliability.

  14. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.

  15. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution simultaneously, acting as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.

  16. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a low-quality JPEG-compressed image, is also described; it de-noises the image and enhances its contours.

  17. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources, such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into a Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741

  18. Investigation into the geometric consequences of processing substantially compressed images

    NASA Astrophysics Data System (ADS)

    Tempelmann, Udo; Nwosu, Zubbi; Zumbrunn, Roland M.

    1995-07-01

    One of the major driving forces behind digital photogrammetric systems is the continued drop in the cost of digital storage systems. However, terrestrial remote sensing systems continue to generate enormous volumes of data due to smaller pixels, larger coverage, and increased multispectral and multitemporal possibilities. Sophisticated compression algorithms have been developed, but the reduced visual quality of their output, which impedes object identification, and the resulting geometric deformation have been limiting factors in employing compression. Compression and decompression time is also an issue, but of less importance because it can be handled off-line. Two typical image blocks have been selected: one a sub-block from a SPOT image, the other an image of industrial targets taken with an off-the-shelf CCD. Three common compression algorithms have been chosen: JPEG, wavelet, and fractal. The images are run through the compression/decompression cycle, with parameters chosen to cover the whole range of available compression ratios. Points are identified on these images and their locations are compared against those in the originals. The results are presented to assist in choosing compression settings by weighing metric quality against storage availability. Fractals offer the best visual quality, but JPEG, closely followed by wavelets, introduces fewer geometric defects. JPEG seems to offer the best all-around performance when geometric quality, visual quality, and compression/decompression speed are all considered.

  19. CMOS low data rate imaging method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng

    2012-07-01

    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements on overall system performance possible. We present a CMOS low data rate imaging approach by implementing compressed sensing (CS). On the basis of the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical current outputs from the pixels in a column are combined, with weights specified by voltage, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM). Each element of the VMM takes the total value of each column as the input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm. The percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.

  20. Image Compression Using Vector Quantization with Variable Block Size Division

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori

    In this paper, we propose a method for compressing a still image using vector quantization (VQ). The local fractal dimension (LFD) is computed to divide an image into blocks of variable size. The LFD reflects the complexity of local regions of an image, so a region that shows higher LFD values than other regions is partitioned into small blocks of pixels, while a region that shows lower LFD values is partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode, which improves the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by the discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then transformed by the inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. The performance of the proposed method is slightly better than that of JPEG. When the learning images used to construct a CB differ from the test images, the compression rate is comparable to those of previously proposed methods, while the image quality evaluated by NPIQM (normalized perceptual image quality measure) is almost the highest. The results show that the proposed method is effective for still image compression.
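
    A minimal sketch of the encoding step described above, assuming a fixed 8x8 block and an already-trained codebook of DCT-domain code vectors (the paper's variable block sizes, LFD-based division and codebook training are not shown; all names here are illustrative):

      import numpy as np
      from scipy.fft import dctn, idctn

      def encode_block(block, codebook):
          # 2-D DCT of the block, then index of the nearest code vector.
          coeffs = dctn(block, norm='ortho').ravel()
          return int(np.argmin(np.sum((codebook - coeffs) ** 2, axis=1)))

      def decode_block(index, codebook, block_shape):
          # Table lookup followed by the inverse DCT.
          return idctn(codebook[index].reshape(block_shape), norm='ortho')

      rng = np.random.default_rng(1)
      codebook = rng.standard_normal((256, 64))       # toy 256-entry codebook
      block = rng.random((8, 8))
      recon = decode_block(encode_block(block, codebook), codebook, (8, 8))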

  1. OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis

    PubMed Central

    Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David. J

    2015-01-01

    Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert-opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344

  2. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.
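
    To make the idea concrete, the following sketch computes simple DPCM prediction residuals and their empirical entropy, which is the quantity a good source code would approach; the left/upper-neighbor predictor and the function names are illustrative assumptions, not the scheme used in the paper.

      import numpy as np

      def dpcm_residuals(image):
          # Predict each pixel from its left neighbor (first column from the
          # pixel above) and return the prediction residuals.
          img = image.astype(np.int32)
          res = img.copy()
          res[:, 1:] -= img[:, :-1]
          res[1:, 0] -= img[:-1, 0]
          return res

      def empirical_entropy(values):
          # Bits per sample of the residuals, a lower bound for entropy coding.
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      img = (np.random.rand(64, 64) * 255).astype(np.uint8)
      print("residual entropy:", empirical_entropy(dpcm_residuals(img).ravel()))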

  3. Increasing FTIR spectromicroscopy speed and resolution through compressive imaging

    SciTech Connect

    Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C

    2007-10-15

    At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, the Beamline 1.4.4 Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.

  4. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision and vast amounts of data. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyzes the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with the Haar filters. Then the wavelet coefficients are quantized adaptively. Therefore, we can maximize compression efficiency and achieve better subjective visual quality. This algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of this algorithm with an image transmission experiment over the network.
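
    For reference, a one-level 2-D Haar decomposition, the building block such a wavelet-zerotree coder starts from, can be written in a few lines (a minimal sketch assuming even image dimensions; the zerotree coding and adaptive quantization stages are not shown):

      import numpy as np

      def haar_dwt2_level(x):
          # One level of the 2-D Haar DWT, returning (LL, LH, HL, HH).
          lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row lowpass
          hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row highpass
          ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
          lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
          hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
          hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
          return ll, lh, hl, hh

      img = np.random.rand(128, 128)
      ll, lh, hl, hh = haar_dwt2_level(img)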

  5. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression. PMID:22722754

  6. Image compression and decompression based on gazing area

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Endo, Chizuko; Haneishi, Hideaki; Miyake, Yoichi

    1996-04-01

    In this paper, we introduce a new data compression and decompression technique for finding a desired image based on its gazing area. Many methods of data compression have been proposed; in particular, the JPEG compression technique has been widely used as a standard method. However, this method is not always effective for searching for a desired image in an image filing system. In a previous paper, through eye-movement analysis, we found that images have a particular gazing area. The gazing area is considered the most important region of the image, so we introduce this information to compress and transmit the image. A method named fixation-based progressive image transmission is introduced to transmit the image effectively. In this method, after the gazing area is estimated, that area is transmitted first and the other regions are transmitted afterwards. If we are not interested in the first transmitted part of an image, we can move on to other images. Therefore, the desired image can be found in the filing system effectively. We compare the searching time of the proposed method with that of the conventional method. The results show that the proposed method is faster than the conventional one at finding the desired image.

  7. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transformation, and each block is quantized and coded by a Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of the target signal in the restored radar image.

  8. Measurement dimensions compressed spectral imaging with a single point detector

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Feng; Yu, Wen-Kai; Yao, Xu-Ri; Dai, Bin; Li, Long-Zhen; Wang, Chao; Zhai, Guang-Jie

    2016-04-01

    An experimental demonstration of spectral imaging with compressed measurement dimensions has been performed. With the dual compressed sensing (CS) method we derive, the spectral image of a colored object can be obtained with only a single point detector, and sub-sampling is achieved in both the spatial and spectral domains. The performance of dual CS spectral imaging is analyzed, including the effects of the dual modulation numbers and measurement noise on imaging quality. Our scheme provides a stable, high-flux measurement approach to spectral imaging.

  9. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm. PMID:26671800

  10. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most of the existing image encryption techniques carry security risks when taking linear transforms or suffer data expansion when adopting nonlinear transformations directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with the nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaos map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and reliability of this scheme.

  11. Segmentation and thematic classification of color orthophotos over non-compressed and JPEG 2000 compressed images

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Cea, C.; Pons, X.

    2012-04-01

    Lossy compression is now increasingly used due to the enormous amount of images gathered by airborne and satellite sensors. Nevertheless, the implications of these compression procedures have been scarcely assessed. Segmentation before digital image classification is also a technique increasingly used in GEOBIA (GEOgraphic Object-Based Image Analysis). This paper presents an object-oriented application for image analysis using color orthophotos (RGB bands) and a Quickbird image (RGB and a near infrared band). We use different compression levels in order to study the effects of the data loss on the segmentation-based classification results. A set of 4 color orthophotos with 1 m spatial resolution and a 4-band Quickbird satellite image with 0.7 m spatial resolution, each covering an area of about 1200 × 1200 m² (144 ha), was chosen for the experiment. These scenes were compressed at 8 compression ratios (between 5:1 and 1000:1) using the JPEG 2000 standard. There were 7 thematic categories: dense vegetation, herbaceous, bare lands, road and asphalt areas, building areas, swimming pools and rivers (if necessary). The best category classification was obtained using a hierarchical classification algorithm over the second segmentation level. The same segmentation and classification methods were applied in order to establish a semi-automatic technique for all 40 images. To estimate the overall accuracy, a confusion matrix was calculated using a photointerpreted ground-truth map (fully covering 25% of each orthophoto). The mean accuracy over non-compressed images was 66% for the orthophotos and 72% for the Quickbird image. This medium overall accuracy makes it possible to properly assess the compression effects (if the initial overall accuracy were very high, the possible positive effects of compression would not be noticeable). The first and second compression levels (up to 10:1) obtain results similar to the reference ones. Differences in the third to

  12. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  13. Pre-Processor for Compression of Multispectral Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron

    2006-01-01

    A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.

  14. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and make use of information, and there is a big gap between the large volumes of digitized multimedia information and the storage resources and network bandwidth that current hardware technology can provide. Image storage and transmission is one example of this problem. Image compression is useful when images need to be transmitted across networks in a less costly way, reducing data volume and thus transmission time. This paper discusses the effect of image compression on a face recognition system. For compression purposes, we adopted the JPEG, JPEG 2000 and JPEG XR coding standards. The face recognition algorithm studied is SIFT. Experimental results show that the system still maintains a high recognition rate under high compression ratios, and that JPEG XR is superior to the other two standards in terms of performance and complexity.

  15. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bit-plane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/s and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
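
    The bit-plane encoding step can be pictured with the short sketch below, which splits quantized coefficient magnitudes into planes from most to least significant so the stream can be truncated at any desired rate (a simplified illustration only; the actual encoder adds significance and context modeling that is not shown):

      import numpy as np

      def bit_planes(coeffs, num_bits=8):
          # Split non-negative integer coefficients into bit planes,
          # most significant plane first.
          return [((coeffs >> b) & 1).astype(np.uint8)
                  for b in range(num_bits - 1, -1, -1)]

      coeffs = np.random.randint(0, 256, size=(8, 8), dtype=np.int32)
      planes = bit_planes(coeffs)
      # Sending only the first few planes gives a coarser, lower-rate image.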

  16. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  17. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  18. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
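
    The quantize/dequantize step that such a matrix drives looks roughly like the sketch below; the flat placeholder matrix is an assumption for illustration, whereas the patented method derives each entry from luminance masking, contrast masking and error pooling:

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, qmatrix):
          # Divide each DCT coefficient by its matrix entry and round;
          # larger entries discard more (ideally invisible) detail.
          return np.round(dctn(block.astype(float), norm='ortho') / qmatrix)

      def dequantize_block(qcoeffs, qmatrix):
          return idctn(qcoeffs * qmatrix, norm='ortho')

      qmatrix = np.full((8, 8), 16.0)                 # placeholder matrix
      block = np.random.rand(8, 8) * 255
      recon = dequantize_block(quantize_block(block, qmatrix), qmatrix)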

  19. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  20. Architecture for hardware compression/decompression of large images

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed; Perroton, Laurent; Gailhard, Stephane; Denoulet, Julien; Bartier, Frederic

    2001-04-01

    In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture. The algorithm is lossless and is applied to 'bi-level' images of large size. It ensures a minimum compression rate for the images we are considering. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes.
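
    As a purely software reference point for the same LZ77-plus-Huffman (DEFLATE) combination that GZIP uses, a bi-level page can be packed and compressed as follows; zlib here is only a stand-in for the hardware pipeline described in the paper.

      import zlib
      import numpy as np

      def compress_bilevel(bits):
          # Pack 0/1 pixels into bytes, then DEFLATE-compress them.
          return zlib.compress(np.packbits(bits.astype(np.uint8)).tobytes(), 9)

      def decompress_bilevel(blob, shape):
          packed = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
          return np.unpackbits(packed)[: shape[0] * shape[1]].reshape(shape)

      page = (np.random.rand(256, 256) > 0.9).astype(np.uint8)   # sparse "ink"
      blob = compress_bilevel(page)
      assert np.array_equal(decompress_bilevel(blob, page.shape), page)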

  1. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images that will be accessed by users across computer networks. Accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application, since the decompression of VQ-compressed images is a table lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data generally requires expert-level knowledge and is not straightforward to use. This is especially true in the case of VQ: it involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.

  2. On the use of the Stockwell transform for image compression

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Orchard, Jeff

    2009-02-01

    In this paper, we investigate the use of the Stockwell transform for image compression. The proposed technique uses the Discrete Orthogonal Stockwell Transform (DOST), an orthogonal version of the Discrete Stockwell Transform (DST). These mathematical transforms provide a multiresolution spatial-frequency representation of a signal or image. First, we give a brief introduction to the Stockwell transform and the DOST. Then we outline a simple compression method based on setting the smallest coefficients to zero. In an experiment, we use this compression strategy on three different transforms: the fast Fourier transform, the Daubechies wavelet transform and the DOST. The results show that the DOST outperforms the two other methods.
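
    The "zero the smallest coefficients" strategy is easy to prototype; the sketch below uses the FFT (one of the three transforms compared in the paper) as the stand-in transform, with the keep fraction as an illustrative parameter:

      import numpy as np

      def keep_largest(image, keep_fraction=0.05):
          # Transform, zero all but the largest-magnitude coefficients,
          # and invert; the retained fraction controls the compression.
          coeffs = np.fft.fft2(image)
          k = max(1, int(keep_fraction * coeffs.size))
          threshold = np.partition(np.abs(coeffs).ravel(), -k)[-k]
          coeffs[np.abs(coeffs) < threshold] = 0
          return np.real(np.fft.ifft2(coeffs))

      approx = keep_largest(np.random.rand(128, 128), keep_fraction=0.05)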

  3. PCIF: An Algorithm for Lossless True Color Image Compression

    NASA Astrophysics Data System (ADS)

    Barcucci, Elena; Brlek, Srecko; Brocchi, Stefano

    An efficient algorithm for compressing true color images is proposed. The technique uses a combination of simple and computationally cheap operations. The three main steps consist of predictive image filtering, decomposition of data, and data compression through the use of run-length encoding, Huffman coding and grouping the values into polyominoes. The result is a practical scheme that achieves good compression while providing fast decompression. The approach has performance comparable to, and often better than, competing standards such as JPEG 2000 and JPEG-LS.

  4. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.

  5. Hyperspectral image compression using an online learning method

    NASA Astrophysics Data System (ADS)

    Ülkü, İrem; Töreyin, B. Uğur

    2015-05-01

    A hyperspectral image compression method is proposed using an online dictionary learning approach. The online learning mechanism aims to utilize the least number of dictionary elements for each hyperspectral image under consideration. In order to meet this "sparsity constraint", the basis pursuit algorithm is used. Hyperspectral imagery from AVIRIS datasets is used for testing purposes. The effects of non-zero dictionary elements on the compression performance are analyzed. Results indicate that the proposed online dictionary learning algorithm may be utilized for higher data rates, as it performs better in terms of PSNR values compared with state-of-the-art predictive lossy compression schemes.

  6. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.

  7. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm

  8. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, in which we introduced the application of the Hilbert scan to a principal component algorithm (PCA) with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression, and apply the Hilbert scan to the APEX algorithm to improve the results.

  9. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.

  10. Symbolic document image compression based on pattern matching techniques

    NASA Astrophysics Data System (ADS)

    Shiah, Chwan-Yi; Yen, Yun-Sheng

    2011-10-01

    In this paper, a novel compression algorithm for Chinese document images is proposed. Initially, documents are segmented into readable components such as characters and punctuation marks. Similar patterns within the text are found by shape context matching and grouped to form a set of prototype symbols. Text redundancies can be removed by replacing repeated symbols by their corresponding prototype symbols. To keep the compression visually lossless, we use a multi-stage symbol clustering procedure to group similar symbols and to ensure that there is no visible error in the decompressed image. In the encoding phase, the resulting data streams are encoded by adaptive arithmetic coding. Our results show that the average compression ratio is better than the international standard JBIG2 and the compressed form of a document image is suitable for a content-based keyword searching operation.

  11. Lossy and lossless compression of MERIS hyperspectral images with exogenous quasi-optimal spectral transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2010-07-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loeve transform (KLT) can be sub-optimal for non-Gaussian data; however, it is generally recommended as the best calculable coding transform in practice. For a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for onboard satellite image compression, the concept and computation of optimal spectral transforms (OST) at high bit rates were carried out under weakly restrictive hypotheses. These linear transforms are optimal for reducing the spectral redundancies of multi- or hyper-spectral images when the spatial redundancies are reduced with a fixed 2-D discrete wavelet transform. The drawback of OSTs is their heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of hyperspectral images from the MERIS spectrometer. Moreover, we compute an integer variant of OrthOST for lossless compression. The performance is compared to that of the KLT in both lossy and lossless compression. We observe good performance of the exogenous OrthOST.
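
    For orientation, the per-image spectral KLT that OrthOST approximates can be sketched as below: the bands of a (rows, cols, bands) cube are decorrelated with the eigenvectors of their covariance matrix. The exogenous transform is instead learned offline on a training set, and the subsequent 2-D DWT of each transformed band is not shown; all names here are illustrative.

      import numpy as np

      def spectral_klt(cube):
          # Decorrelate the spectral bands of a (rows, cols, bands) cube
          # using the eigenvectors of the inter-band covariance matrix.
          rows, cols, bands = cube.shape
          pixels = cube.reshape(-1, bands).astype(float)
          mean = pixels.mean(axis=0)
          cov = np.cov(pixels - mean, rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(cov)
          basis = eigvecs[:, np.argsort(eigvals)[::-1]]   # strongest first
          return ((pixels - mean) @ basis).reshape(rows, cols, bands), basis, mean

      cube = np.random.rand(32, 32, 16)                   # toy data cube
      klt_cube, basis, mean = spectral_klt(cube)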

  12. Image analysis and compression: renewed focus on texture

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.

    2010-01-01

    We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.

  13. Remote sensing images fusion based on block compressed sensing

    NASA Astrophysics Data System (ADS)

    Yang, Sen-lin; Wan, Guo-bin; Zhang, Bian-lian; Chong, Xin

    2013-08-01

    A novel strategy for remote sensing image fusion is presented based on block compressed sensing (BCS). First, the multiwavelet transform (MWT) is employed for a better sparse representation of the remote sensing images. The sparse representations of the image blocks are then compressively sampled by BCS with an identical scrambled block Hadamard operator. Next, the measurements are fused by a linear weighting rule in the compressive domain. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction (GPSR) algorithm. Experimental results analyze the selection of block dimension and sampling rate, as well as the convergence performance of the proposed method. A field test of remote sensing image fusion shows the validity of the proposed method.

  14. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  15. A Novel Psychovisual Threshold on Large DCT for Image Compression

    PubMed Central

    2015-01-01

    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance to reduce the amount of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order serves as the primitive of the psychovisual threshold in image compression. This research study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks which is used to automatically generate the much-needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images. The experimental results show that the large quantization tables derived from the psychovisual threshold produce output images that are largely free of artifacts, and that the concept of the psychovisual threshold produces better quality images at higher compression rates than JPEG image compression. PMID:25874257

  16. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques including Lempel-Ziv-Welch (LZW) were tested for watermark compression. The performances of these techniques were compared based on bit reduction and compression ratio. LZW was found to be better than the others and was used in the development of the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
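
    A minimal sketch of the embedding idea follows: the ROI and its hash are compressed and the resulting bit stream is written into the LSBs of RONI pixels. The zlib/SHA-256 choices, the ROI slice and the function names are illustrative assumptions (the paper uses LZW and its own ROI definition), and tamper localization and recovery are not shown.

      import hashlib
      import zlib
      import numpy as np

      def embed_watermark(image, roi_slice):
          # Build the watermark (ROI bytes + hash), compress it, and hide the
          # bits in the least significant bits of RONI pixels.
          img = image.copy()
          roi_bytes = img[roi_slice].tobytes()
          payload = roi_bytes + hashlib.sha256(roi_bytes).digest()
          bits = np.unpackbits(np.frombuffer(zlib.compress(payload, 9),
                                             dtype=np.uint8))
          roni = np.ones(img.shape, dtype=bool)
          roni[roi_slice] = False
          idx = np.flatnonzero(roni)
          if bits.size > idx.size:
              raise ValueError("RONI too small for the compressed watermark")
          flat = img.ravel()
          flat[idx[:bits.size]] = (flat[idx[:bits.size]] & 0xFE) | bits
          return img

      scan = (np.random.rand(256, 256) * 255).astype(np.uint8)
      marked = embed_watermark(scan, np.s_[96:128, 96:128])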

  17. Preprocessing and compression of Hyperspectral images captured onboard UAVs

    NASA Astrophysics Data System (ADS)

    Herrero, Rolando; Cadirola, Martin; Ingle, Vinay K.

    2015-10-01

    Advancements in image sensors and signal processing have led to the successful development of lightweight hyperspectral imaging systems that are critical to the deployment of Photometry and Remote Sensing (PaRS) capabilities in unmanned aerial vehicles (UAVs). In general, hyperspectral data cubes include a few dozen spectral bands that are extremely useful for remote sensing applications ranging from detection of land vegetation to monitoring of atmospheric products derived from the processing of lower-level radiance images. Because these data cubes are captured in the challenging environment of UAVs, where resources are limited, source encoding by means of compression is a fundamental mechanism that considerably improves the overall system performance and reliability. In this paper, we focus on the hyperspectral images captured by a state-of-the-art commercial hyperspectral camera and show the results of applying ultraspectral data compression to the obtained data set. Specifically, the compression scheme that we introduce integrates two stages: (1) preprocessing and (2) compression itself. The outcomes of this procedure are linear prediction coefficients and an error signal that, when encoded, results in a compressed version of the original image. The preprocessing and compression algorithms are also optimized and have their time complexity analyzed to guarantee their successful deployment on low-power ARM-based embedded processors in the context of UAVs. Lastly, we compare the proposed architecture against other well-known schemes and show how the compression scheme presented in this paper outperforms all of them, delivering both lower bit rates and lower distortion.

  18. International standards activities in image data compression

    NASA Technical Reports Server (NTRS)

    Haskell, Barry

    1989-01-01

    Integrated Services Digital Network (ISDN); coding for color TV, video conferencing, video conferencing/telephone, and still color images; ISO color image coding standard; and ISO still picture standard are briefly discussed. This presentation is represented by viewgraphs only.

  19. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    PubMed

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi

    2015-01-01

    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of these works focus on scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability, and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme that iterates between SAR imaging and updating of the weighted autoregressive model to solve this problem. Finally, experimental results demonstrate the validity and generality of the proposed approach. PMID:25686307

  20. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  1. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
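
    The scalar-quantization step at the heart of the wavelet/scalar quantization method can be sketched as a uniform quantizer with a deadzone applied per subband; the step size below is an arbitrary illustrative value, and the standard's adaptive step selection and subsequent Huffman coding are not shown.

      import numpy as np

      def deadzone_quantize(subband, step):
          # Uniform scalar quantizer with a deadzone: coefficients smaller
          # than one step map to zero, larger ones to signed bin indices.
          return np.sign(subband) * np.floor(np.abs(subband) / step)

      def deadzone_dequantize(indices, step):
          # Reconstruct non-zero bins at their midpoints.
          return np.sign(indices) * (np.abs(indices) + 0.5) * step * (indices != 0)

      coeffs = np.random.randn(64, 64) * 10       # stand-in wavelet subband
      idx = deadzone_quantize(coeffs, step=4.0)
      rec = deadzone_dequantize(idx, step=4.0)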

  2. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  3. Multiview image compression based on LDV scheme

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent

    2011-03-01

    In recent years, we have seen several different approaches dealing with multiview compression. First, we can find the H264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth-maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth-map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between the views) in order to generate one unified-color RGB texture (where a unique color is assigned to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Finally, we discuss the signal deformations generated by our approach.

  4. Compression of 3D integral images using wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between the 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with a deadzone. The results for the average of the four UII intensity distributions are presented and compared with the previous use of a 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality, at very low bit rates.

  5. Compressive spectral integral imaging using a microlens array

    NASA Astrophysics Data System (ADS)

    Feng, Weiyi; Rueda, Hoover; Fu, Chen; Qian, Chen; Arce, Gonzalo R.

    2016-05-01

    In this paper, a compressive spectral integral imaging system using a microlens array (MLA) is proposed. This system encodes the 4D spectro-volumetric information into a compressive 2D measurement image on the detector plane. In the reconstruction process, the 3D spatial information at different depths and the spectral responses of each spatial volume pixel can be obtained simultaneously. In the simulation, sensing of the 3D objects is carried out by optically recording elemental images (EIs) using a scanned pinhole camera. With the elemental images, a spectral data cube with different perspectives and depth information can be reconstructed using the TwIST algorithm in the multi-shot compressive spectral imaging framework. Then, the 3D spatial images with one-dimensional spectral information at arbitrary depths are computed using the computational integral imaging method by inversely mapping the elemental images according to geometrical optics. The simulation results verify the feasibility of the proposed system. The 3D volume images and the spectral information of the volume pixels can be successfully reconstructed at the location of the 3D objects. The proposed system can capture both 3D volumetric images and spectral information at video rate, which is valuable in biomedical imaging and chemical analysis.

  6. Context and task-aware knowledge-enhanced compressive imaging

    NASA Astrophysics Data System (ADS)

    Rao, Shankar; Ni, Kang-Yu; Owechko, Yuri

    2013-09-01

    We describe a foveated compressive sensing approach for image analysis applications that utilizes knowledge of the task to be performed to reduce the number of required measurements compared to conventional Nyquist sampling and compressive sensing-based approaches. Our Compressive Optical Foveated Architecture (COFA) adapts the dictionary and compressive measurements to structure and sparsity in the signal, task, and scene by reducing measurement and dictionary mutual coherence and increasing sparsity using principles of actionable information and foveated compressive sensing. Actionable information is used to extract task-relevant regions of interest (ROIs) from a low-resolution scene analysis by eliminating the effects of nuisances for occlusion and anomalous motion detection. From the extracted ROIs, preferential measurements are taken using foveation as part of the compressive sensing adaptation process. The task-specific measurement matrix is optimized by using a novel saliency-weighted coherence minimization with respect to the learned signal dictionary. This incorporates the relative usage of the atoms in the dictionary. Therefore, the measurement matrix is not random, as in conventional compressive sensing, but is based on the dictionary structure and atom distributions. We utilize a patch-based method to learn the signal priors: a tree-structured dictionary of image patches is learned using K-SVD, which can sparsely represent any given image patch with the tree structure. We have implemented COFA in an end-to-end simulation of a vehicle fingerprinting task for aerial surveillance using foveated compressive measurements adapted to hierarchical ROIs consisting of background, roads, and vehicles. Our results show a 113x reduction in measurements over conventional sensing and a 28x reduction over compressive sensing using random measurements.

  7. Compressive spectral imaging systems based on linear detector

    NASA Astrophysics Data System (ADS)

    Liu, Yanli; Zhong, Xiaoming; Zhao, Haibo; Li, Huan

    2015-08-01

    Spectrometers capture a large amount of raw, 3-dimensional (3D) spatial-spectral scene information with 2-dimensional (2D) focal plane arrays (FPA). In many applications, including imaging systems and video cameras, the Nyquist rate is so high that too many samples result, making compression a precondition to storage or transmission. Compressive sensing theory employs non-adaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections using an optimization process. This article reviews the fundamental spectral imagers based on compressive sensing, namely the coded aperture snapshot spectral imager (CASSI) and high-resolution imagers via moving random exposure. In addition, the article proposes a new method to implement spectral imagers with linear-detector imaging systems based on spectral compression. The article describes the system and the coding process, and it illustrates results with real data and imagery. Simulations are shown to illustrate the performance improvement attained by the new model; the complexity of the imaging system is also greatly reduced by using a linear detector.

  8. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  9. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. It eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity within a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
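
    As a rough illustration of the final coding stage named above, the sketch below encodes signed prediction residuals with a Golomb-Rice code: each residual is zigzag-mapped to a non-negative integer, which is then split by the Rice parameter k into a unary quotient and a k-bit binary remainder, using only shifts, masks and additions. This is a generic textbook formulation, not FAIC's exact rule set; the parameter value and names are assumptions.

      def zigzag(e):
          """Map a signed prediction error to a non-negative integer."""
          return 2 * e if e >= 0 else -2 * e - 1

      def golomb_rice_encode(value, k):
          """Encode one non-negative integer with a Golomb-Rice code of parameter k.

          Returns a string of '0'/'1' characters: a unary-coded quotient
          terminated by '0', followed by the k-bit binary remainder.
          """
          q, r = value >> k, value & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b")

      # Example: encode a few residuals with k = 2.
      print([golomb_rice_encode(zigzag(e), k=2) for e in [0, -1, 3, -4, 7]])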

  10. Improved vector quantization scheme for grayscale image compression

    NASA Astrophysics Data System (ADS)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.

    2012-06-01

    This paper proposes an improved image coding scheme based on vector quantization (VQ). It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the mean value of the image block is used as an alternative block encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than standard vector quantization while keeping low bit rates.

  11. Hyperspectral image compression using bands combination wavelet transformation

    NASA Astrophysics Data System (ADS)

    Wang, Wenjie; Zhao, Zhongming; Zhu, Haiqing

    2009-10-01

    Hyperspectral imaging technology is at the forefront of remote sensing development in the 21st century and is one of the most important focuses of the remote sensing domain. Hyperspectral images provide much more information than multispectral images and can solve many problems that cannot be addressed by multispectral imaging technology. However, this advantage comes at the cost of a massive quantity of data, which brings difficulties in image processing, storage and transmission. Research on hyperspectral image compression methods therefore has important practical significance. This paper improves upon the well-known KLT-WT-2DSPECK (Karhunen-Loeve transform + wavelet transform + two-dimensional set partitioning embedded block coding) algorithm and proposes a KLT + band-combination 2DWT + 2DSPECK algorithm. Experiments show that this method is effective.
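
    The spectral KLT stage mentioned above can be sketched as a principal component transform along the band axis: the band-by-band covariance is diagonalized and the cube is projected onto its eigenvectors, concentrating most of the energy in a few components before the 2D wavelet and 2DSPECK stages. The code below is a generic illustration assuming the cube is stored as (bands, rows, cols); names and shapes are not taken from the paper.

      import numpy as np

      def spectral_klt(cube):
          """Apply a KLT (PCA) along the spectral axis of a hyperspectral cube.

          Returns the decorrelated components plus the basis and mean needed
          to invert the transform.
          """
          bands, rows, cols = cube.shape
          x = cube.reshape(bands, -1).astype(np.float64)
          mean = x.mean(axis=1, keepdims=True)
          x0 = x - mean
          cov = x0 @ x0.T / x0.shape[1]        # band-by-band covariance
          w, v = np.linalg.eigh(cov)
          basis = v[:, np.argsort(w)[::-1]]    # sort eigenvectors by variance
          y = basis.T @ x0                     # decorrelated spectral components
          return y.reshape(bands, rows, cols), basis, mean

      # A 2D wavelet transform and a 2DSPECK-style coder would then be applied
      # to each decorrelated component plane.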

  12. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

    This paper studies wavelet-based ECW remote sensing image compression technology. Compared with the traditional JPEG compression technology and the newer wavelet-based JPEG2000, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. The use of the ECW SDK is also discussed, and it proves to be a fast and effective way to compress China-Brazil Earth Resource Satellite (CBERS) images.

  13. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. Medium-sized hospitals in the US are estimated to generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new efficient biometric-based person verification/authentication systems. In the future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  14. Counter-propagation neural network for image compression

    NASA Astrophysics Data System (ADS)

    Sygnowski, Wojciech; Macukow, Bohdan

    1996-08-01

    Recently, several image compression techniques based on neural network algorithms have been developed. In this paper, we propose a new method for image compression: the modified counter-propagation neural network algorithm, which is a combination of the self-organizing map of Kohonen and the outstar structure of Grossberg. This algorithm has been successfully used in many applications. The modification presented here also demonstrates interesting performance in comparison with standard techniques. It was found that at the learning stage any image can be used for network training (without a significant influence on the network's operation), and that the compression ratio and quality depend on the size of the basic element (the number of pixels in the cluster) and the amount of error tolerated during processing.

  15. [Spatially modulated Fourier transform imaging spectrometer data compression research].

    PubMed

    Huang, Min; Xiangli, Bin; Yuan, Yan; Shen, Zhong; Lu, Qun-bo; Wang, Zhong-hou; Liu, Xue-bin

    2010-01-01

    The Fourier transform imaging spectrometer is a new technique that has developed rapidly over the past ten years. When it is used on a satellite, data transmission limits require the original data acquired by the Fourier transform imaging spectrometer to be compressed before transmission; the data are then received on the ground and decompressed, after which data processing yields the spectral data used by the end user. Data compression for Fourier transform imaging spectrometers is a new technique, and few papers at home or abroad have addressed it. In this paper the authors present a data compression method that has been used in EDIS and has achieved good results. PMID:20302132

  16. Compressive image acquisition and classification via secant projections

    NASA Astrophysics Data System (ADS)

    Li, Yun; Hegde, Chinmay; Sankaranarayanan, Aswin C.; Baraniuk, Richard; Kelly, Kevin F.

    2015-06-01

    Given its importance in a wide variety of machine vision applications, extending high-speed object detection and recognition beyond the visible spectrum in a cost-effective manner presents a significant technological challenge. As a step in this direction, we developed a novel approach for target image classification using a compressive sensing architecture. Here we report the first implementation of this approach utilizing the compressive single-pixel camera system. The core of our approach rests on the design of new measurement patterns, or projections, that are tuned to objects of interest. Our measurement patterns are based on the notion of secant projections of image classes that are constructed using two different approaches. Both approaches show at least a twofold improvement in terms of the number of measurements over the conventional, data-oblivious compressive matched filter. As more noise is added to the image, the second method proves to be the most robust.

  17. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are often corrupted by noise, tends to cause over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT), which can also suppress noise without losing much flaw-relevant information, is presented in this work. Exploiting the multi-resolution and inter-scale correlation properties of the DWT, a simple scheme called DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, by signal, or by both. Better denoising can then be realized by selectively thresholding the DWCs. In the 'local quantization' step, different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression ratio. Meanwhile, the decompressed image shows suppressed noise and preserved flaw features.
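
    A minimal sketch of the general idea, using the PyWavelets library: a multi-level 2D DWT is computed, detail coefficients are soft-thresholded (a crude stand-in for the classification-driven selective thresholding and local quantization described above), and the image is reconstructed. The wavelet choice, decomposition level and threshold value are arbitrary assumptions.

      import numpy as np
      import pywt

      def denoise_and_sparsify(image, wavelet="db4", level=3, threshold=10.0):
          """Multi-level 2D DWT with soft thresholding of the detail coefficients.

          Small detail coefficients (mostly noise) are zeroed, which suppresses
          noise and leaves more zeros for a subsequent entropy coder.
          """
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          approx, details = coeffs[0], coeffs[1:]
          shrunk = [tuple(pywt.threshold(d, threshold, mode="soft") for d in band)
                    for band in details]
          return pywt.waverec2([approx] + shrunk, wavelet)

      # Example on a noisy synthetic image.
      rng = np.random.default_rng(1)
      img = np.zeros((128, 128))
      img[32:96, 32:96] = 100.0
      restored = denoise_and_sparsify(img + rng.normal(0, 5, img.shape))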

  18. High-speed lossless compression for angiography image sequences

    NASA Astrophysics Data System (ADS)

    Kennedy, Jonathon M.; Simms, Michael; Kearney, Emma; Dowling, Anita; Fagan, Andrew; O'Hare, Neil J.

    2001-05-01

    High speed processing of large amounts of data is a requirement for many diagnostic quality medical imaging applications. A demanding example is the acquisition, storage and display of image sequences in angiography. The functional performance requirements for handling angiography data were identified. A new lossless image compression algorithm was developed, implemented in C++ for the Intel Pentium/MS-Windows environment and optimized for speed of operation. Speeds of up to 6M pixels per second for compression and 12M pixels per second for decompression were measured. This represents an improvement of up to 400% over the next best high-performance algorithm (LOCO-I) without significant reduction in compression ratio. Performance tests were carried out at St. James's Hospital using actual angiography data. Results were compared with the lossless JPEG standard and other leading methods such as JPEG-LS (LOCO-I) and the lossless wavelet approach proposed for JPEG 2000. Our new algorithm represents a significant improvement in the performance of lossless image compression technology without using specialized hardware. It has been applied successfully to image sequence decompression at video rate for angiography, one of the most challenging application areas in medical imaging.

  19. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not be accurate to reflect the perceptual quality of compressed images, which is also affected dramatically by the characteristics of human visual system (HVS), such as masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of HVS, a randomness map is proposed to measure the masking effect and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of HVS. Since the masking effect highly depends on the structural randomness, the prediction error from neighborhood with a statistical model is used to measure the significance of masking. Meanwhile, the imperceptible signal with high frequency could be removed by preprocessing with low-pass filters. The relation is investigated between the distortions before and after masking effect, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170

  20. A specific measurement matrix in compressive imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Fen; Wei, Ping; Ke, Jun

    2011-11-01

    Compressed sensing or compressive sampling (CS) is a new framework for simultaneous data sampling and compression which was proposed by Candes, Donoho, and Tao several years ago. Ever since the advent of the single-pixel camera, one of the CS applications, compressive imaging (CI, also referred to as feature-specific imaging), has attracted the interest of numerous researchers. However, it is still a challenging problem to choose a simple and efficient measurement matrix in such a hardware system, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of an order-N Hadamard matrix and discuss the validity of the matrix theoretically. The advantages of the matrix are its universality and its easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Because of the memory limitations of the hardware system and of the personal computer used to simulate the process, it is impractical to create a matrix large enough to process large-scale images directly. To solve this problem, a block-wise scheme is introduced for large-scale images, and the experimental results demonstrate the validity of this method.
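
    A small sketch of such a measurement matrix, using SciPy's Hadamard construction: the odd-numbered rows (1st, 3rd, 5th, ... in 1-based counting) of an N x N Hadamard matrix are selected, and the first m of them form the sensing operator; the +/-1 entries are what make an optical implementation convenient. The sizes chosen are illustrative, and the block-wise handling of large images is not shown.

      import numpy as np
      from scipy.linalg import hadamard

      def odd_row_hadamard(n, m):
          """m x n measurement matrix from the odd rows of an n x n Hadamard matrix.

          n must be a power of two and m at most n // 2.
          """
          h = hadamard(n)            # entries are +1 / -1
          odd_rows = h[0::2, :]      # 1-based odd rows = 0-based even indices
          return odd_rows[:m, :]

      phi = odd_row_hadamard(64, 16)     # 16 measurements of a length-64 signal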

  1. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  2. Compressive microscopic imaging with "positive-negative" light modulation

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing

    2016-07-01

    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation by a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by an order of magnitude compared with that obtained by a conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against light-source instability and may be applied under very weak light conditions. The nature of the technique and its noise sources are analyzed in depth. The realization of this technique represents a significant step toward practical applications of compressive microscopic imaging in the fields of biology and materials science.
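
    The differential measurement idea can be sketched as follows: each binary DMD pattern is displayed together with its complement, and the two bucket-detector values are subtracted, cancelling the constant background and much of the source-intensity fluctuation. This is a schematic simulation only; pattern counts, sizes and names are assumptions, and the compressive reconstruction step is omitted.

      import numpy as np

      def differential_measurements(image, patterns):
          """Simulate 'positive-negative' single-pixel measurements.

          Each flattened 0/1 pattern p yields one differential value:
          the bucket signal for p minus the bucket signal for its complement.
          """
          x = image.ravel()
          return np.array([p @ x - (1 - p) @ x for p in patterns])

      rng = np.random.default_rng(2)
      img = rng.random((32, 32))
      pats = rng.integers(0, 2, size=(200, 32 * 32))
      y = differential_measurements(img, pats)   # 200 differential measurements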

  3. An innovative lossless compression method for discrete-color images.

    PubMed

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS images, and binary images. The method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories by 90% in most cases, outperforming JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average. PMID:25330487

  4. Color transformation for the compression of CMYK images

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.

    1999-12-01

    A CMYK image is often viewed as a large amount of device-dependent data ready to be printed. In several circumstances, CMYK data needs to be compressed, but conversion to and from device-independent spaces is imprecise at best. In this paper, with the goal of compressing CMYK images, color space transformations were studied. In order to be of practical importance, we developed a new transformation to a YYCC color space which is device-independent and image-independent, i.e., a simple linear transformation between device-dependent color spaces. The transformation from CMYK to YYCC was studied extensively for image compression. For that purpose, a distortion measure accounting for both device dependence and spatial visual sensitivity was developed. It is shown that the transformation to YYCC consistently outperforms the transformation to other device-dependent 4D color spaces such as YCbCrK, while being competitive with the image-dependent KLT-based approach. Other interesting conclusions were also drawn from the experiments, among them that color transformations are not always advantageous over independent compression of the CMYK color planes, and that chrominance subsampling is rarely advantageous.

  5. Differentiation applied to lossless compression of medical images.

    PubMed

    Nijim, Y W; Stearns, S D; Mikhael, W B

    1996-01-01

    Lossless compression of medical images using a proposed differentiation technique is explored. The scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach for the lossless compression of magnetic resonance (MR) images and ultrasonic images is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequences of these techniques are coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, on 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images. PMID:18215936
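
    A minimal sketch of the weighted-difference idea, assuming fixed weights on the left and upper neighbours: because the weights are known to both encoder and decoder, no side information is needed, and the integer residuals would then be passed to an entropy coder such as the arithmetic coder mentioned above. The particular weights and boundary handling are illustrative, not the paper's.

      import numpy as np

      def weighted_difference_residuals(image, w=(0.5, 0.5)):
          """Residuals between each pixel and a fixed weighted average of its
          left and upper neighbours (first row/column use a single neighbour)."""
          img = image.astype(np.int32)
          pred = np.zeros_like(img)
          pred[1:, 1:] = np.round(w[0] * img[1:, :-1]       # left neighbour
                                  + w[1] * img[:-1, 1:]     # upper neighbour
                                  ).astype(np.int32)
          pred[0, 1:] = img[0, :-1]
          pred[1:, 0] = img[:-1, 0]
          return img - pred       # lossless: the decoder rebuilds pred identically

      residuals = weighted_difference_residuals(np.arange(64).reshape(8, 8))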

  6. Ultrasonic elastography using sector scan imaging and a radial compression.

    PubMed

    Souchon, Rémi; Soualmi, Lahbib; Bertrand, Michel; Chapelon, Jean-Yves; Kallel, Faouzi; Ophir, Jonathan

    2002-05-01

    Elastography is an imaging technique based on strain estimation in soft tissues under quasi-static compression. The stress is usually created by a compression plate, and the target is imaged by an ultrasonic linear array. This configuration is used for breast elastography, and has been investigated both theoretically and experimentally. Phenomena such as strain decay with tissue depth and strain concentrations have been reported. However, in some in vivo situations, such as prostate or blood vessel imaging, this set-up cannot be used. We propose a device to acquire in vivo elastograms of the prostate. The compression is applied by inflating a balloon that covers a transrectal sector probe. The 1D algorithm used to calculate the radial strain fails if the center of the imaging probe does not correspond to the center of the compressor. Therefore, experimental elastograms are calculated with a 2D algorithm that accounts for tangential displacements of the tissue. In this article, in order to gain a better understanding of the image formation process, the use of ultrasonic sector scans to image the radial compression of a target is investigated. Elastograms of homogeneous phantoms are presented and compared with simulated images. Both show a strain decay with tissue depth. Then experimental and simulated elastograms of a phantom that contains a hard inclusion are presented, showing that strain concentrations occur as well. A method to compensate for strain decay and therefore to increase the contrast of the strain elastograms is proposed. It is expected that such information will help to interpret and possibly improve the elastograms obtained via radial compression. PMID:12160060

  7. Noise impact on error-free image compression.

    PubMed

    Lo, S B; Krasner, B; Mun, S K

    1990-01-01

    Some radiological images with different levels of noise have been studied using various decomposition methods incorporated with Huffman and Lempel-Ziv coding. When more correlations exist between pixels, these techniques can be made more efficient. However, additional noise disrupts the correlation between adjacent pixels and leads to a less compressed result. Hence, prior to a systematic compression in a picture archiving and communication system (PACS), two main issues must be addressed: the true information range which exists in a specific type of radiological image, and the costs and benefits of compression for the PACS. It is shown that with laser film digitized magnetic resonance images, 10-12 b are produced, although the lower 2-4 b show the characteristics of random noise. The addition of the noise bits is shown to adversely affect the amount of compression given by various reversible compression techniques. The sensitivity of different techniques to different levels of noise is examined in order to suggest strategies for dealing with noise. PMID:18222765

  8. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are first measured with a key-controlled pseudo-random measurement matrix constructed using a logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including an improved random pixel exchanging technique and the fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
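
    As an illustration of the key-controlled measurement step described above, the sketch below builds a pseudo-random measurement matrix from the logistic map x(k+1) = mu * x(k) * (1 - x(k)), with the initial value and parameter acting as the secret key. The burn-in length, scaling and names are assumptions; the fusion rule, pixel exchanging and fractional Fourier stages are not shown.

      import numpy as np

      def logistic_measurement_matrix(m, n, x0=0.37, mu=3.99, burn_in=1000):
          """Key-controlled pseudo-random m x n matrix from the logistic map."""
          x = x0
          for _ in range(burn_in):          # discard the transient
              x = mu * x * (1 - x)
          seq = np.empty(m * n)
          for i in range(m * n):
              x = mu * x * (1 - x)
              seq[i] = x
          phi = (seq - 0.5).reshape(m, n)   # roughly zero-mean entries
          return phi / np.sqrt(m)           # scaling typical for CS matrices

      phi = logistic_measurement_matrix(64, 256)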

  9. Measurement kernel design for compressive imaging under device constraints

    NASA Astrophysics Data System (ADS)

    Shilling, Richard; Muise, Robert

    2013-05-01

    We look at the design of projective measurements for compressive imaging based upon image priors and device constraints. If one assumes that image patches from natural imagery can be modeled as a low rank manifold, we develop an optimality criterion for a measurement matrix based upon separating the canonical elements of the manifold prior. We then describe a stochastic search algorithm for finding the optimal measurements under device constraints based upon a subspace mismatch algorithm. The algorithm is then tested on a prototype compressive imaging device designed to collect an 8x4 array of projective measurements simultaneously. This work is based upon work supported by DARPA and the SPAWAR System Center Pacific under Contract No. N66001-11-C-4092. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

  10. Iterative compressive sampling for hyperspectral images via source separation

    NASA Astrophysics Data System (ADS)

    Kamdem Kuiteing, S.; Barni, Mauro

    2014-03-01

    Compressive Sensing (CS) is receiving increasing attention as a way to lower storage and compression requirements for on-board acquisition of remote-sensing images. In the case of multi- and hyperspectral images, however, exploiting the spectral correlation poses severe computational problems. Yet, exploiting such a correlation would provide significantly better performance in terms of reconstruction quality. In this paper, we build on a recently proposed 2D CS scheme based on blind source separation to develop a computationally simple, yet accurate, prediction-based scheme for acquisition and iterative reconstruction of hyperspectral images in a CS setting. Preliminary experiments carried out on different hyperspectral images show that our approach yields a dramatic reduction of computational time while ensuring reconstruction performance similar to that of much more complicated 3D reconstruction schemes.

  11. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. This fern, for example, is encoded in less than 50 bytes and yet can be displayed at resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
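
    The fern mentioned above is commonly illustrated with Barnsley's iterated function system: four affine maps, chosen at random with fixed probabilities, generate arbitrarily detailed images from roughly two dozen numbers. The sketch below uses the standard published fern coefficients as an example of this low information content; it shows generic IFS decoding, not the Fractal Transform encoder developed at Iterated Systems.

      import numpy as np

      # Barnsley's fern: four affine maps (matrix, offset) and their probabilities.
      MAPS = [
          (np.array([[0.00, 0.00], [0.00, 0.16]]), np.array([0.0, 0.00]), 0.01),
          (np.array([[0.85, 0.04], [-0.04, 0.85]]), np.array([0.0, 1.60]), 0.85),
          (np.array([[0.20, -0.26], [0.23, 0.22]]), np.array([0.0, 1.60]), 0.07),
          (np.array([[-0.15, 0.28], [0.26, 0.24]]), np.array([0.0, 0.44]), 0.07),
      ]

      def barnsley_fern(n_points=50_000, seed=0):
          """Generate fern points by iterating randomly chosen affine maps."""
          rng = np.random.default_rng(seed)
          probs = [p for _, _, p in MAPS]
          pts = np.empty((n_points, 2))
          x = np.zeros(2)
          for i in range(n_points):
              a, b, _ = MAPS[rng.choice(len(MAPS), p=probs)]
              x = a @ x + b
              pts[i] = x
          return pts

      points = barnsley_fern()   # arbitrarily fine detail from ~24 stored numbers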

  12. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  13. Adaptive polyphase subband decomposition structures for image compression.

    PubMed

    Gerek, O N; Cetin, A E

    2000-01-01

    Subband decomposition techniques have been extensively used for data coding and analysis. In most filter banks, the goal is to obtain subsampled signals corresponding to different spectral regions of the original data. However, this approach leads to various artifacts in images having spatially varying characteristics, such as images containing text, subtitles, or sharp edges. In this paper, adaptive filter banks with perfect reconstruction property are presented for such images. The filters of the decomposition structure which can be either linear or nonlinear vary according to the nature of the signal. This leads to improved image compression ratios. Simulation examples are presented. PMID:18262904

  14. Compressed image transmission based on fountain codes

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Xinhong; Jiao, L. C.

    2011-11-01

    In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over a wireless channel. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes used for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, are described in the paper, and the UEP scheme performs better than the EEP scheme. The proposed scheme not only adaptively adjusts the length of the fountain codes according to the channel loss rate but can also reconstruct the image even on a bad channel.

  15. Implementation of modified SPIHT algorithm for Compression of images

    NASA Astrophysics Data System (ADS)

    Kurume, A. V.; Yana, D. M.

    2011-12-01

    We present a throughput-efficient FPGA implementation of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm for image compression. SPIHT exploits the inherent redundancy among wavelet coefficients and is suited for both grey-scale and color images. The SPIHT algorithm uses dynamic data structures, which hinder hardware realization. We have modified the basic SPIHT in two ways: first, by using static (fixed) mappings to represent significant information, and second, by interchanging the sorting and refinement passes.

  16. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  17. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences remain fixed, and image restoration amounts to the inverse operation of an ill-posed problem without fixed rules. The super-resolution reconstruction ability, the algorithms' range of application, and the stability of the reconstruction are therefore limited. To this end, we propose a super-resolution reconstruction method based on compressed sensing in this paper. In the method, we selected a Toeplitz matrix as the measurement matrix and realized it by a phase-mask method. We investigated the complementary matching pursuit algorithm and selected it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we use an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. The image comparisons and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
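
    A generic sketch of a random-Toeplitz measurement matrix of the kind mentioned above: the kept rows of the operator depend on only m + n - 1 random generating values, which is attractive for hardware realization. The +/-1 entries, sizes and scaling are assumptions chosen for illustration; the phase-mask optical realization and the complementary matching pursuit recovery are not modeled.

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_measurement_matrix(m, n, seed=0):
          """m x n measurement matrix: the first m rows of an n x n random
          +/-1 Toeplitz matrix, scaled for compressive sensing."""
          rng = np.random.default_rng(seed)
          col = rng.choice([-1.0, 1.0], size=n)                  # first column
          row = np.concatenate(([col[0]], rng.choice([-1.0, 1.0], size=n - 1)))
          return toeplitz(col, row)[:m, :] / np.sqrt(m)

      phi = toeplitz_measurement_matrix(64, 256)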

  18. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  19. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  20. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources from optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, the JPEG2K-E IP core from Alma implements the compression algorithm [2], and Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  1. Compressing Image Data While Limiting the Effects of Data Losses

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2006-01-01

    ICER is computer software that can perform both lossless and lossy compression and decompression of gray-scale-image data using discrete wavelet transforms. Designed for primary use in transmitting scientific image data from distant spacecraft to Earth, ICER incorporates an error-containment scheme that limits the adverse effects of loss of data and is well suited to the data packets transmitted by deep-space probes. The error-containment scheme includes utilization of the algorithm described in "Partitioning a Gridded Rectangle Into Smaller Rectangles " (NPO-30479), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 56. ICER has performed well in onboard compression of thousands of images transmitted from the Mars Exploration Rovers.

  2. An optimized hybrid encode based compression algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua

    2013-12-01

    Compression is a key procedure in hyperspectral image processing because the massive data volume creates great difficulty for data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding, which combines optimized band grouping with the wavelet transform, is proposed. Given the characteristics of the correlation coefficients between adjacent spectral bands, an optimized band grouping and reference frame selection method is first used to group the bands adaptively. Then, according to the number of bands in each group, redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and a minimum-residual-based linear prediction method. Embedded code streams are then obtained by encoding the residual images using an improved embedded-zerotree-wavelet-based SPIHT encoding method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves good performance in reconstructed image quality and computational complexity. The average peak signal-to-noise ratio (PSNR) is increased by 0.21~0.81 dB compared with other off-the-shelf algorithms at the same compression ratio.
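
    The adaptive band grouping step can be illustrated with a simple correlation rule: adjacent bands stay in the same group while their correlation coefficient with the previous band exceeds a threshold, and a new group is started when it drops. The threshold and the greedy rule below are assumptions for illustration; the paper's reference-frame selection and further optimization are not reproduced.

      import numpy as np

      def group_bands(cube, threshold=0.95):
          """Group adjacent spectral bands of a (bands, rows, cols) cube by the
          correlation coefficient between neighbouring bands."""
          bands = cube.shape[0]
          flat = cube.reshape(bands, -1).astype(np.float64)
          groups = [[0]]
          for b in range(1, bands):
              r = np.corrcoef(flat[b - 1], flat[b])[0, 1]
              if r >= threshold:
                  groups[-1].append(b)    # highly correlated: same group
              else:
                  groups.append([b])      # correlation drops: start a new group
          return groups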

  3. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  4. Influence of Lossy Compressed DEM on Radiometric Correction for Land Cover Classification of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Moré, G.; Pesquer, L.; Blanes, I.; Serra-Sagristà, J.; Pons, X.

    2012-12-01

    World-coverage Digital Elevation Models (DEM) have progressively increased their spatial resolution (e.g., ETOPO, SRTM, or Aster GDEM) and, consequently, their storage requirements. On the other hand, lossy data compression facilitates accessing, sharing and transmitting large spatial datasets in environments with limited storage. However, since lossy compression modifies the original information, rigorous studies are needed to understand its effects and consequences. The present work analyzes the influence of DEM quality, as modified by lossy compression, on the radiometric correction of remote sensing imagery, and the eventual propagation of the uncertainty into the resulting land cover classification. Radiometric correction is usually composed of two parts: atmospheric correction and topographic correction. For topographic correction, the DEM provides the altimetry information that allows modeling the incident radiation on the terrain surface (cast shadows, self shadows, etc.). To quantify the effects of DEM lossy compression on the radiometric correction, we use the radiometrically corrected images for classification purposes and compare the accuracy of two standard coding techniques for a wide range of compression ratios. The DEM was obtained by resampling the DEM v.2 of Catalonia (ICC), originally at 15 m resolution, to the Landsat TM resolution. The Aster DEM was used to fill the gaps beyond the administrative limits of Catalonia. The DEM was lossy compressed with two coding standards at compression ratios 5:1, 10:1, 20:1, 100:1 and 200:1. The employed coding standards were JPEG2000 and CCSDS-IDC; the former is an international ISO/ITU-T standard for almost any type of image, while the latter is a recommendation of the CCSDS consortium for mono-component remote sensing images. Both techniques are wavelet-based followed by an entropy-coding stage. Also, for large compression ratios, both techniques need a post processing for correctly

  5. A novel image fusion approach based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia

    2015-11-01

    Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. First, directly fusing sensing measurements may produce more uncertain results with high reconstruction error. Second, using a single fusion rule may result in blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve these problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. A dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate the low-pass subbands, while an edge-retention-based fusion rule is proposed to fuse the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed content and preserves the salient structure, and indicate that our proposed method achieves better visual quality than current state-of-the-art methods.

  6. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and by D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method, called split Bregman iteration, to reconstruct the original image. This method "decouples" the energies, using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the sub-problems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of split Bregman methods on sonar images.
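
    Split Bregman alternates between a quadratic (ℓ2) sub-problem for the image estimate and an ℓ1 sub-problem that is solved in closed form by the soft-thresholding (shrinkage) operator, while a Bregman variable accumulates the constraint mismatch. The sketch below shows only the shrinkage step as a generic illustration; the full solver and its parameters are not reproduced from the paper.

      import numpy as np

      def shrink(v, lam):
          """Soft-thresholding operator: the minimizer of lam*|x| + 0.5*(x - v)^2,
          applied element-wise; entries with magnitude <= lam are set to zero."""
          return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

      # In each split Bregman iteration: solve the l2 sub-problem for the image,
      # apply shrink() to update the auxiliary l1 variable, then update the
      # Bregman variable with the residual of the splitting constraint.
      print(shrink(np.array([-3.0, -0.5, 0.2, 2.5]), 1.0))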

  7. A compressed sensing approach for enhancing infrared imaging resolution

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng; Liu, Ji-ying

    2012-11-01

    This paper presents a novel approach for improving infrared imaging resolution by the use of Compressed Sensing (CS). Instead of sensing raw pixel data, the image sensor measures the compressed samples of the observed image through a coded aperture mask placed on the focal plane of the optical system, and then the image reconstruction can be conducted from these samples using an optimal algorithm. The resolution is determined by the size of the coded aperture mask other than that of the focal plane array (FPA). The attainable quality of the reconstructed image strongly depends on the choice of the coded aperture mode. Based on the framework of CS, we carefully design an optimum mask pattern and use a multiplexing scheme to achieve multiple samples. The gradient projection for sparse reconstruction (GPSR) algorithm is employed to recover the image. The mask radiation effect is discussed by theoretical analyses and numerical simulations. Experimental results are presented to show that the proposed method enhances infrared imaging resolution significantly and ensures imaging quality.

  8. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  9. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  10. GPU-specific reformulations of image compression algorithms

    NASA Astrophysics Data System (ADS)

    Matela, Jiří; Holub, Petr; Jirman, Martin; Årom, Martin

    2012-10-01

    Image compression has a number of applications in various fields, where processing throughput and/or latency is a crucial attribute and the main limitation of state-of-the-art implementations of compression algorithms. At the same time contemporary GPU platforms provide tremendous processing power but they call for specific algorithm design. We discuss key components of successful design of compression algorithms for GPUs and demonstrate this on JPEG and JPEG2000 implementations, each of which contains several types of algorithms requiring different approaches to efficient parallelization for GPUs. Performance evaluation of the optimized JPEG and JPEG2000 chain is used to demonstrate the importance of various aspects of GPU programming, especially with respect to real-time applications.

  11. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  12. Colored adaptive compressed imaging with a single photodiode.

    PubMed

    Yan, Yiyun; Dai, Huidong; Liu, Xingjiong; He, Weiji; Chen, Qian; Gu, Guohua

    2016-05-10

    Computational ghost imaging is commonly used to reconstruct grayscale images. Currently, however, there is little research aimed at reconstructing color images. In this paper, we theoretically and experimentally demonstrate a colored adaptive compressed imaging method. Benefiting from imaging in YUV color space, the proposed method adequately exploits the sparsity of the U, V components in the wavelet domain, the interdependence between luminance and chrominance, and human visual characteristics. The simulation and experimental results show that our method greatly reduces the measurements required and offers better image quality compared to recovering the red (R), green (G), and blue (B) components separately in RGB color space. As the application of a single photodiode increases, our method shows great potential in many fields. PMID:27168280

  13. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
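
    The mean-subtraction method can be sketched as follows, assuming the spatially-low-pass subband is stored as an array of spatial planes: the mean of each plane is computed and removed, leaving zero-mean data that are better suited to 2D-style coding, and the means are kept as side information for reconstruction. Array layout and names are assumptions.

      import numpy as np

      def subtract_plane_means(lowpass_subband):
          """Remove the mean of every spatial plane of a (planes, rows, cols)
          spatially-low-pass subband; return zero-mean data plus the means
          needed by the decoder to restore the original values."""
          means = lowpass_subband.mean(axis=(1, 2), keepdims=True)
          return lowpass_subband - means, means.squeeze()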

  14. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    NASA Astrophysics Data System (ADS)

    Schindler, J.; Páta, P.; Klíma, M.; Fliegel, K.

    This paper deals with the compression of image data in astronomical applications. Astronomical images have characteristic properties -- high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images, and their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loève transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, which was designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
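    A toy version of the first (statistical) approach can be written in a few lines. The sketch below (NumPy; the block size, number of retained components, and quantization step are arbitrary choices, not parameters of the ACC or KLT coders described in the paper) learns a Karhunen-Loève basis from an image's own 8x8 blocks, keeps the strongest components, and quantizes them uniformly.

      import numpy as np

      def klt_compress(img, block=8, keep=16, step=4.0):
          # learn the KLT basis from the image's own blocks, keep the strongest
          # components, and quantize the coefficients uniformly
          h, w = img.shape
          h -= h % block; w -= w % block
          blocks = (img[:h, :w]
                    .reshape(h // block, block, w // block, block)
                    .swapaxes(1, 2)
                    .reshape(-1, block * block))
          mean = blocks.mean(axis=0)
          cov = np.cov(blocks - mean, rowvar=False)
          eigval, eigvec = np.linalg.eigh(cov)
          basis = eigvec[:, ::-1][:, :keep]                  # strongest components first
          coeffs = (blocks - mean) @ basis
          q = np.round(coeffs / step)                        # uniform quantization
          recon = (q * step) @ basis.T + mean
          rmse = float(np.sqrt(np.mean((recon - blocks) ** 2)))
          return q.astype(np.int32), rmse

      img = np.random.rand(64, 64) * 255                     # stand-in frame
      q, rmse = klt_compress(img)
      print(q.shape, round(rmse, 2))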

  15. Colorimetric-spectral clustering: a tool for multispectral image compression

    NASA Astrophysics Data System (ADS)

    Ciprian, R.; Carbucicchio, M.

    2011-11-01

    In this work a new compression method for multispectral images has been proposed: the 'colorimetric-spectral clustering'. The basic idea arises from the well-known cluster analysis, a multivariate analysis which finds the natural links between objects grouping them into clusters. In the colorimetric-spectral clustering compression method, the objects are the spectral reflectance factors of the multispectral images that are grouped into clusters on the basis of their colour difference. In particular two spectra can belong to the same cluster only if their colour difference is lower than a threshold fixed before starting the compression procedure. The performance of the colorimetric-spectral clustering has been compared to the k-means cluster analysis, in which the Euclidean distance between spectra is considered, to the principal component analysis and to the LabPQR method. The colorimetric-spectral clustering is able to preserve both the spectral and the colorimetric information of a multispectral image, allowing this information to be reproduced for all pixels of the image.
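    The core clustering rule can be sketched directly from the description above. In the snippet below (NumPy; it assumes each spectrum has already been reduced to a CIELAB triple and uses the simple CIE76 colour difference, whereas the paper works on the full reflectance factors with its own colour-difference threshold), a spectrum joins a cluster only if its colour difference to the cluster representative stays below the threshold.

      import numpy as np

      def cluster_by_colour_difference(lab, threshold):
          # a spectrum joins an existing cluster only if its CIE76 colour
          # difference to the cluster representative is below the threshold
          reps, labels = [], []
          for point in lab:
              for idx, rep in enumerate(reps):
                  if np.linalg.norm(point - rep) < threshold:
                      labels.append(idx)
                      break
              else:
                  reps.append(point)
                  labels.append(len(reps) - 1)
          return np.array(labels), np.array(reps)

      lab = np.random.rand(500, 3) * [100, 60, 60]           # stand-in L*a*b* values
      labels, reps = cluster_by_colour_difference(lab, threshold=10.0)
      print(len(reps), "clusters for", len(lab), "pixels")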

  16. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.

  17. Lossless compression of stromatolite images: a biogenicity index?

    PubMed

    Corsetti, Frank A; Storrie-Lombardi, Michael C

    2003-01-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (delta(c), the log ratio of the compressibility of the matrix versus that of the target) of a putative abiotic test stromatolite is significantly less than the delta(c) of a putative biotic test stromatolite. There is a clear separation in delta(c) between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date, including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers. PMID:14994715
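    The compressibility index is easy to reproduce with any off-the-shelf lossless coder. The sketch below (Python; zlib stands in for whatever lossless coder the authors used, and the two synthetic patches are placeholders for real matrix/stromatolite image pairs) computes delta(c) as the log ratio of the matrix patch's compressibility to the target patch's.

      import zlib
      import numpy as np

      def compressibility(patch):
          # bytes in / bytes out for a lossless (zlib) coding of an 8-bit patch
          raw = np.asarray(patch, dtype=np.uint8).tobytes()
          return len(raw) / len(zlib.compress(raw, 9))

      def compressibility_index(matrix_patch, target_patch):
          # delta(c): log ratio of matrix compressibility versus target compressibility
          return float(np.log(compressibility(matrix_patch) /
                              compressibility(target_patch)))

      matrix = np.random.randint(0, 256, (64, 64))           # noisy sediment stand-in
      target = np.tile(np.arange(64) % 8 * 32, (64, 1))      # ordered, laminated stand-in
      print(round(compressibility_index(matrix, target), 3))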

  18. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
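    A minimal sequential sketch of the "globally best merge first" idea is given below (NumPy; every pixel starts as its own region, adjacency is 4-connected, and the merge criterion is the difference of region means; the actual algorithm and its Massively Parallel Processor implementations evaluate candidate merges in parallel and use their own criterion).

      import numpy as np

      def best_merge_segmentation(img, n_regions=4):
          # every pixel starts as its own region; repeatedly merge the adjacent
          # pair of regions whose mean intensities are closest (globally best merge)
          h, w = img.shape
          labels = np.arange(h * w).reshape(h, w)
          sums = {i: float(v) for i, v in enumerate(img.ravel())}
          counts = {i: 1 for i in range(h * w)}
          while len(sums) > n_regions:
              pairs = set()
              for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
                  if a != b:
                      pairs.add((min(a, b), max(a, b)))
              for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
                  if a != b:
                      pairs.add((min(a, b), max(a, b)))
              mean = lambda r: sums[r] / counts[r]
              a, b = min(pairs, key=lambda p: abs(mean(p[0]) - mean(p[1])))
              labels[labels == b] = a                        # merge region b into a
              sums[a] += sums.pop(b)
              counts[a] += counts.pop(b)
          return labels

      img = np.kron(np.array([[0.0, 3.0], [6.0, 9.0]]), np.ones((4, 4)))
      img += 0.1 * np.random.randn(8, 8)                     # four noisy blocks
      print(len(np.unique(best_merge_segmentation(img))), "regions")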

  19. Compressive imaging and dual moire laser interferometer as metrology tools

    NASA Astrophysics Data System (ADS)

    Abolbashari, Mehrdad

    Metrology is the science of measurement and deals with measuring different physical aspects of objects. In this research the focus has been on two basic problems that metrologists encounter. The first problem is the trade-off between the range of measurement and the corresponding resolution; measuring the physical parameters of a large object or scene is accompanied by a loss of detailed information about small regions of the object. Indeed, instruments and techniques that perform coarse measurements differ from those that make fine measurements. This problem persists in the field of surface metrology, which deals with accurate measurement and detailed analysis of surfaces. For example, laser interferometry is used for fine measurement (at the nanometer scale), while measuring the form of an object, which lies in the field of coarse measurement, requires a different technique such as the moire technique. We introduced a new technique to combine measurements from instruments with finer resolution and smaller measurement range with measurements from instruments with coarser resolution and larger measurement range: we first measure the form of the object with a coarse measurement technique and then make fine measurements of features in regions of interest. The second problem is measurement conditions that lead to difficulties in measurement, including low-light conditions, a large range of intensity variation, hyperspectral measurement, etc. Under low-light conditions there is not enough light for the detector to sense the object, which results in poor measurements. A large range of intensity variation results in a measurement with some saturated regions on the camera as well as some dark regions. We use compressive-sampling-based imaging systems to address these problems. Single-pixel compressive imaging uses a single detector instead of an array of detectors and reconstructs a complete image after several measurements. In this research we examined compressive imaging for different

  20. Compression and storage of multiple images with modulating blazed gratings

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Tao, Shaohua

    2013-07-01

    A method for compressing, storing and reconstructing high-volume data is presented in this paper. Blazed gratings with different orientations and blaze angles are used to superpose many grayscale images, and customized spatial filters are used to selectively recover the corresponding images from the diffraction spots of the superposed images. The simulation shows that as many as 198 images with a size of 512 pixels × 512 pixels can be stored in a complex-amplitude diffractive optical element (DOE) of the same size, and the images recovered from the DOE are discernible with high visual quality. Optical encryption/decryption can also be added to the digitized DOE to enhance the security of the stored data.

  1. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT Multispectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
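    The spectral-redundancy step can be illustrated with a two-band example. The sketch below (NumPy; the gain/offset matching rule and the synthetic correlated bands are illustrative assumptions, not the authors' exact equalization) matches a band's brightness and contrast to a reference band and then encodes only the difference image, whose variance is much lower than that of the original band.

      import numpy as np

      def equalize_and_difference(ref_band, band):
          # match the band's mean/std to the reference (brightness/contrast
          # equalization), then keep only the difference image
          gain = ref_band.std() / (band.std() + 1e-9)
          offset = ref_band.mean() - gain * band.mean()
          equalized = gain * band + offset
          return ref_band - equalized, (gain, offset)

      ref = np.random.rand(64, 64) * 200                     # reference spectral band
      other = 0.6 * ref + 20 + np.random.randn(64, 64)       # correlated second band
      diff, params = equalize_and_difference(ref, other)
      print(round(float(other.std()), 2), "->", round(float(diff.std()), 2))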

  2. Real-Time Digital Compression Of Television Image Data

    NASA Technical Reports Server (NTRS)

    Barnes, Scott P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1990-01-01

    Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements a predictive coding process, in which each picture element (pixel) is predicted from the values of prior neighboring pixels, and the coded transmission expresses the difference between the actual and predicted values. Combines a differential pulse-code modulation process with a nonlinear, nonadaptive predictor, a nonuniform quantizer, and a multilevel Huffman encoder.
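    As a reminder of how the predictive loop works, here is a minimal DPCM sketch along a single scan line (Python; it uses a previous-pixel predictor and a uniform quantizer as stand-ins for the system's nonlinear, nonadaptive predictor and nonuniform quantizer, and omits the Huffman stage).

      import numpy as np

      def dpcm_encode(row, step=4):
          # predict each pixel from the previously reconstructed pixel and
          # transmit only the quantized prediction error
          recon_prev = 0.0
          codes, recon = [], []
          for pixel in row:
              diff = pixel - recon_prev                      # prediction error
              q = int(np.round(diff / step))                 # code sent to the decoder
              recon_prev = recon_prev + q * step             # track the decoder's state
              codes.append(q)
              recon.append(recon_prev)
          return np.array(codes), np.array(recon)

      row = np.clip(np.cumsum(np.random.randn(16) * 3) + 128, 0, 255)
      codes, recon = dpcm_encode(row)
      print(codes)
      print(round(float(np.abs(recon - row).max()), 2))      # bounded by step / 2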

  3. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  4. Hyperspectral pixel classification from coded-aperture compressive imaging

    NASA Astrophysics Data System (ADS)

    Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.

    2012-06-01

    This paper describes a new approach and its associated theoretical performance guarantees for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyper-spectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and it is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance preservation condition satisfied by the multiple shot CASSI system and depend on the number of measurements collected, code aperture pattern, and similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
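    A stripped-down, non-compressive version of the classification idea is sketched below (NumPy; it assigns a test vector to the class whose training dictionary reconstructs it with the smallest least-squares residual, whereas the paper recovers a sparse coefficient vector from CASSI measurements and derives its own performance bounds).

      import numpy as np

      def classify_by_subspace(y, class_dicts):
          # assign y to the class whose training dictionary reconstructs it best
          residuals = []
          for D in class_dicts:                              # D: measurements x samples
              coef, *_ = np.linalg.lstsq(D, y, rcond=None)
              residuals.append(float(np.linalg.norm(y - D @ coef)))
          return int(np.argmin(residuals)), residuals

      rng = np.random.default_rng(0)
      dicts = [rng.normal(size=(40, 6)) for _ in range(3)]   # 3 classes, 6 samples each
      y = dicts[1] @ rng.normal(size=6) + 0.01 * rng.normal(size=40)
      label, res = classify_by_subspace(y, dicts)
      print("predicted class:", label)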

  5. HVS-motivated quantization schemes in wavelet image compression

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj N.

    1996-11-01

    Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high-performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system -- transform, quantization, and entropy coding -- should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE quantizer, and four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
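    The step-size question reduces, in its simplest form, to scaling a uniform quantizer per subband. The sketch below (NumPy; the subband names and HVS weights are placeholder values, not the contrast-sensitivity-derived weights of the paper) quantizes each wavelet subband with a step size inversely proportional to an assumed visual-sensitivity weight, so visually important bands are quantized more finely.

      import numpy as np

      def quantize_subband(coeffs, base_step, hvs_weight):
          # larger HVS weight -> finer step -> less visible quantization error
          step = base_step / hvs_weight
          return np.round(coeffs / step).astype(np.int32), step

      # hypothetical weights: low-pass band protected most, diagonal detail least
      hvs_weights = {"LL": 4.0, "LH": 1.5, "HL": 1.5, "HH": 1.0}
      subbands = {name: np.random.randn(32, 32) * 20 for name in hvs_weights}
      for name, band in subbands.items():
          q, step = quantize_subband(band, base_step=8.0, hvs_weight=hvs_weights[name])
          print(name, "step", step, "nonzero", int(np.count_nonzero(q)))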

  6. Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.

    2001-01-01

    GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next-generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large-area detector arrays and readouts. Timely, high-spatial-resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in providing many scientific returns that traditionally could only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly, high-vertical-density winds over the full Earth disk is revolutionary. However, these new technologies bring many challenges for data transmission, archiving, and geophysical data processing. In this paper, we focus on data volume and downlink issues by conducting a GIFTS data compression experiment. We discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and for compression of uncalibrated and un-normalized interferograms. The effects of compression on signal degradation and on noise reduction in the interferogram and spectral domains are highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.
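    The principal-component idea described above can be sketched in a few lines (NumPy; the interferogram dimensions, component count, and random stand-in data are assumptions, and real GIFTS processing would train the components on representative measurements rather than on noise).

      import numpy as np

      def pca_compress(samples, n_components=20):
          # project mean-removed interferograms (rows) onto the leading principal
          # directions and keep only those scores as the compressed representation
          mean = samples.mean(axis=0)
          centered = samples - mean
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          basis = vt[:n_components]
          scores = centered @ basis.T
          recon = scores @ basis + mean
          return scores, recon

      samples = np.random.randn(200, 512)                    # stand-in interferograms
      scores, recon = pca_compress(samples)
      ratio = samples.size / scores.size
      print("compression ratio ~", round(ratio, 1),
            "rms error", round(float(np.sqrt(np.mean((recon - samples) ** 2))), 3))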

  7. Lossless compression of the geostationary imaging Fourier transform spectrometer (GIFTS) data via predictive partitioned vector quantization

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Allen H.-L.; Smuga-Otto, Maciek; Knuteson, Robert; Revercomb, Henry E.; Smith, William L., Sr.

    2007-09-01

    The Geostationary Imaging Fourier Transform Spectrometer (GIFTS), as part of NASA's New Millennium Program, is an advanced instrument to provide high-temporal-resolution measurements of atmospheric temperature and water vapor, which will greatly facilitate the detection of rapid atmospheric changes associated with destructive weather events, including tornadoes, severe thunderstorms, flash floods, and hurricanes. The Committee on Earth Science and Applications from Space under the National Academy of Sciences recommended that NASA and NOAA complete the fabrication, testing, and space qualification of the GIFTS instrument and that they support the international effort to launch GIFTS by 2008. Lossless data compression is critical for the overall success of the GIFTS experiment, or any other very high data rate experiment where the data is to be disseminated to the user community in real-time and archived for scientific studies and climate assessment. In general, lossless data compression is needed for high data rate hyperspectral sounding instruments such as GIFTS for (1) transmitting the data down to the ground within the bandwidth capabilities of the satellite transmitter and ground station receiving system, (2) compressing the data at the ground station for distribution to the user community (as is traditionally performed with GOES data via satellite rebroadcast), and (3) archival of the data without loss of any information content so that it can be used in scientific studies and climate assessment for many years after the date of the measurements. In this paper we study lossless compression of GIFTS data that has been collected as part of the calibration or ground based tests that were conducted in 2006. The predictive partitioned vector quantization (PPVQ) is investigated for higher lossless compression performance. PPVQ consists of linear prediction, channel partitioning and vector quantization. It yields an average compression ratio of 4.65 on the GIFTS test

  8. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for each region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques that are selected based on image contextual information.
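    At its core, the rule-based selection amounts to measuring the first-order entropy of each candidate remapping and keeping the lowest. The sketch below (NumPy; only two simple remappings are tried and the synthetic image is a placeholder, whereas the paper's rule base also covers bilinear interpolation and block-based linear prediction, followed by region segmentation and arithmetic coding) illustrates the selection step.

      import numpy as np

      def entropy(values):
          # first-order entropy in bits per sample
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def remap_candidates(img):
          # two simple remappings: horizontal DPCM residuals and residuals of a
          # two-neighbour average predictor (stand-ins for the paper's full set)
          img = img.astype(np.int32)
          dpcm = np.diff(img, axis=1)
          avg_pred = img[1:, 1:] - (img[:-1, 1:] + img[1:, :-1]) // 2
          return {"raw": img, "dpcm": dpcm, "avg_pred": avg_pred}

      img = np.clip(np.cumsum(np.random.randn(64, 64), axis=1) * 5 + 128, 0, 255)
      img = img.astype(np.uint8)                             # stand-in satellite band
      scores = {name: entropy(x) for name, x in remap_candidates(img).items()}
      print(scores, "-> choose", min(scores, key=scores.get))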

  9. Compressive sensing for direct millimeter-wave holographic imaging.

    PubMed

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang

    2015-04-10

    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue applicability for near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variation, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear-array, and two-dimensional-array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate. PMID:25967314

  10. Fast Second Degree Total Variation Method for Image Compressive Sensing

    PubMed Central

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second-degree total variation (HDTV2) regularization. First, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second-degree image derivatives under the spectral decomposition framework. Second, using this equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we give a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms based on the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and convergence speed. PMID:26361008