Science.gov

Sample records for image compression recommendation

  1. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  2. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
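
    A minimal Python sketch of the wavelet-plus-bit-plane idea described above, using a Haar transform as a stand-in for the CCSDS 9/7 wavelet and a plain magnitude bit-plane split in place of the Recommendation's coded bit-plane stages; it is illustrative only, not the standardized algorithm.

      import numpy as np

      def haar2d(block):
          # One level of a 2-D Haar DWT (stand-in for the CCSDS 9/7 wavelet).
          lo = (block[:, 0::2] + block[:, 1::2]) / 2.0
          hi = (block[:, 0::2] - block[:, 1::2]) / 2.0
          rows = np.hstack([lo, hi])
          lo2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
          hi2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
          return np.vstack([lo2, hi2])

      def bit_planes(coeffs, n_planes=8):
          # Sign bits plus magnitude bit planes, most significant plane first.
          mags = np.abs(np.rint(coeffs)).astype(np.int64)
          signs = (coeffs < 0).astype(np.uint8)
          planes = [((mags >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
          return signs, planes

      img = np.random.randint(0, 256, (8, 8)).astype(float)
      signs, planes = bit_planes(haar2d(img))
      # Sending the planes progressively (and stopping early) trades fidelity for data volume.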

  3. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
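
    As a point of reference, the NMSE used above can be computed as the energy of the difference image normalized by the energy of the original image; the short sketch below assumes that normalization, which may differ in detail from the dissertation's exact definition.

      import numpy as np

      def nmse(original, reconstructed):
          # Energy of the difference image divided by the energy of the original image.
          diff = original.astype(float) - reconstructed.astype(float)
          return float(np.sum(diff ** 2) / np.sum(original.astype(float) ** 2))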

  4. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with less bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  5. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  6. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  7. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel has a value that is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each image according to the approach selected, and transmitting each compressed image together with an indication of the approach selected for it.
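
    A rough Python sketch of the fill step described above, assuming a simple Jacobi relaxation of Laplace's equation in place of the patent's faster multi-grid solver; the edge mask and edge values here are hypothetical.

      import numpy as np

      def fill_from_edges(edge_values, edge_mask, n_iter=500):
          # Jacobi relaxation of Laplace's equation: every non-edge pixel converges to the
          # average of its four neighbours, while edge pixels keep their original values.
          filled = edge_values.astype(float).copy()
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                            np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
              filled = np.where(edge_mask, edge_values, avg)
          return filled

      edge_mask = np.zeros((32, 32), dtype=bool)
      edge_mask[8, :] = edge_mask[:, 20] = True          # a hypothetical set of edge pixels
      edge_values = np.where(edge_mask, 200.0, 0.0)      # values carried by the edge file
      filled = fill_from_edges(edge_values, edge_mask)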

  8. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel has a value that is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each image according to the approach selected, and transmitting each compressed image together with an indication of the approach selected for it. 16 figs.

  9. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
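
    A toy Python sketch of the separate-then-code-the-residual idea described above, assuming an RGGB Bayer pattern and a crude quantizer standing in for the JPEG stage; it only illustrates why keeping the residual makes the overall scheme lossless.

      import numpy as np

      def split_bayer_rggb(mosaic):
          # Separate an RGGB Bayer mosaic into quarter-resolution R, G1, G2 and B planes.
          return (mosaic[0::2, 0::2], mosaic[0::2, 1::2],
                  mosaic[1::2, 0::2], mosaic[1::2, 1::2])

      def code_with_residual(plane, step=8):
          # Toy stand-in for the JPEG stage: coarse quantization plays the role of the lossy
          # coder; the exact residual is what would then be entropy-coded losslessly.
          coded = (plane // step) * step
          residual = plane.astype(np.int16) - coded
          return coded, residual

      mosaic = np.random.randint(0, 256, (64, 64)).astype(np.int16)
      r, g1, g2, b = split_bayer_rggb(mosaic)
      coded, residual = code_with_residual(r)
      assert np.array_equal(coded + residual, r)          # residual restores the plane exactly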

  10. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to increasing amounts of data in radiology, methods for image compression appear both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume in comparison with reversible compression algorithms but is, however, accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss due to rounding errors and reduction of high-frequency information occurs, this results in relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this is within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as it would comply with § 28 of the German X-ray ordinance (RöV). PMID:23456043

  11. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions from compressed data at ratios of 1:20.

  12. Compressive optical imaging systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao

    Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an l1-regularized minimization problem. In this work, we review the theoretical background of the CS theory and present two hardware implementation schemes for CSISs, including a single pixel detector based scheme and an array detector based scheme. The first implementation scheme is suitable for acquiring Two-Dimensional (2D) spatial information of the imaging scene. We demonstrate the feasibility of this implementation scheme by developing a single pixel camera, a multispectral imaging system, and an optical sectioning microscope for fluorescence microscopy. The array detector based scheme is suitable for hyperspectral imaging applications, wherein both the spatial and spectral information of the imaging scene are of interest. We demonstrate the feasibility of this scheme by developing a Digital Micromirror Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement processes on the Three-Dimensional (3D) spatial/spectral information of the imaging scene. Tens of spectral images can be reconstructed from the DMD-SSI system simultaneously without any mechanical or temporal scanning processes.
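
    The l1-regularized minimization mentioned above is commonly solved with iterative soft thresholding; the sketch below is a generic ISTA solver on a synthetic sparse signal and random measurement matrix, not the reconstruction code used in the dissertation.

      import numpy as np

      def ista(A, y, lam=0.01, n_iter=300):
          # Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
          for _ in range(n_iter):
              z = x - step * (A.T @ (A @ x - y))
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
          return x

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 5                                # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      A = rng.normal(size=(m, n)) / np.sqrt(m)            # incoherent measurement matrix
      x_hat = ista(A, A @ x_true)                         # recover the sparse vector from y = A x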

  13. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective method for adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in Java.

  14. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: its pixels have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 intra-frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  15. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  16. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  17. Lossy Compression of ACS images

    NASA Astrophysics Data System (ADS)

    Cox, Colin

    2004-01-01

    A method of compressing images stored as floating point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this which offer significant compression ratios are lossy and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.

  18. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
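
    A small Python sketch of DCT quantization with a frequency-dependent matrix, as used in the JPEG-style coding described above; the ramp matrix here is a hypothetical stand-in for the psychophysically derived matrix that the paper computes from viewing distance, display resolution, and brightness.

      import numpy as np
      from scipy.fft import dctn, idctn

      def make_quant_matrix(strength=4.0):
          # Hypothetical frequency-ramp matrix standing in for the psychophysically derived one.
          u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
          return 8.0 + strength * (u + v)

      def code_block(block, Q):
          # Forward DCT, quantize by Q, then dequantize and invert to preview the loss.
          coeffs = dctn(block - 128.0, norm="ortho")
          q = np.rint(coeffs / Q)
          recon = idctn(q * Q, norm="ortho") + 128.0
          return q, recon

      block = np.random.randint(0, 256, (8, 8)).astype(float)
      q, recon = code_block(block, make_quant_matrix())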

  19. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirements in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  20. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single-frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm-specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  1. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still image compression formats are powerful tools for the compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the properties of the coded image. Astronomical data form a special class of images and they have, among general image properties, some specific characteristics that are unique. If a new coder is able to correctly use the knowledge of these special properties, it should achieve superior performance on this specific class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method is presented. The achievable compression ratio of this new coder is compared to the theoretical lossless compression limit and also to recent compression standards from astronomy and general multimedia.

  2. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  3. Hyperspectral imaging using compressed sensing

    NASA Astrophysics Data System (ADS)

    Ramirez I., Gabriel Eduardo; Manian, Vidya B.

    2012-06-01

    Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require much additional computational power, and allows physical implementation at the sensor using spatial light modulators such as the Texas Instruments (TI) digital micro-mirror device (DMD). The DMD can be used as a random measurement matrix: reflecting the image off the DMD is the equivalent of an inner product between the image's individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that the signal's recovery or reconstruction from the measurements does require a higher level of computation. This makes the prospect of working with the compressed version of the signal in implementations such as detection or classification much more efficient. If an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyperspectral image applications are precisely focused on these areas and would greatly benefit from a compression technique like CS, which could help reduce the light sensor down to a single pixel, lowering the costs associated with the cameras while reducing the large amounts of data generated by all the bands. The present paper shows an implementation of CS using a single-pixel hyperspectral sensor and compares the reconstructed images to those obtained through the use of a regular sensor.
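
    A minimal sketch of the single-pixel measurement model described above: each DMD pattern acts as one row of a random measurement matrix, and each detector reading is an inner product between that mask and the scene. The sizes and data below are hypothetical and cover a single spectral band.

      import numpy as np

      rng = np.random.default_rng(1)
      scene = rng.random((32, 32))                        # hypothetical scene (one spectral band)
      n_pixels = scene.size
      n_measurements = 256                                # far fewer readings than pixels

      # Each DMD pattern is a random binary mask; the photodiode reading for a pattern is the
      # inner product between that mask and the scene, i.e. one row of y = Phi x.
      patterns = rng.integers(0, 2, size=(n_measurements, n_pixels))
      measurements = patterns @ scene.ravel()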

  4. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    Data in medical images is very large and therefore for storage and/or transmission of these images, compression is essential. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as a lossy compressor, while for lossless compression Huffman coding is used. Quality of images is evaluated by comparing with standard compression techniques available. PMID:17281110

  5. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  6. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    The compression technique calculates an activity estimator for each segment of an image line. The estimator is used in conjunction with the allowable bits per line, N, to determine the number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to an adaptive variable-length coder, which selects the optimum transmission code. The method increases the capacity of broadcast and cable television transmissions and helps reduce the size of the storage medium for video and digital audio recordings.
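
    A toy illustration of activity-driven bit allocation along one image line, under the assumption that activity is measured as the sum of absolute first differences per segment; the actual JPL estimator and adaptive variable-length coder are not reproduced here.

      import numpy as np

      def allocate_bits(line, n_segments=8, bits_per_line=256):
          # Estimate activity per segment and hand out the per-line bit budget in proportion;
          # low-activity segments are the ones that tolerate truncation best.
          segments = np.array_split(line.astype(float), n_segments)
          activity = np.array([np.abs(np.diff(s)).sum() + 1.0 for s in segments])
          return np.floor(bits_per_line * activity / activity.sum()).astype(int)

      line = np.random.randint(0, 256, 512)
      print(allocate_bits(line))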

  7. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  8. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor employs novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable comparable statistics to detection based on uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating the plume detection on compressed hyperspectral images is presented.

  9. Correlation and image compression for limited-bandwidth CCD.

    SciTech Connect

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  10. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  11. Compressive imaging in scattering media.

    PubMed

    Durán, V; Soldevila, F; Irles, E; Clemente, P; Tajahuerce, E; Andrés, P; Lancis, J

    2015-06-01

    One challenge that has long held the attention of scientists is that of clearly seeing objects hidden by turbid media, such as smoke, fog or biological tissue, which has major implications in fields such as remote sensing or early diagnosis of diseases. Here, we combine structured incoherent illumination and bucket detection for imaging an absorbing object completely embedded in a scattering medium. A sequence of low-intensity microstructured light patterns is launched onto the object, whose image is accurately reconstructed through the light fluctuations measured by a single-pixel detector. Our technique is noninvasive, does not require coherent sources, raster scanning or time-gated detection, and benefits from the compressive sensing strategy. As a proof of concept, we experimentally retrieve the image of a transilluminated target both sandwiched between two holographic diffusers and embedded in a 6 mm-thick sample of chicken breast. PMID:26072804

  12. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  13. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are appreciated by users, but they occupy more storage space on a computer and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks and image compression standards have been established. This paper presents an analysis of the DCT. First, the principle of the DCT is presented; it is a widely used basis for image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab and the quality of the compressed picture is analyzed. The DCT is not the only algorithm for image compression, and further algorithms can be expected to produce compressed images of high quality. Image compression technology will be widely used in networks and communications in the future.

  14. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral image has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  15. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using the JPEG (Joint Photographic Experts Group) standard image compression. To improve compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.
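
    The second approach above, transforming signal-dependent noise into signal-independent noise before estimation, can be illustrated with the classical Anscombe transform for Poisson-like noise; the paper's own transforms for film-grain and speckle noise are not specified here and may differ.

      import numpy as np

      def anscombe(x):
          # Variance-stabilizing transform: for Poisson-like signal-dependent noise the output
          # noise is approximately signal-independent with unit variance.
          return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          # Simple algebraic inverse (unbiased inverses exist but are more involved).
          return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0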

  16. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  17. Spectral image compression for data communications

    NASA Astrophysics Data System (ADS)

    Hauta-Kasari, Markku; Lehtonen, Juha; Parkkinen, Jussi P. S.; Jaeaeskelaeinen, Timo

    2000-12-01

    We report a technique for spectral image compression to be used in the field of data communications. The spectral domain of the images is represented by a low-dimensional component image set, which is used to obtain an efficient compression of the high-dimensional spectral data. The component images are compressed using a technique similar to the chrominance subsampling used in JPEG- and MPEG-type compression. The spectral compression is based on Principal Component Analysis (PCA) combined with the color image transmission coding technique of chromatic channel subsampling of the component images. The component images are subsampled using 4:2:2-, 4:2:0-, and 4:1:1-based compression. In addition, we extended the test to larger block sizes and a larger number of component images than in the original JPEG and MPEG standards. In total, 50 natural spectral images were used as test material in our experiments. Several error measures of the compression are reported. The same compressions are performed using Independent Component Analysis and the results are compared with PCA. These methods give a good compression ratio while keeping the visual quality of the color images good. Quantitative comparisons between the original and reconstructed spectral images are presented.
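
    A compact Python sketch of the PCA step that produces the low-dimensional component image set described above; the chrominance-style 4:2:2/4:2:0/4:1:1 subsampling of the component images is omitted, and the spectral cube is synthetic.

      import numpy as np

      def pca_component_images(cube, n_components=3):
          # Project an (H, W, bands) spectral cube onto its leading principal components,
          # giving a small set of component images that approximate the full spectra.
          h, w, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          mean = X.mean(axis=0)
          cov = np.cov(X - mean, rowvar=False)
          vals, vecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
          basis = vecs[:, ::-1][:, :n_components]          # leading eigenvectors
          comps = ((X - mean) @ basis).reshape(h, w, n_components)
          return comps, basis, mean

      cube = np.random.rand(16, 16, 50)                    # hypothetical 50-band spectral image
      comps, basis, mean = pca_component_images(cube)
      approx = (comps.reshape(-1, 3) @ basis.T + mean).reshape(cube.shape)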

  18. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  19. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.

  20. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  1. An efficient medical image compression scheme.

    PubMed

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of lossless JPEG can be obtained. At the same time, the method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression. PMID:17280962
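
    A minimal sketch of the first (DPCM) stage on a toy array, assuming a simple left-neighbour predictor; the paper's actual predictor and its Huffman stage are not reproduced, but the sketch shows why the decorrelation is exactly invertible and therefore lossless.

      import numpy as np

      def dpcm_residuals(img):
          # Horizontal DPCM: predict each pixel from its left neighbour; the first column is
          # kept as-is so the transform is exactly invertible.
          img = img.astype(np.int32)
          res = img.copy()
          res[:, 1:] = img[:, 1:] - img[:, :-1]
          return res

      def dpcm_reconstruct(res):
          # Cumulative sum along each row exactly undoes the prediction (lossless).
          return np.cumsum(res, axis=1)

      img = np.random.randint(0, 4096, (4, 6))             # e.g. 12-bit medical pixels
      assert np.array_equal(dpcm_reconstruct(dpcm_residuals(img)), img)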

  2. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  3. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  4. Segmentation-based CT image compression

    NASA Astrophysics Data System (ADS)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

    Existing image compression standards like JPEG and JPEG 2000 compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, such as medical image compression. If the spatial characteristics of the image are considered, a more efficient coding scheme can result. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts. They have distinctly different statistics, hence coding them separately will result in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the first-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding has an overhead of one table per segment, but the overhead is minimal. Lossless compression based on segmentation reduced the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If deletion of the diagnostically irrelevant portions is allowed, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.

  5. Compressive Hyperspectral Imaging With Side Information

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Tsai, Tsung-Han; Zhu, Ruoyu; Llull, Patrick; Brady, David; Carin, Lawrence

    2015-09-01

    A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.

  6. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices. PMID:24848945

  7. Lossy compression in nuclear medicine images.

    PubMed Central

    Rebelo, M. S.; Furuie, S. S.; Munhoz, A. C.; Moura, L.; Melo, C. P.

    1993-01-01

    The goal of image compression is to reduce the amount of data needed to represent images. In medical applications, it is not desirable to lose any information, and thus lossless compression methods are often used. However, medical imaging systems have intrinsic noise associated with them. The application of a lossy technique, which acts as a low-pass filter, reduces the amount of data at a higher rate without any noticeable loss in the information contained in the images. We have compressed nuclear medicine images using the discrete cosine transform algorithm. The decompressed images were considered reliable for visual inspection. Furthermore, a parameter was computed from these images and no discernible change was found from the results obtained using the original uncompressed images. PMID:8130593
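
    A minimal sketch of DCT-based lossy compression in this low-pass spirit (not the paper's implementation; the keep fraction is a hypothetical parameter) is:

```python
# Keep only the largest DCT coefficients, acting roughly as a low-pass filter.
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image, keep_fraction=0.1):
    coeffs = dctn(image.astype(np.float64), norm="ortho")
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    coeffs[np.abs(coeffs) < threshold] = 0.0      # discard small coefficients
    return idctn(coeffs, norm="ortho")            # reconstructed (lossy) image
```

    In a real codec the surviving coefficient values and positions would be quantized and entropy coded rather than simply zeroed in place.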

  8. Reversible intraframe compression of medical images.

    PubMed

    Roos, P; Viergever, M A; van Dijke, M A; Peters, J H

    1988-01-01

    The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. PMID:18230486

  9. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  10. Wavelet compression efficiency investigation for medical images

    NASA Astrophysics Data System (ADS)

    Moryc, Marcin; Dziech, Wiera

    2006-03-01

    Medical images are acquired or stored digitally. These images can be very large in size and number, and compression can increase the speed of transmission and reduce the cost of storage. In this paper, the approximation of medical images using a transform method based on wavelet functions is investigated. The tested clinical images are taken from multiple anatomical regions and modalities (Computed Tomography CT, Magnetic Resonance MR, Ultrasound, Mammography and X-Ray images). To compress the medical images, a threshold criterion has been applied. The mean square error (MSE) is used as a measure of approximation quality. Plots of the MSE versus compression percentage and approximated images are included for comparison of approximation efficiency.

  11. Robust retrieval from compressed medical image archives

    NASA Astrophysics Data System (ADS)

    Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin

    2005-04-01

    This paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding content-based biomedical image retrieval. The proposed method for the treatment of compressed medical archives follows the JPEG compression standard and exploits an algorithm based on spatial analysis of the amplitude and location of the image's cosine spectrum coefficients. Experiments on a modality-specific archive of osteoarticular images show the robustness of the method based on the measured spectral spatial statistics. The features, which were based on the cosine spectrum coefficients' values, could satisfy queries across different modalities (MRI, US, etc.) that emphasize texture and edge properties. In particular, it has been shown that there is a wealth of information in the AC coefficients of the DCT transform, which can be utilized to support fast content-based image retrieval. The computational cost of the proposed signature generation algorithm is low. The influence of conventional and state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has also been studied. We found no significant differences in retrieval efficiencies for non-compressed and JPEG2000-compressed images, even at the lowest bit rate tested.
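
    One simple way to turn block-DCT AC coefficients into a retrieval feature, in the spirit described above (a hypothetical feature definition, not the paper's exact signature), is to measure per-block AC energy:

```python
# Per-block AC energy from an 8x8 block DCT, a crude texture/edge signature.
import numpy as np
from scipy.fft import dctn

def block_dct_ac_signature(image, block=8):
    h, w = (d - d % block for d in image.shape)       # crop to whole blocks
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(image[i:i + block, j:j + block].astype(np.float64), norm="ortho")
            c[0, 0] = 0.0                              # drop the DC term
            feats.append(np.abs(c).sum())              # AC amplitude sum for this block
    return np.asarray(feats)
```

    In a JPEG archive these coefficients are already present in the compressed bit stream, which is what makes such signatures cheap to compute.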

  12. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  13. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data. PMID:16948299

  14. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  15. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods-JPEG, JPEG2000, and HEVC. PMID:27214878

  16. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.
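
    A heavily simplified Plug-and-Play ADMM loop in this spirit is sketched below; it is not the authors' implementation. The linearized compression-decompression operator is approximated here by the identity, a Gaussian filter stands in for the state-of-the-art denoiser, and rho, sigma, and the iteration count are hypothetical parameters.

```python
# Plug-and-Play ADMM sketch: alternate a data-fit step with a plug-in denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_artifact_reduction(y, rho=1.0, sigma=1.0, iters=30):
    # y: decompressed image (float array) containing compression artifacts
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)      # data-fit step (identity forward model)
        z = gaussian_filter(x + u, sigma=sigma)    # denoising step ("plugged-in" prior)
        u = u + x - z                              # dual variable update
    return z
```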

  17. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
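
    As a rough illustration (hypothetical helper names; the exact definition of double delta coding in the report may differ), first and second differences of a scan line can be computed as:

```python
# First ("delta") and second ("double delta") differences of a scan line.
import numpy as np

def delta(line):
    line = line.astype(np.int32)
    return np.diff(line, prepend=line[:1])    # first sample kept as a zero residual

def double_delta(line):
    return delta(delta(line))                 # difference of the differences
```

    Because adjacent picture elements are highly correlated, both residual sequences are concentrated around zero and entropy code far more compactly than the raw samples.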

  18. Information preserving image compression for archiving NMR images.

    PubMed

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y

    1991-01-01

    This paper presents a result on information preserving compression of NMR images for the archiving purpose. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, the Lynch-Davisson coding with a block size of 64 as applied to prediction error sequences in the Gray code bit planes of each image gave an average compression ratio of 2.3:1 for 14 testing images. The predictive coding with a third order linear predictor and the Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is one step further toward the improvement, albeit small, of the information preserving image compression for medical applications. PMID:1913579

  19. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress hyperspectral images. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and the embedded zero-tree wavelet (EZW) is proposed in this paper. We adopt a high-performance Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. Through modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm running on the DSP is much faster than the same algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  20. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection would take 3 hours. Hence compression is needed, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that is better grounded in theory. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.

  1. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  2. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  3. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.

  4. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.

  5. Recommended frequency of ABPI review for patients wearing compression hosiery.

    PubMed

    Furlong, Winnie

    2015-11-11

    This paper is a sequel to the article 'How often should patients in compression have ABPI recorded?' (Furlong, 2013). Monitoring ankle brachial pressure index (ABPI) is essential, especially in those patients wearing compression hosiery, as it can change over time (Simon et al, 1994; Pankhurst, 2004), particularly in the presence of peripheral arterial disease (PAD). Leg ulceration caused by venous disease requires graduated compression (Wounds UK, 2002; Anderson, 2008). Once healed, compression hosiery is required to help prevent ulcer recurrence (Vandongen and Stacey, 2000). The Royal College of Nursing (RCN, 2006) guidelines suggest 3-monthly reviews, including ABPI, with no further guidance. Wounds UK (2002) suggests that patients who have ABPI<0.9, diabetes, reduced mobility or symptoms of claudication should have at least 3/12 Doppler, and that those in compression hosiery without complications who are able to report should have vascular assessment yearly. PMID:26559232

  6. Compressive line sensing underwater imaging system

    NASA Astrophysics Data System (ADS)

    Ouyang, B.; Dalgleish, F. R.; Vuorenkoski, A. K.; Caimi, F. M.; Britton, W.

    2013-05-01

    Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. In recent years, the FAU/HBOI OVOL has conducted extensive research on CS-based active electro-optical imaging systems in scattering media such as the underwater environment. The unique features of such a system in comparison with traditional underwater electro-optical imaging systems are discussed. Building upon previous work on a frame-based CS underwater laser imager concept, which is more advantageous for hover-capable platforms such as the Hovering Autonomous Underwater Vehicle (HAUV), this paper proposes a compressive line sensing underwater imaging (CLSUI) system that is more compatible with conventional underwater platforms, where images are formed in whiskbroom fashion. Simulation results are discussed.

  7. Compressive sensing image sensors-hardware implementation.

    PubMed

    Dadkhah, Mohammadreza; Deen, M Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  8. Optical Data Compression in Time Stretch Imaging

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Jalali, Bahram

    2015-01-01

    Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems. PMID:25906244

  9. Directly Estimating Endmembers for Compressive Hyperspectral Images

    PubMed Central

    Xu, Hongwei; Fu, Ning; Qiao, Liyan; Peng, Xiyuan

    2015-01-01

    The large volume of hyperspectral images (HSI) generated creates huge challenges for transmission and storage, making data compression more and more important. Compressive Sensing (CS) is an effective data compression technology that shows that when a signal is sparse in some basis, only a small number of measurements are needed for exact signal recovery. Distributed CS (DCS) takes advantage of both intra- and inter- signal correlations to reduce the number of measurements needed for multichannel-signal recovery. HSI can be observed by the DCS framework to reduce the volume of data significantly. The traditional method for estimating endmembers (spectral information) first recovers the images from the compressive HSI and then estimates endmembers via the recovered images. The recovery step takes considerable time and introduces errors into the estimation step. In this paper, we propose a novel method, by designing a type of coherent measurement matrix, to estimate endmembers directly from the compressively observed HSI data via convex geometry (CG) approaches without recovering the images. Numerical simulations show that the proposed method outperforms the traditional method with better estimation speed and better (or comparable) accuracy in both noisy and noiseless cases. PMID:25905699

  10. Spatial versus spectral compression ratio in compressive sensing of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    August, Yitzhak; Vachman, Chaim; Stern, Adrian

    2013-05-01

    Compressive hyperspectral imaging is based on the fact that hyperspectral data is highly redundant. However, there is no symmetry between the compressibility of the spatial and spectral domains, and that should be taken into account for optimal compressive hyperspectral imaging system design. Here we present a study of the influence of the ratio between the compression in the spatial and spectral domains on the performance of a 3D separable compressive hyperspectral imaging method we recently developed.

  11. Compressive Deconvolution in Medical Ultrasound Imaging.

    PubMed

    Chen, Zhouye; Basarab, Adrian; Kouame, Denis

    2016-03-01

    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data. PMID:26513780

  12. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  13. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  14. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
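
    A minimal sketch of the colormap-sorting idea described above (the sorting criterion and names are hypothetical; the paper studies sorting strategies in detail):

```python
# Sort a palette by luminance and remap the index image so that numerically
# close indices point to perceptually close colours, restoring the correlation
# that predictive coders such as DPCM rely on.
import numpy as np

def sort_colormap(index_image, palette):
    # palette: (N, 3) RGB entries; index_image: 2-D array of palette indices
    luminance = palette @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luminance)               # new palette order
    remap = np.empty_like(order)
    remap[order] = np.arange(len(order))        # old index -> new index
    return remap[index_image], palette[order]
```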

  15. Quad Tree Structures for Image Compression Applications.

    ERIC Educational Resources Information Center

    Markas, Tassos; Reif, John

    1992-01-01

    Presents a class of distortion controlled vector quantizers that are capable of compressing images so they comply with certain distortion requirements. Highlights include tree-structured vector quantizers; multiresolution vector quantization; error coding vector quantizer; error coding multiresolution algorithm; and Huffman coding of the quad-tree…

  16. Entangled-photon compressive ghost imaging

    SciTech Connect

    Zerom, Petros; Chan, Kam Wai Clifford; Howell, John C.; Boyd, Robert W.

    2011-12-15

    We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.

  17. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2014-05-01

    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of target centers obtained by the intensity-weighted centroid method. Six images of a white sheet with 30 by 20 black filled circles were utilized in the experiment. The images were acquired with a Canon EOS 20D digital camera. The image data were compressed using two compression parameter sets (a downsampling ratio, a quantization table, and a Huffman code table) utilized in the EOS 20D. The experimental results clearly indicate that lossy JPEG compression of an image with chromatic aberrations can produce a significant effect on the measurement accuracy of the target center obtained by the intensity-weighted centroid method. The maximum displacements of the red, green and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since displacements caused by lossy JPEG compression cannot readily be corrected, the author recommends that lossy JPEG compression should not be applied before recording an image in a digital camera when highly precise image measurement is to be performed on color images acquired with a non-metric digital camera.
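
    For reference, the intensity-weighted centroid used as the target-center measurement above can be sketched as follows (not the paper's code; inverting the window is an assumption made because the targets are dark circles on a white sheet):

```python
# Intensity-weighted centroid of a small window containing one target.
import numpy as np

def intensity_weighted_centroid(window):
    w = window.astype(np.float64)
    w = w.max() - w                       # dark target on bright background: invert weights
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total   # (x, y) in window coordinates
```

    Sub-pixel shifts of this centroid between the red, green, and blue components are what the compression experiment measures.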

  18. Multi-spectral compressive snapshot imaging using RGB image sensors.

    PubMed

    Rueda, Hoover; Lau, Daniel; Arce, Gonzalo R

    2015-05-01

    Compressive sensing is a powerful sensing and reconstruction framework for recovering high dimensional signals with only a handful of observations, and for spectral imaging it offers a novel method of multispectral imaging. Specifically, the coded aperture snapshot spectral imager (CASSI) system has been demonstrated to produce multi-spectral data cubes from a single snapshot taken by a monochrome image sensor. In this paper, we expand the theoretical framework of CASSI to include the spectral sensitivity of the image sensor pixels to account for color, and then investigate the impact on image quality using either a traditional color image sensor that spatially multiplexes red, green, and blue light filters or a novel Foveon image sensor that stacks red, green, and blue pixels on top of one another. PMID:25969307

  19. Adaptive prediction trees for image compression.

    PubMed

    Robinson, John A

    2006-08-01

    This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671

  20. Realization of hybrid compressive imaging strategies.

    PubMed

    Li, Yun; Sankaranarayanan, Aswin C; Xu, Lina; Baraniuk, Richard; Kelly, Kevin F

    2014-08-01

    The tendency of natural scenes to cluster around low frequencies is not only useful in image compression, it also can prove advantageous in novel infrared and hyperspectral image acquisition. In this paper, we exploit this signal model with two approaches to enhance the quality of compressive imaging as implemented in a single-pixel compressive camera and compare these results against purely random acquisition. We combine projection patterns that can efficiently extract the model-based information with subsequent random projections to form the hybrid pattern sets. With the first approach, we generate low-frequency patterns via a direct transform. As an alternative, we also used principal component analysis of an image library to identify the low-frequency components. We present the first (to the best of our knowledge) experimental validation of this hybrid signal model on real data. For both methods, we acquire comparable quality of reconstructions while acquiring only half the number of measurements needed by traditional random sequences. The optimal combination of hybrid patterns and the effects of noise on image reconstruction are also discussed. PMID:25121526

  1. A recommender system for medical imaging diagnostic.

    PubMed

    Monteiro, Eriksson; Valente, Frederico; Costa, Carlos; Oliveira, José Luís

    2015-01-01

    The large volume of data captured daily in healthcare institutions is opening new and great perspectives about the best ways to use it towards improving clinical practice. In this paper we present a context-based recommender system to support medical imaging diagnostic. The system relies on data mining and context-based retrieval techniques to automatically lookup for relevant information that may help physicians in the diagnostic decision. PMID:25991188

  2. Chest tuberculosis: Radiological review and imaging recommendations

    PubMed Central

    Bhalla, Ashu Seith; Goyal, Ankur; Guleria, Randeep; Gupta, Arun Kumar

    2015-01-01

    Chest tuberculosis (CTB) is a widespread problem, especially in our country where it is one of the leading causes of mortality. The article reviews the imaging findings in CTB on various modalities. We also attempt to categorize the findings into those definitive for active TB, indeterminate for disease activity, and those indicating healed TB. Though various radiological modalities are widely used in evaluation of such patients, no imaging guidelines exist for the use of these modalities in diagnosis and follow-up. Consequently, imaging is not optimally utilized and patients are often unnecessarily subjected to repeated CT examinations, which is undesirable. Based on the available literature and our experience, we propose certain recommendations delineating the role of imaging in the diagnosis and follow-up of such patients. The authors recognize that this is an evolving field and there may be future revisions depending on emergence of new evidence. PMID:26288514

  3. Multi-wavelength compressive computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Welsh, Stephen S.; Edgar, Matthew P.; Jonathan, Phillip; Sun, Baoqing; Padgett, Miles J.

    2013-03-01

    The field of ghost imaging encompasses systems which can retrieve the spatial information of an object through correlated measurements of a projected light field, having spatial resolution, and the associated reflected or transmitted light intensity measured by a photodetector. By employing a digital light projector in a computational ghost imaging system with multiple spectrally filtered photodetectors we obtain high-quality multi-wavelength reconstructions of real macroscopic objects. We compare different reconstruction algorithms and reveal the use of compressive sensing techniques for achieving sub-Nyquist performance. Furthermore, we demonstrate the use of this technology in non-visible and fluorescence imaging applications.
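
    For orientation, the simplest (non-compressive) computational ghost-image estimate correlates the known projected patterns with the photodetector intensities; a minimal sketch is below (hypothetical array shapes). The compressive sensing reconstructions compared in the paper replace this correlation with a sparsity-regularized inversion.

```python
# Correlation-based computational ghost imaging estimate.
import numpy as np

def ghost_image(patterns, intensities):
    # patterns: (M, H, W) projected light fields; intensities: (M,) detector readings
    di = intensities - intensities.mean()
    dp = patterns - patterns.mean(axis=0)
    return np.tensordot(di, dp, axes=1) / len(di)   # (H, W) image estimate
```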

  4. Compressive Hyperspectral Imaging via Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.

    2016-03-01

    We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not well suited to AMP, making it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to address the divergence issue of AMP. Our approach is applied in nature, and the numerical experiments show that AMP-3D-Wiener outperforms existing widely-used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.

  5. Microseismic source imaging in a compressed domain

    NASA Astrophysics Data System (ADS)

    Vera Rodriguez, Ismael; Sacchi, Mauricio D.

    2014-08-01

    Microseismic monitoring is an essential tool for the characterization of hydraulic fractures. Fast estimation of the parameters that define a microseismic event is relevant to understanding and controlling fracture development. The amount of data contained in microseismic records, however, poses a challenge for fast, continuous detection and evaluation of the microseismic source parameters. Work inspired by the emerging field of Compressive Sensing has shown that it is possible to evaluate source parameters in a compressed domain, thereby reducing processing time. This technique performs well in scenarios where the amplitudes of the signal are above the noise level, as is often the case in microseismic monitoring using downhole tools. This paper extends the idea of compressed-domain processing to scenarios of microseismic monitoring using surface arrays, where the signal amplitudes are commonly at the same level as, or below, the noise amplitudes. To achieve this, we resort to the use of an imaging operator, which has previously been found to produce better results in the detection and location of microseismic events from surface arrays. The operator in our method is formed by full-waveform elastodynamic Green's functions that are band-limited by a source time function and represented in the frequency domain. Where full-waveform Green's functions are not available, ray tracing can also be used to compute the required Green's functions. Additionally, we introduce the concept of the compressed inverse, which derives directly from the compression of the migration operator using a random matrix. The described methodology reduces processing time at the cost of introducing distortions into the results. However, the amount of distortion can be managed by controlling the level of compression applied to the operator. Numerical experiments using synthetic and real data demonstrate the reductions in processing time that can be achieved and exemplify the process of selecting the

  6. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  7. Complementary compressive imaging for the telescopic system.

    PubMed

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie

    2014-01-01

    Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device, while the light reflected in the other direction is not collected. In fact, the sampling in these two reflection orientations is correlated, in view of which we propose, for the first time to our knowledge, a sampling concept of complementary compressive imaging. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. This technique offers a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and it needs fewer measurements in total than any single-arm sampling; it can thus be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in the remote-sensing area. PMID:25060569

  8. Patch-primitive driven compressive ghost imaging.

    PubMed

    Hu, Xuemei; Suo, Jinli; Yue, Tao; Bian, Liheng; Dai, Qionghai

    2015-05-01

    Ghost imaging has rapidly developed for about two decades and attracted wide attention from different research fields. However, the practical applications of ghost imaging are still largely limited, by its low reconstruction quality and large required measurements. Inspired by the fact that the natural image patches usually exhibit simple structures, and these structures share common primitives, we propose a patch-primitive driven reconstruction approach to raise the quality of ghost imaging. Specifically, we resort to a statistical learning strategy by representing each image patch with sparse coefficients upon an over-complete dictionary. The dictionary is composed of various primitives learned from a large number of image patches from a natural image database. By introducing a linear mapping between non-overlapping image patches and the whole image, we incorporate the above local prior into the convex optimization framework of compressive ghost imaging. Experiments demonstrate that our method could obtain better reconstruction from the same amount of measurements, and thus reduce the number of requisite measurements for achieving satisfying imaging quality. PMID:25969205

  9. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed of a 2D affine invariant matching exploiting a parameter space. Named as affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters need to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  10. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
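
    The quantization of floating-point pixels described above can be sketched as follows (a minimal illustration, not the fpack implementation; the parameter q controlling how finely the noise is sampled is hypothetical):

```python
# Quantize floating-point pixels as scaled integers so that excess noise bits are discarded.
import numpy as np

def quantize_float_image(pixels, noise_sigma, q=4.0):
    scale = noise_sigma / q                      # larger q keeps more of the noise
    quantized = np.round(pixels / scale).astype(np.int32)
    return quantized, scale                      # store `scale`; pixels ~ quantized * scale on read
```

    The integer array can then be handed to any lossless coder (Rice, GZIP, and so on), which is where the factor-of-4 or better compression comes from.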

  11. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239) and "ICER-3D Hyperspectral Image Compression Software" (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  12. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high-frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of VOI transformation and found significant masking of artifacts when the window is widened. PMID:25804842

  13. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
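
    The same tiled image compression convention is exposed by other FITS libraries as well; as a rough illustration (assuming the astropy package is installed, and that writing a compressed extension directly is acceptable in place of running the fpack command-line tool on an existing file), a Rice-compressed tiled FITS image can be written and read back losslessly:

```python
# Write a FITS image using the tiled image compression convention (Rice coding),
# then read it back; Rice is lossless for integer pixel data.
import numpy as np
from astropy.io import fits

data = np.random.poisson(100, size=(512, 512)).astype(np.int32)
hdu = fits.CompImageHDU(data=data, compression_type="RICE_1")  # default tiling is row by row
hdu.writeto("tiled_rice.fits", overwrite=True)

restored = fits.getdata("tiled_rice.fits")
assert np.array_equal(restored, data)
```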

  14. On the use of standards for microarray lossless image compression.

    PubMed

    Pinho, Armando J; Paiva, António R C; Neves, António J R

    2006-03-01

    The interest in methods that are able to efficiently compress microarray images is relatively new. This is not surprising, since the appearance and fast growth of the technology responsible for producing these images is also quite recent. In this paper, we present a set of compression results obtained with 49 publicly available images, using three image coding standards: lossless JPEG2000, JBIG, and JPEG-LS. We concluded that the compression technology behind JBIG seems to be the one that offers the best combination of compression efficiency and flexibility for microarray image compression. PMID:16532784

  15. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  16. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
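
    A minimal numpy sketch of the gamma correction and two-threshold, two-color steps described above (the gamma value, the thresholds, the synthetic scan, and the placeholder filled-edge array are illustrative assumptions, not values from the patent):

      import numpy as np

      def to_two_color(gray, threshold):
          # Pixels darker than the threshold become black (0); the rest become white (255).
          return np.where(gray < threshold, 0, 255).astype(np.uint8)

      # Synthetic 8-bit grayscale scan standing in for a filled-in form document.
      scan = np.random.default_rng(1).integers(0, 256, size=(512, 512), dtype=np.uint8)

      # Contrast enhancement by gamma correction on the image array.
      gamma = 0.8
      enhanced = (255.0 * (scan / 255.0) ** gamma).astype(np.uint8)

      # Placeholder for the filled-edge array derived from edge detection.
      filled_edges = enhanced.copy()

      # First and second two-color images, then their combination.
      first = to_two_color(enhanced, threshold=96)
      second = to_two_color(filled_edges, threshold=128)
      combined = np.minimum(first, second)   # black wherever either image is black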

  17. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.

  18. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  19. Centralized and interactive compression of multiview images

    NASA Astrophysics Data System (ADS)

    Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan

    2011-09-01

    In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers, each related to a constant depth in the scene. The first algorithm is a centralized scheme in which each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to deal efficiently with occlusions and disparity variations at different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an order that is not known a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles, which reduce the inter-view redundancy and facilitate random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.

  20. Remote sensing image compression assessment based on multilevel distortions

    NASA Astrophysics Data System (ADS)

    Jiang, Hongxu; Yang, Kai; Liu, Tingshan; Zhang, Yongfei

    2014-01-01

    The measurement of visual quality is of fundamental importance to remote sensing image compression, especially for image quality assessment and compression algorithm optimization. We exploit the distortion features of optical remote sensing image compression and propose a full-reference image quality metric based on multilevel distortions (MLD), which assesses image quality by calculating distortions at three levels (pixel level, contexture level, and content level) between original images and compressed images. Based on this, a multiscale MLD (MMLD) algorithm is designed, and it outperforms the other current methods in our testing. In order to validate the performance of our algorithm, a special remote sensing image compression distortion (RICD) database is constructed, involving 250 remote sensing images compressed with different algorithms and various distortions. Experimental results on the RICD and Laboratory for Image and Video Engineering databases show that the proposed MMLD algorithm has better consistency with subjective perception values than current state-of-the-art methods in remote sensing image compression assessment, and the objective assessment results reflect the distortion features and visual quality of the compressed images well. It is suitable as an evaluation criterion for optical remote sensing image compression.

  1. Reconfigurable machine for applications in image and video compression

    NASA Astrophysics Data System (ADS)

    Hartenstein, Reiner W.; Becker, Juergen; Kress, Rainier; Reinig, Helmut; Schmidt, Karin

    1995-02-01

    This paper presents a reconfigurable machine for applications in image or video compression. The machine can be used stand alone or as a universal accelerator co-processor for desktop computers for image processing. It is well suited for image compression algorithms such as JPEG for still pictures or for encoding MPEG movies. It provides a much cheaper and more flexible hardware platform than special image compression ASICs and it can substantially accelerate desktop computing.

  2. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
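
    The two code families named above can be illustrated with a short Python sketch. For brevity it uses the power-of-two (Rice) special case of the Golomb code and an order-k exponential-Golomb code, and it appends an explicit sign bit for nonzero values; the adaptive parameter selection described in the article is omitted, so this is a schematic of the run-length coding structure rather than the flight implementation:

      def rice_encode(n, k):
          # Golomb code with power-of-two parameter m = 2**k (a Rice code), n >= 0.
          q, r = n >> k, n & ((1 << k) - 1)
          unary = '1' * q + '0'                              # quotient in unary
          binary = format(r, 'b').zfill(k) if k > 0 else ''  # remainder in k bits
          return unary + binary

      def exp_golomb_encode(n, k=0):
          # Order-k exponential-Golomb code for n >= 0.
          value = (n >> k) + 1
          prefix = '0' * (value.bit_length() - 1)            # leading zeros
          suffix = format(n & ((1 << k) - 1), 'b').zfill(k) if k > 0 else ''
          return prefix + format(value, 'b') + suffix

      def encode_runs(data, run_k=0, value_k=1):
          # Parse the sequence into runs of zeros, each terminated by a nonzero value.
          bits, run = [], 0
          for v in data:
              if v == 0:
                  run += 1
              else:
                  bits.append(exp_golomb_encode(run, run_k))     # run length
                  bits.append(rice_encode(abs(v) - 1, value_k))  # nonzero magnitude
                  bits.append('1' if v < 0 else '0')             # sign bit (simplification)
                  run = 0
          return ''.join(bits)

      print(encode_runs([0, 0, 3, 0, -1, 5]))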

  3. Research on compressive fusion for remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A compressive fusion method for remote sensing images is presented based on block compressed sensing (BCS) and the non-subsampled contourlet transform (NSCT). Since BCS requires little memory and enables fast computation, the images, which contain large amounts of data, can first be compressively sampled block by block with a structured random matrix. The compressive measurements are then decomposed with the NSCT and their coefficients are fused by a linear weighting rule. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, with consideration of blocking artifacts. A field test on remote sensing image fusion shows the validity of the proposed method.
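
    A minimal numpy sketch of the block compressed sampling step (the block size, the number of measurements per block, and the Gaussian matrix standing in for the structured random matrix are illustrative assumptions; the NSCT decomposition, the fusion rule, and the gradient projection reconstruction are not shown):

      import numpy as np

      def block_cs_measure(image, phi, block=16):
          # Apply the same measurement matrix phi to every block of the image.
          h, w = image.shape
          measurements = []
          for r in range(0, h, block):
              for c in range(0, w, block):
                  x = image[r:r + block, c:c + block].ravel()
                  measurements.append(phi @ x)
          return np.array(measurements)

      rng = np.random.default_rng(8)
      image = rng.random((128, 128))
      m = 64                                    # 64 measurements per 256-sample block
      phi = rng.normal(size=(m, 16 * 16)) / np.sqrt(m)
      y = block_cs_measure(image, phi)
      print(y.shape)                            # (number of blocks, m)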

  4. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change. Analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery. PMID:26429361
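
    For reference, a compact Python sketch of LZW compression as it might be applied to a watermark payload (the placeholder payload and the single-byte dictionary initialization are illustrative assumptions):

      def lzw_compress(data):
          # Return a list of LZW code words for the input byte string.
          dictionary = {bytes([i]): i for i in range(256)}   # all single-byte strings
          next_code = 256
          w = b''
          codes = []
          for byte in data:
              wc = w + bytes([byte])
              if wc in dictionary:
                  w = wc
              else:
                  codes.append(dictionary[w])
                  dictionary[wc] = next_code
                  next_code += 1
                  w = bytes([byte])
          if w:
              codes.append(dictionary[w])
          return codes

      watermark = b'ROI-signature-0101' * 8      # placeholder watermark payload
      codes = lzw_compress(watermark)
      print(len(watermark), 'bytes ->', len(codes), 'codes')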

  5. Compressed sensing in imaging mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-12-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data are typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach that performs peak-picking of spectra and denoising of m/z-images simultaneously, whereas state-of-the-art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with numerical reconstruction results for an IMS dataset of a rat brain coronal section.

  6. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.

  7. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time-consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high-quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD

  8. Design of real-time remote sensing image compression system

    NASA Astrophysics Data System (ADS)

    Wu, Wenbo; Lei, Ning; Wang, Kun; Wang, Qingyuan; Li, Tao

    2013-08-01

    This paper focuses on the issue of CCD remote sensing image compression. Compared with other images, CCD remote sensing image data are characterized by high data rates and a high number of quantization bits. A high-speed CCD image compression system based on the ADV212 chip is proposed. The system is mainly composed of three devices: an FPGA, SRAM, and the ADV212. In this system, the SRAM plays the role of a data buffer, the ADV212 performs the data compression, and the FPGA is used for image storage and interface bus control. Finally, a system platform is designed to test the compression performance. Test results show that the proposed scheme can satisfy the real-time processing requirement and that there is no obvious difference between the source image and the compressed image in terms of image quality.

  9. Coherent radar imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Volz, Ryan; Mathews, John D.

    2015-12-01

    High-resolution radar images in the horizontal spatial domain generally require a large number of different baselines that usually come at considerable cost. In this paper, aspects of compressed sensing (CS) are introduced to coherent radar imaging. We propose a single CS-based formalism that enables full three-dimensional (3-D) imaging in the range, Doppler frequency, and horizontal spatial (represented by the direction cosines) domains. This new method can not only reduce system costs and decrease the needed number of baselines by enabling spatially sparse sampling, but also achieve high resolution in the range, Doppler frequency, and horizontal space dimensions. Using an assumption of point targets, a 3-D radar signal model for imaging has been derived. By comparing numerical simulations with the fast Fourier transform and maximum entropy methods at different signal-to-noise ratios, we demonstrate that the CS method can provide better performance in resolution and detectability given comparatively few available measurements relative to the number required by the Nyquist-Shannon sampling criterion. These techniques are being applied to radar meteor observations.

  10. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
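
    A rough numpy sketch of the per-coefficient bookkeeping implied by the abstract: for one DCT coefficient position and each candidate quantization value, estimate a distortion figure and an entropy-based rate figure (the synthetic Laplacian statistics are an illustrative assumption, and the dynamic-programming optimization over the whole quantization table is omitted):

      import numpy as np

      def rate_distortion_for_q(coeffs, q):
          # Distortion (MSE) and rate (empirical entropy, bits/sample) for step size q.
          quantized = np.round(coeffs / q)
          distortion = np.mean((coeffs - q * quantized) ** 2)
          _, counts = np.unique(quantized, return_counts=True)
          p = counts / counts.sum()
          rate = -(p * np.log2(p)).sum()
          return distortion, rate

      # Gathered statistics for one DCT coefficient position (synthetic stand-in).
      coeffs = np.random.default_rng(4).laplace(scale=12.0, size=10000)

      for q in (1, 2, 4, 8, 16, 32):
          d, r = rate_distortion_for_q(coeffs, q)
          print(f'q={q:3d}  distortion={d:8.2f}  rate={r:5.2f} bits')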

  11. Oncologic image compression using both wavelet and masking techniques.

    PubMed

    Yin, F F; Gao, Q

    1997-12-01

    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using the wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase in compression ratio by a factor greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown. PMID:9434988

  12. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
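
    A small numpy sketch of the DPCM step described above, with zlib's DEFLATE used purely as an available stand-in for the LZW coder; the row predictor, the data types, and the synthetic image are illustrative assumptions:

      import zlib
      import numpy as np

      rng = np.random.default_rng(2)
      # Smooth synthetic "radiograph" so that neighboring pixels are correlated.
      image = (np.cumsum(rng.integers(-2, 3, size=(256, 256)), axis=1) + 128).astype(np.int16)

      # DPCM: replace each pixel by its difference from the previous pixel in the row.
      diff = image.copy()
      diff[:, 1:] = image[:, 1:] - image[:, :-1]

      raw_size = len(zlib.compress(image.tobytes()))
      dpcm_size = len(zlib.compress(diff.tobytes()))
      print('compressed bytes, raw vs DPCM:', raw_size, dpcm_size)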

  13. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for the extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  14. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  15. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2011-10-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV), and L1-norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1-norm and TV-norm regularization subproblems. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542
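
    A schematic numpy evaluation of the kind of objective described above, namely a least-squares data-fitting term plus TV and L1 regularization (the undersampled Fourier operator, the identity used in place of a sparsifying transform, and the weights are illustrative assumptions; the solver in the paper splits the problem into TV and L1 subproblems rather than evaluating this expression directly):

      import numpy as np

      def total_variation(x):
          # Anisotropic TV: sum of absolute horizontal and vertical differences.
          return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

      def objective(x, mask, b, alpha, beta):
          # Least-squares data fit + alpha * TV + beta * L1 (identity in place of a wavelet).
          residual = mask * np.fft.fft2(x) - b          # undersampled Fourier data fit
          return (0.5 * np.linalg.norm(residual) ** 2
                  + alpha * total_variation(x)
                  + beta * np.abs(x).sum())

      # Toy example: a 64 x 64 image, a random undersampling mask, simulated k-space data.
      rng = np.random.default_rng(3)
      x = rng.random((64, 64))
      mask = rng.random((64, 64)) < 0.3
      b = mask * np.fft.fft2(x)
      print(objective(x, mask, b, alpha=1e-3, beta=1e-3))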

  16. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2010-01-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV), and L1-norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1-norm and TV-norm regularization subproblems. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224

  17. Lossless compression of medical images using Hilbert scan

    NASA Astrophysics Data System (ADS)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels in a medical image are decorrelated with differential pulse-code modulation (DPCM); the error image is then rearranged using a Hilbert scan; finally, we implement five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by a Hilbert scan and then compression by the arithmetic coding scheme gives the best compression result, and they also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.

  18. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

    In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply a perceptually lossless criterion to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to unnoticeable levels. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  19. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.

  20. Texture-based medical image retrieval in compressed domain using compressive sensing.

    PubMed

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A

    2014-01-01

    Content-based image retrieval has gained considerable attention as a useful tool in many applications; texture-based retrieval is one such approach. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields that has been affected most, since image databases are huge and retrieving the image of interest can be a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples of unknowns and does not require a close match between the sampling pattern and the characteristic image structure, while increasing acquisition speed and enhancing image quality. PMID:24589833

  1. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT), an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  2. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
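
    A minimal Pillow-based sketch of the pipeline in the patent abstract, assuming the Pillow package is available; the decimation factor, JPEG quality, and unsharp-mask settings are arbitrary illustrative choices rather than values from the patent:

      import io
      import numpy as np
      from PIL import Image, ImageFilter

      rng = np.random.default_rng(5)
      original = Image.fromarray(rng.integers(0, 256, size=(512, 512), dtype=np.uint8), mode='L')
      w, h = original.size

      # 1. Decimate in two dimensions before compression.
      reduced = original.resize((w // 2, h // 2), Image.BILINEAR)

      # 2. Compress the reduced image with a predefined algorithm (JPEG here).
      buffer = io.BytesIO()
      reduced.save(buffer, format='JPEG', quality=60)

      # 3. Decompress, 4. interpolate back to the original array size, 5. sharpen edges.
      decompressed = Image.open(io.BytesIO(buffer.getvalue()))
      restored = decompressed.resize((w, h), Image.BILINEAR)
      sharpened = restored.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))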

  3. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  4. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis performed by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the development of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.

  5. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  6. Compressing industrial computed tomography images by means of contour coding

    NASA Astrophysics Data System (ADS)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. As resolution and precision requirements have increased, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which is detrimental to compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are simplified into only one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can obtain a good compression ratio while keeping satisfactory quality in the compressed images.
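
    For reference, a small Python sketch of Freeman chain coding for a closed contour given as an ordered list of pixel coordinates (the 8-direction numbering and the toy square contour are illustrative assumptions; the paper's merged extraction-and-coding step and the final Huffman stage are not reproduced):

      # Map a step between two 8-connected pixels to a Freeman direction code 0..7.
      FREEMAN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                 (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

      def freeman_chain(contour):
          # Encode an ordered closed contour of (row, col) points as direction codes.
          codes = []
          for (r0, c0), (r1, c1) in zip(contour, contour[1:] + contour[:1]):
              codes.append(FREEMAN[(r1 - r0, c1 - c0)])
          return codes

      # A tiny 2 x 2 square traced clockwise.
      square = [(0, 0), (0, 1), (1, 1), (1, 0)]
      print(freeman_chain(square))   # [0, 6, 4, 2]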

  7. Wavelet for Ultrasonic Flaw Enhancement and Image Compression

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Tsukada, K.; Li, L. Q.; Hanasaki, K.

    2003-03-01

    Ultrasonic imaging has been widely used in Non-destructive Testing (NDT) and medical applications. However, the image is always degraded by blur and noise. Besides, the pressure on both storage and transmission gives rise to the need for image compression. We apply the 2-D Discrete Wavelet Transform (DWT) to C-scan 2-D images to realize flaw enhancement and image compression, taking advantage of the scale and orientation selectivity of the DWT. Wavelet coefficient thresholding and scalar quantization are employed, respectively. Furthermore, we realize the unification of flaw enhancement and image compression in one process. The reconstructed image from the compressed data gives a clearer interpretation of the flaws at a much smaller bit rate.

  8. Parallel image compression circuit for high-speed cameras

    NASA Astrophysics Data System (ADS)

    Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru

    2005-02-01

    In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 x 4-point 2-dimensional DCT using a DA (distributed arithmetic) method, zigzag scanning of 4 blocks of the 2-D DCT coefficients, and 1-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with that of the JPEG algorithm. It is found that the proposed compression circuits require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of the reconstructed image using the proposed encoding method is better than that of JPEG in the low-compression-ratio region.

  9. Techniques for region coding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2004-01-01

    Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.

  10. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
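
    A schematic of the kind of segmentation-based preprocessing suggested above, using simple thresholding and morphological operators from scipy (the synthetic slice, the bone threshold, and the structuring-element size are illustrative assumptions, not the study's parameters):

      import numpy as np
      from scipy import ndimage

      # Synthetic CT-like slice in Hounsfield units: soft tissue around 40 HU,
      # with a bright ring standing in for the skull at about 1000 HU.
      yy, xx = np.mgrid[-128:128, -128:128]
      radius = np.hypot(yy, xx)
      ct = np.full((256, 256), 40.0)
      ct[(radius > 100) & (radius < 110)] = 1000.0

      # Threshold the bright skull bone, then clean the mask with morphology.
      bone = ct > 300.0
      bone = ndimage.binary_closing(bone, structure=np.ones((5, 5)))

      # The interior region (inside the skull ring) carries most diagnostic
      # information; fill the ring and remove the bone itself to isolate it.
      interior = ndimage.binary_fill_holes(bone) & ~bone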

  11. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  12. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG-compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probabilities of the DWT coefficients follow Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result
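
    A short numpy check of the kind of analysis described above: compare the first-significant-digit histogram of a set of coefficients with the Benford distribution P(d) = log10(1 + 1/d) and report a simple deviation figure (the synthetic Laplacian coefficients and the mean-squared divergence used here are illustrative assumptions, not the paper's definitions):

      import numpy as np

      def first_digits(values):
          # First significant digit (1-9) of each nonzero value.
          v = np.abs(values[values != 0])
          return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

      # Synthetic heavy-tailed coefficients standing in for DWT coefficients.
      coeffs = np.random.default_rng(6).laplace(scale=50.0, size=100000)

      digits = first_digits(coeffs)
      observed = np.array([(digits == d).mean() for d in range(1, 10)])
      benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

      divergence = np.mean((observed - benford) ** 2)   # one simple deviation measure
      print(np.round(observed, 4), np.round(benford, 4), divergence)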

  13. Image compression and transmission based on LAN

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Li, Yufeng; Zhang, Zhijiang

    2004-11-01

    In this work an embedded system is designed which implements MPEG-2 LAN transmission of CVBS or S-video signals. The hardware consists of three parts. The first is digitization of the analog inputs, CVBS or S-video (Y/C), from TV or VTR sources. The second is MPEG-2 compression coding, primarily performed by a single-chip MPEG-2 audio/video encoder; its output is an MPEG-2 system PS/TS. The third part includes data stream packing, LAN access, and system control based on an ARM microcontroller. It packs the encoded stream into Ethernet data frames and accesses the LAN, and it accepts Ethernet data packets bearing control information from the network and decodes the corresponding commands to control digitization, coding, and other operations. In order to increase the network transmission rate to match the MPEG-2 data stream, an efficient TCP/IP network protocol stack is constructed directly on the network hardware provided by the embedded system, instead of using an ordinary operating system for embedded systems. In the design of the network protocol stack, to obtain a high LAN transmission rate on a low-end ARM, a special transmission channel is opened for the MPEG-2 stream. The designed system has been tested on an experimental LAN. The experiment shows a maximum LAN transmission rate of up to 12.7 Mbps with good sound and image quality, and satisfactory system reliability.

  14. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.

  15. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from encryption data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.

  16. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it allows the image to be de-noised and its contours to be enhanced.

  17. Investigation into the geometric consequences of processing substantially compressed images

    NASA Astrophysics Data System (ADS)

    Tempelmann, Udo; Nwosu, Zubbi; Zumbrunn, Roland M.

    1995-07-01

    One of the major driving forces behind digital photogrammetric systems is the continued drop in the cost of digital storage systems. However, terrestrial remote sensing systems continue to generate enormous volumes of data due to smaller pixels, larger coverage, and increased multispectral and multitemporal possibilities. Sophisticated compression algorithms have been developed, but the reduced visual quality of their output, which impedes object identification, and the resulting geometric deformation have been limiting factors in employing compression. Compression and decompression time is also an issue, but of less importance due to off-line possibilities. Two typical image blocks have been selected: one sub-block from a SPOT image, and an image of industrial targets taken with an off-the-shelf CCD. Three common compression algorithms have been chosen: JPEG, wavelet, and fractal. The images are run through the compression/decompression cycle, with parameters chosen to cover the whole range of available compression ratios. Points are identified on these images and their locations are compared against those in the originals. These results are presented to assist the choice of compression facilities, weighing metric quality against storage availability. Fractals offer the best visual quality, but JPEG, closely followed by wavelets, introduces fewer geometric defects. JPEG seems to offer the best all-around performance when geometric quality, visual quality, and compression/decompression speed are considered.

  18. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which offers low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE output is merged with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes, tightly coupled to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than the traditional compression approaches. PMID:25110741
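
    As a rough illustration of the bit-plane encoding stage mentioned above, the sketch below splits a block of quantized wavelet coefficients into bit planes, most significant first. This is only a simplified stand-in; the CCSDS BPE and the QC-LDPC Slepian-Wolf coupling described in the paper are considerably more involved.

```python
import numpy as np

def bit_planes(coeffs, num_planes):
    """Split non-negative integer coefficients into bit planes (MSB first),
    a simplified stand-in for the bit plane encoder (BPE)."""
    return [((coeffs >> b) & 1).astype(np.uint8)
            for b in range(num_planes - 1, -1, -1)]

# toy 4x4 block of quantized wavelet coefficient magnitudes
block = np.array([[12, 3, 0, 1],
                  [ 5, 2, 1, 0],
                  [ 0, 1, 0, 0],
                  [ 1, 0, 0, 0]])
for b, plane in zip(range(3, -1, -1), bit_planes(block, num_planes=4)):
    print(f"bit plane {b}:\n{plane}")
```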

  19. Image Compression Using Vector Quantization with Variable Block Size Division

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori

    In this paper, we propose a method for compressing a still image using vector quantization (VQ). The local fractal dimension (LFD) is computed to divide an image into blocks of variable size. The LFD reflects the complexity of local regions of an image: regions with higher LFD values than other regions are partitioned into small blocks of pixels, while regions with lower LFD values are partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode, which improves the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by the discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then transformed by the inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. The performance of the proposed method is slightly better than that of JPEG. When the learning images used to construct a CB differ from the test images, the compression rate is comparable to those of methods proposed so far, while image quality evaluated by NPIQM (normalized perceptual image quality measure) is almost the highest. The results show that the proposed method is effective for still image compression.
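
    As a rough sketch of the encode/decode path described above (2-D DCT of a block, then nearest-code-vector lookup), the snippet below uses SciPy's DCT and a random, untrained codebook purely as placeholders; the LFD-driven variable block sizes and the trained codebooks of the paper are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def vq_encode(block, codebook):
    """Encode one block: 2-D DCT, then the index of the nearest code vector."""
    vec = dctn(block, norm='ortho').ravel()
    return int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))

def vq_decode(index, codebook, block_shape):
    """Decode: look up the code vector and apply the inverse DCT."""
    return idctn(codebook[index].reshape(block_shape), norm='ortho')

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 16))   # 256 code vectors for 4x4 blocks (toy, untrained)
block = rng.random((4, 4))
idx = vq_encode(block, codebook)            # only this index needs to be stored/transmitted
recon = vq_decode(idx, codebook, (4, 4))
```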

  20. CMOS low data rate imaging method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng

    2012-07-01

    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low data rate imaging approach by implementing compressed sensing (CS). On the basis of the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical currents output from the pixels in a column are combined, with weights specified by voltage, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM). Each element of the VMM takes the total value of each column as its input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm. The percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.

  1. OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis

    PubMed Central

    Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David J.

    2015-01-01

    Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert-opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344

  2. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.
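
    A minimal sketch of the DPCM idea underlying the scheme: each pixel is predicted from its left neighbour and only the integer residual is kept for entropy coding. The predictor and the source-code design in the paper are more elaborate; this only illustrates the distortionless round trip, and the function names are illustrative.

```python
import numpy as np

def dpcm_residuals(image):
    """Previous-pixel DPCM: predict each pixel from its left neighbour and
    keep the integer residuals, which would then be entropy coded losslessly."""
    img = image.astype(np.int32)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]   # residual = pixel - left neighbour
    return res

def dpcm_reconstruct(res):
    """Exact inverse: cumulative sum along rows restores the original pixels."""
    return np.cumsum(res, axis=1).astype(np.int32)

img = np.random.default_rng(0).integers(0, 256, size=(8, 8))
res = dpcm_residuals(img)
assert np.array_equal(dpcm_reconstruct(res), img)   # distortionless round trip
```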

  3. Increasing FTIR spectromicroscopy speed and resolution through compressive imaging

    SciTech Connect

    Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C

    2007-10-15

    At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time and the spatial resolution is limited by the effects of diffraction. One technique to speed up spectral image acquisition, described here, uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with the ALS Beamline 1.4.3 Nic-Plan infrared microscope, the Beamline 1.4.4 Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.

  4. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision and vast data volume. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotree coding. After the image is smoothed, it is decomposed with Haar filters. The wavelet coefficients are then quantized adaptively. In this way, we can maximize compression efficiency and achieve better subjective visual quality. This algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of this algorithm with an image transmission experiment over a network.
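
    The sketch below shows one level of the Haar analysis step mentioned above, followed by a crude coefficient thresholding as a stand-in for adaptive quantization; the zerotree coding stage of the algorithm is not reproduced, and all function names are illustrative.

```python
import numpy as np

def haar_level(x):
    """One level of a 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # horizontal average
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # horizontal detail
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def threshold(band, t):
    """Crude stand-in for adaptive quantization: zero out small coefficients."""
    return np.where(np.abs(band) < t, 0.0, band)

img = np.random.rand(256, 256)             # placeholder for a smoothed medical image
ll, lh, hl, hh = haar_level(img)
lh, hl, hh = (threshold(b, 0.05) for b in (lh, hl, hh))
```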

  5. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression. PMID:22722754

  6. Image compression and decompression based on gazing area

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Endo, Chizuko; Haneishi, Hideaki; Miyake, Yoichi

    1996-04-01

    In this paper, we introduce a new data compression and decompression technique for searching for a target image based on its gazing area. Many methods of data compression have been proposed; in particular, the JPEG compression technique has been widely used as a standard method. However, this method is not always effective for searching for target images in an image filing system. In a previous paper, through eye movement analysis, we found that images have a particular gazing area. Since the gazing area is considered the most important region of the image, we use this information to compress and transmit the image. A method named fixation-based progressive image transmission is introduced to transmit the image effectively. In this method, after the gazing area is estimated, that area is transmitted first and the other regions are transmitted afterwards. If the first transmitted region is not of interest, we can move on to other images. Therefore, the target image can be found in the filing system effectively. We compare the search time of the proposed method with that of the conventional method. The results show that the proposed method is faster than the conventional one at finding the target image.

  7. Measurement dimensions compressed spectral imaging with a single point detector

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Feng; Yu, Wen-Kai; Yao, Xu-Ri; Dai, Bin; Li, Long-Zhen; Wang, Chao; Zhai, Guang-Jie

    2016-04-01

    An experimental demonstration of spectral imaging with compressed measurement dimensions has been performed. With the dual compressed sensing (CS) method we derive, the spectral image of a colored object can be obtained with only a single point detector, and sub-sampling is achieved in both the spatial and spectral domains. The performance of dual CS spectral imaging is analyzed, including the effects of the dual modulation numbers and measurement noise on imaging quality. Our scheme provides a stable, high-flux measurement approach to spectral imaging.

  8. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's repetition frequency. After reorganization, the reformed image is decomposed into image blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with a Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  9. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress the data to reduce storage and transfer cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm. PMID:26671800
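
    An illustrative sketch of the core idea of mapping a genomic sequence to a two-dimensional binary image; the 2-bit-per-base encoding and the image width used here are assumptions rather than CoGI's exact mapping, and the rectangular partition coding stage is omitted.

```python
import numpy as np

def genome_to_bitmap(seq, width=64):
    """Map a DNA string to a 2-D binary image: each base becomes a fixed
    2-bit pattern (an illustrative encoding, not CoGI's exact mapping)."""
    code = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}
    bits = [b for base in seq for b in code.get(base, (0, 0))]
    rows = -(-len(bits) // width)                 # ceiling division
    bits += [0] * (rows * width - len(bits))      # pad the last row
    return np.array(bits, dtype=np.uint8).reshape(rows, width)

bitmap = genome_to_bitmap("ACGTACGTTTGACCA" * 40)
print(bitmap.shape)   # binary image ready for partition-based coding
```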

  10. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most existing image encryption techniques carry security risks when taking linear transforms or suffer data expansion when adopting nonlinear transformations directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with a nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaos map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and reliability of this scheme.

  11. Segmentation and thematic classification of color orthophotos over non-compressed and JPEG 2000 compressed images

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Cea, C.; Pons, X.

    2012-04-01

    Lossy compression is now increasingly used due to the enormous amount of images gathered by airborne and satellite sensors. Nevertheless, the implications of these compression procedures have been scarcely assessed. Segmentation before digital image classification is also a technique increasingly used in GEOBIA (GEOgraphic Object-Based Image Analysis). This paper presents an object-oriented application for image analysis using color orthophotos (RGB bands) and a Quickbird image (RGB and a near-infrared band). We use different compression levels in order to study the effects of data loss on the segmentation-based classification results. A set of 4 color orthophotos with 1 m spatial resolution and a 4-band Quickbird satellite image with 0.7 m spatial resolution, each covering an area of about 1200 × 1200 m² (144 ha), was chosen for the experiment. These scenes were compressed at 8 compression ratios (between 5:1 and 1000:1) using the JPEG 2000 standard. There were 7 thematic categories: dense vegetation, herbaceous, bare lands, road and asphalt areas, building areas, swimming pools and rivers (if necessary). The best category classification was obtained using a hierarchical classification algorithm over the second segmentation level. The same segmentation and classification methods were applied in order to establish a semi-automatic technique for all 40 images. To estimate the overall accuracy, a confusion matrix was calculated using a photointerpreted ground-truth map (fully covering 25% of each orthophoto). The mean accuracy over non-compressed images was 66% for the orthophotos and 72% for the Quickbird image. This moderate overall accuracy makes it possible to properly assess the compression effects (if the initial overall accuracy were very high, the possible positive effects of compression would not be noticeable). The first and second compression levels (up to 10:1) obtain results similar to the reference ones. Differences in the third to

  12. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  13. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and make use of information, and there is a large gap between the data volume of digitized multimedia information and the storage resources and network bandwidth that current hardware technology can provide. Image storage and transmission are a prominent example of this problem. Image compression is useful when images need to be transmitted across networks in a less costly way, reducing data volume and transmission time. This paper discusses the effect of image compression on a face recognition system. For compression purposes, we adopted the JPEG, JPEG 2000, and JPEG XR coding standards. The face recognition algorithm studied is SIFT. Experimental results show that the system still maintains a high recognition rate under high compression ratios, and that the JPEG XR standard is superior to the other two in terms of performance and complexity.

  14. Pre-Processor for Compression of Multispectral Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron

    2006-01-01

    A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.

  15. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bit-plane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of its development.

  16. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  17. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast, together with an error pooling technique, resulting in minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
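
    A small sketch of how a quantization matrix acts on DCT coefficients, in the spirit of the method described above; the flat placeholder matrix stands in for the image-adapted, visually weighted matrix the patent derives through luminance/contrast masking and error pooling.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Quantize an 8x8 DCT block with a quantization matrix; larger entries
    discard more of the visually less important coefficient energy."""
    coeffs = dctn(block.astype(float) - 128.0, norm='ortho')
    return np.round(coeffs / qmatrix).astype(np.int32)

def dequantize_block(q, qmatrix):
    """Rescale the quantized coefficients and invert the DCT."""
    return idctn(q * qmatrix, norm='ortho') + 128.0

qmatrix = np.full((8, 8), 16.0)             # placeholder; the patent adapts this per image
block = np.random.default_rng(0).integers(0, 256, (8, 8))
recon = dequantize_block(quantize_block(block, qmatrix), qmatrix)
```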

  18. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  19. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  20. Architecture for hardware compression/decompression of large images

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed; Perroton, Laurent; Gailhard, Stephane; Denoulet, Julien; Bartier, Frederic

    2001-04-01

    In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of implementing it on an FPGA-based architecture. The algorithm is lossless and is applied to large bi-level images. It ensures a minimum compression rate for the images we are considering. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes.

  1. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images, which will be accessed by users across computer networks. Accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application, since decompression of VQ-compressed images is a table lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data generally requires expert-level knowledge and is not straightforward to use. This is especially true in the case of VQ, which involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.

  2. On the use of the Stockwell transform for image compression

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Orchard, Jeff

    2009-02-01

    In this paper, we investigate the use of the Stockwell Transform for image compression. The proposed technique uses the Discrete Orthogonal Stockwell Transform (DOST), an orthogonal version of the Discrete Stockwell Transform (DST). These mathematical transforms provide a multiresolution spatial-frequency representation of a signal or image. First, we give a brief introduction to the Stockwell transform and the DOST. Then we outline a simple compression method based on setting the smallest coefficients to zero. In an experiment, we use this compression strategy on three different transforms: the fast Fourier transform, the Daubechies wavelet transform and the DOST. The results show that the DOST outperforms the two other methods.
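
    The coefficient-dropping strategy outlined above can be sketched as follows; since the DOST is not available in standard libraries, the FFT is used here purely to illustrate keeping the largest-magnitude coefficients and zeroing the rest.

```python
import numpy as np

def keep_largest(coeffs, keep_ratio=0.1):
    """Zero all but the largest-magnitude transform coefficients, the simple
    compression strategy the paper applies to FFT, wavelet and DOST data."""
    flat = coeffs.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    cutoff = np.sort(np.abs(flat))[-k]
    flat[np.abs(flat) < cutoff] = 0
    return flat.reshape(coeffs.shape)

img = np.random.rand(128, 128)
spectrum = np.fft.fft2(img)                        # FFT stands in for the DOST here
approx = np.real(np.fft.ifft2(keep_largest(spectrum, 0.05)))
```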

  3. PCIF: An Algorithm for Lossless True Color Image Compression

    NASA Astrophysics Data System (ADS)

    Barcucci, Elena; Brlek, Srecko; Brocchi, Stefano

    An efficient algorithm for compressing true color images is proposed. The technique uses a combination of simple and computationally cheap operations. The three main steps consist of predictive image filtering, decomposition of data, and data compression through the use of run length encoding, Huffman coding and grouping the values into polyominoes. The result is a practical scheme that achieves good compression while providing fast decompression. The approach has performance comparable to, and often better than, competing standards such as JPEG 2000 and JPEG-LS.

  4. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make better use of the transmission bits they have been allocated.

  5. Hyperspectral image compression using an online learning method

    NASA Astrophysics Data System (ADS)

    Ülkü, İrem; Töreyin, B. Uğur

    2015-05-01

    A hyperspectral image compression method is proposed using an online dictionary learning approach. The online learning mechanism aims to use the fewest dictionary elements for each hyperspectral image under consideration. In order to meet this "sparsity constraint", the basis pursuit algorithm is used. Hyperspectral imagery from AVIRIS datasets is used for testing purposes. The effects of the number of non-zero dictionary elements on compression performance are analyzed. Results indicate that the proposed online dictionary learning algorithm may be utilized at higher data rates, as it performs better in terms of PSNR values compared with state-of-the-art predictive lossy compression schemes.
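
    A toy sketch of sparse coding over a dictionary, in the spirit of the sparsity constraint described above; the dictionary here is random rather than learned online, the dimensions are placeholders, and greedy orthogonal matching pursuit is used as a cheap surrogate for the basis pursuit algorithm the paper employs.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((224, 500))        # toy 224-band dictionary with 500 atoms
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = D[:, rng.choice(500, 5)] @ rng.standard_normal(5)   # spectrum sparse in D

# Greedy sparse coding: only a handful of dictionary elements represent x.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D, x)
code = omp.coef_                           # sparse code; only non-zeros need storing
print(np.count_nonzero(code), "non-zero dictionary elements used")
```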

  6. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.

  7. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm

  8. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, where we introduced the application of the Hilbert scan to a principal component algorithm (PCA) with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression. We apply the Hilbert scan to the APEX algorithm to improve results.

  9. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.

  10. Symbolic document image compression based on pattern matching techniques

    NASA Astrophysics Data System (ADS)

    Shiah, Chwan-Yi; Yen, Yun-Sheng

    2011-10-01

    In this paper, a novel compression algorithm for Chinese document images is proposed. Initially, documents are segmented into readable components such as characters and punctuation marks. Similar patterns within the text are found by shape context matching and grouped to form a set of prototype symbols. Text redundancies can be removed by replacing repeated symbols by their corresponding prototype symbols. To keep the compression visually lossless, we use a multi-stage symbol clustering procedure to group similar symbols and to ensure that there is no visible error in the decompressed image. In the encoding phase, the resulting data streams are encoded by adaptive arithmetic coding. Our results show that the average compression ratio is better than the international standard JBIG2 and the compressed form of a document image is suitable for a content-based keyword searching operation.

  11. Lossy and lossless compression of MERIS hyperspectral images with exogenous quasi-optimal spectral transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2010-07-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can be carried on board a satellite. It is well known that the Karhunen-Loeve transform (KLT) can be sub-optimal for non-Gaussian data. However, it is generally recommended as the best calculable coding transform in practice. For a compression scheme compatible with both the JPEG 2000 Part 2 standard and the CCSDS recommendations for onboard satellite image compression, the concept and computation of optimal spectral transforms (OST) at high bit rates were carried out under weakly restrictive hypotheses. These linear transforms are optimal for reducing the spectral redundancies of multi- or hyperspectral images when the spatial redundancies are reduced with a fixed 2-D discrete wavelet transform. The problem with OSTs is their heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of hyperspectral images from the MERIS spectrometer. Moreover, we compute an integer variant of OrthOST for lossless compression. The performance is compared to that of the KLT in both lossy and lossless compression. We observe good performance from the exogenous OrthOST.
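
    For reference, the per-image spectral KLT that the exogenous OrthOST is meant to approximate can be sketched as a PCA along the band axis; the cube dimensions and function names below are illustrative assumptions, and the exogenous (offline-learned) transform itself is not shown.

```python
import numpy as np

def spectral_klt(cube):
    """Karhunen-Loeve (PCA) transform along the spectral axis of a
    (bands, rows, cols) cube; rows of the returned matrix are ordered
    by decreasing variance so later components carry little energy."""
    b, r, c = cube.shape
    X = cube.reshape(b, -1)
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]              # band-to-band covariance
    _, vecs = np.linalg.eigh(cov)           # eigenvectors, ascending eigenvalues
    klt = vecs[:, ::-1].T                   # reorder to decreasing variance
    return klt, (klt @ X).reshape(b, r, c)

cube = np.random.rand(15, 32, 32)           # toy 15-band image
T, decorrelated = spectral_klt(cube)
```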

  12. Remote sensing images fusion based on block compressed sensing

    NASA Astrophysics Data System (ADS)

    Yang, Sen-lin; Wan, Guo-bin; Zhang, Bian-lian; Chong, Xin

    2013-08-01

    A novel strategy for remote sensing image fusion is presented based on block compressed sensing (BCS). First, the multiwavelet transform (MWT) is employed for better sparse representation of remote sensing images. The sparse representations of the image blocks are then compressively sampled by BCS with an identical scrambled block Hadamard operator. The measurements are then fused by a linear weighting rule in the compressive domain. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction (GPSR) algorithm. Experimental results analyze the selection of block dimension and sampling rate, as well as the convergence performance of the proposed method. A field test on remote sensing image fusion shows the validity of the proposed method.

  13. Image analysis and compression: renewed focus on texture

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.

    2010-01-01

    We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.

  14. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  15. A Novel Psychovisual Threshold on Large DCT for Image Compression

    PubMed Central

    2015-01-01

    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance to reduce the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order serves as the basis of the psychovisual threshold in image compression. This research study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides significant improvement in the quality of output images. Experimental results show that the large quantization tables derived from the psychovisual threshold produce output images that are largely free of visual artifacts, and that the psychovisual threshold yields better image quality at higher compression rates than JPEG image compression. PMID:25874257

  16. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to unsecured communication media. This paper discusses spatial domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performance of these techniques was compared based on bit reduction and compression ratio. LZW was found to be better than the others and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI performance was compared with and found to be better than other watermarking schemes. PMID:26839914
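
    A simplified sketch of the ROI-hash watermark compression and LSB embedding described above; zlib stands in for the LZW compressor chosen in the paper, the ROI slice is a placeholder, and the tamper-localization and recovery logic of TDARWMI is not shown.

```python
import hashlib
import zlib
import numpy as np

def embed_watermark(image, roi_slice):
    """Losslessly compress the ROI bytes plus their hash and hide the result
    in the LSBs of the RONI pixels (zlib stands in for the paper's LZW)."""
    img = image.copy()
    roi = img[roi_slice].tobytes()
    payload = zlib.compress(roi + hashlib.sha256(roi).digest())
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    roni = np.ones(img.shape, dtype=bool)
    roni[roi_slice] = False                      # embed only outside the ROI
    flat_idx = np.flatnonzero(roni)[:bits.size]
    if flat_idx.size < bits.size:
        raise ValueError("RONI too small for the watermark")
    img.flat[flat_idx] = (img.flat[flat_idx] & 0xFE) | bits
    return img

img = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
marked = embed_watermark(img, (slice(60, 90), slice(60, 90)))
```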

  17. Preprocessing and compression of Hyperspectral images captured onboard UAVs

    NASA Astrophysics Data System (ADS)

    Herrero, Rolando; Cadirola, Martin; Ingle, Vinay K.

    2015-10-01

    Advancements in image sensors and signal processing have led to the successful development of lightweight hyperspectral imaging systems that are critical to the deployment of Photometry and Remote Sensing (PaRS) capabilities in unmanned aerial vehicles (UAVs). In general, hyperspectral data cubes include a few dozen spectral bands that are extremely useful for remote sensing applications ranging from detection of land vegetation to monitoring of atmospheric products derived from the processing of lower-level radiance images. Because these data cubes are captured in the challenging environment of UAVs, where resources are limited, source encoding by means of compression is a fundamental mechanism that considerably improves overall system performance and reliability. In this paper, we focus on the hyperspectral images captured by a state-of-the-art commercial hyperspectral camera and show the results of applying ultraspectral data compression to the obtained data set. Specifically, the compression scheme that we introduce integrates two stages: (1) preprocessing and (2) compression itself. The outcomes of this procedure are linear prediction coefficients and an error signal that, when encoded, result in a compressed version of the original image. The preprocessing and compression algorithms are then optimized and their time complexity analyzed to guarantee successful deployment on low-power ARM-based embedded processors in the context of UAVs. Lastly, we compare the proposed architecture against other well-known schemes and show that the compression scheme presented in this paper outperforms all of them, delivering both lower bit rates and lower distortion.
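
    As an illustration of the preprocessing stage described above, the sketch below fits a simple least-squares linear predictor of one band from the previous one and returns the coefficients and residual that would be passed on for encoding; this single-band, two-coefficient form is an assumption for illustration, not the paper's actual predictor structure.

```python
import numpy as np

def predict_band(prev_band, cur_band):
    """Least-squares linear prediction of the current band from the previous
    one; the (gain, offset) coefficients and the residual are what a
    downstream encoder would compress."""
    x = prev_band.ravel().astype(np.float64)
    y = cur_band.ravel().astype(np.float64)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - (gain * x + offset)
    return (gain, offset), residual.reshape(cur_band.shape)

cube = np.random.rand(10, 64, 64)              # toy hyperspectral cube (bands, rows, cols)
coeffs, err = predict_band(cube[3], cube[4])   # residual is near zero when bands correlate
```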

  18. International standards activities in image data compression

    NASA Technical Reports Server (NTRS)

    Haskell, Barry

    1989-01-01

    Integrated Services Digital Network (ISDN); coding for color TV, video conferencing, video conferencing/telephone, and still color images; ISO color image coding standard; and ISO still picture standard are briefly discussed. This presentation is represented by viewgraphs only.

  19. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    PubMed

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi

    2015-01-01

    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability, and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme that iterates between SAR imaging and updating of the weighted autoregressive model to solve this problem. Finally, experimental results demonstrate the validity and generality of the proposed approach. PMID:25686307

  20. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  1. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
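
    A small sketch of a uniform scalar quantizer with a deadzone, the basic per-subband quantizer of the wavelet/scalar quantization method described above; the step size and deadzone factor below are placeholders, and the wavelet decomposition and entropy coding stages are omitted.

```python
import numpy as np

def deadzone_quantize(subband, step, deadzone=1.2):
    """Uniform scalar quantizer with a widened zero bin (deadzone);
    coefficients inside the deadzone map to index 0."""
    mag = np.abs(subband)
    q = np.where(mag < deadzone * step / 2, 0,
                 np.sign(subband) * np.floor((mag - deadzone * step / 2) / step + 1))
    return q.astype(np.int32)

def deadzone_dequantize(q, step, deadzone=1.2):
    """Reconstruct at the midpoint of each non-zero quantization bin."""
    return np.where(q == 0, 0.0,
                    np.sign(q) * ((np.abs(q) - 0.5) * step + deadzone * step / 2))

band = np.random.default_rng(0).standard_normal((32, 32))
idx = deadzone_quantize(band, step=0.5)     # step would be chosen adaptively per subband
recon = deadzone_dequantize(idx, step=0.5)
```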

  2. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  3. Multiview image compression based on LDV scheme

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent

    2011-03-01

    In recent years, we have seen several different approaches dealing with multiview compression. First, there is the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. We then discuss the signal deformations generated by our approach.

  4. Compressive spectral integral imaging using a microlens array

    NASA Astrophysics Data System (ADS)

    Feng, Weiyi; Rueda, Hoover; Fu, Chen; Qian, Chen; Arce, Gonzalo R.

    2016-05-01

    In this paper, a compressive spectral integral imaging system using a microlens array (MLA) is proposed. This system can sense the 4D spectro-volumetric information into a compressive 2D measurement image on the detector plane. In the reconstruction process, the 3D spatial information at different depths and the spectral responses of each spatial volume pixel can be obtained simultaneously. In the simulation, sensing of the 3D objects is carried out by optically recording elemental images (EIs) using a scanned pinhole camera. With the elemental images, a spectral data cube with different perspectives and depth information can be reconstructed using the TwIST algorithm in the multi-shot compressive spectral imaging framework. Then, the 3D spatial images with one-dimensional spectral information at arbitrary depths are computed using the computational integral imaging method by inversely mapping the elemental images according to geometrical optics. The simulation results verify the feasibility of the proposed system. The 3D volume images and the spectral information of the volume pixels can be successfully reconstructed at the location of the 3D objects. The proposed system can capture both 3D volumetric images and spectral information at video rate, which is valuable in biomedical imaging and chemical analysis.

  5. Compression of 3D integral images using wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.

  6. Context and task-aware knowledge-enhanced compressive imaging

    NASA Astrophysics Data System (ADS)

    Rao, Shankar; Ni, Kang-Yu; Owechko, Yuri

    2013-09-01

    We describe a foveated compressive sensing approach for image analysis applications that utilizes knowledge of the task to be performed to reduce the number of required measurements compared to conventional Nyquist sampling and compressive-sensing-based approaches. Our Compressive Optical Foveated Architecture (COFA) adapts the dictionary and compressive measurements to structure and sparsity in the signal, task, and scene by reducing measurement and dictionary mutual coherence and increasing sparsity using principles of actionable information and foveated compressive sensing. Actionable information is used to extract task-relevant regions of interest (ROIs) from a low-resolution scene analysis by eliminating the effects of nuisances, for occlusion and anomalous motion detection. From the extracted ROIs, preferential measurements are taken using foveation as part of the compressive sensing adaptation process. The task-specific measurement matrix is optimized by using a novel saliency-weighted coherence minimization with respect to the learned signal dictionary. This incorporates the relative usage of the atoms in the dictionary. Therefore, the measurement matrix is not random, as in conventional compressive sensing, but is based on the dictionary structure and atom distributions. We utilize a patch-based method to learn the signal priors. A tree-structured dictionary of image patches is learned using K-SVD, which can sparsely represent any given image patch through the tree structure. We have implemented COFA in an end-to-end simulation of a vehicle fingerprinting task for aerial surveillance using foveated compressive measurements adapted to hierarchical ROIs consisting of background, roads, and vehicles. Our results show a 113x reduction in measurements over conventional sensing and a 28x reduction over compressive sensing using random measurements.

  7. Compressive spectral imaging systems based on linear detector

    NASA Astrophysics Data System (ADS)

    Liu, Yanli; Zhong, Xiaoming; Zhao, Haibo; Li, Huan

    2015-08-01

    Spectrometers capture a large amount of raw, three-dimensional (3D) spatial-spectral scene information with two-dimensional (2D) focal plane arrays (FPA). In many applications, including imaging systems and video cameras, the Nyquist rate is so high that too many samples result, making compression a precondition for storage or transmission. Compressive sensing theory employs non-adaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections using an optimization process. This article reviews the fundamental spectral imagers based on compressive sensing, namely the coded aperture snapshot spectral imagers (CASSI) and high-resolution imagers via moving random exposure. In addition, the article proposes a new method to implement spectral imagers with linear-detector imaging systems based on spectral compression. The article describes the system architecture and coding process, and it illustrates results with real data and imagery. Simulations show the performance improvement attained by the new model, and the complexity of the imaging system is greatly reduced by using a linear detector.

  8. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
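
    A much-reduced sketch of the underlying idea: candidate vectors of real-valued forward/inverse filter coefficients are scored by the MSE of a signal reconstructed after quantization, and the best candidates are mutated. The one-dimensional filter model, population size, quantization step, and mutation scale are all illustrative assumptions, not the authors' three-level MRA setup or their GA operators.

```python
# Much-reduced sketch: evolve matched forward/inverse filter coefficients,
# scoring each candidate by the MSE of a signal reconstructed after
# quantization. Filter length, population size, quantization step, and
# mutation scale are illustrative assumptions, not the authors' GA or MRA.
import numpy as np

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=4096))   # smooth random-walk stand-in for image rows

def analysis_synthesis(x, fwd, inv, q=16):
    """Filter, quantize with step q, then filter back with the inverse kernel."""
    coeffs = np.convolve(x, fwd, mode='same')
    coeffs = np.round(coeffs / q) * q        # source of quantization error
    return np.convolve(coeffs, inv, mode='same')

def fitness(pair):
    fwd, inv = pair[:9], pair[9:]
    recon = analysis_synthesis(signal, fwd, inv)
    return float(np.mean((signal - recon) ** 2))

# Seed the population around an identity-like kernel pair and mutate.
base = np.zeros(18); base[4] = 1.0; base[13] = 1.0
population = [base + 0.05 * rng.normal(size=18) for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness)
    parents = population[:10]                       # truncation selection
    population = parents + [p + 0.02 * rng.normal(size=18)
                            for p in parents for _ in range(2)]
print("best MSE:", fitness(min(population, key=fitness)))
```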

  9. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. In addition, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity within a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
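
    A hedged sketch of the two ingredients named above: a simple edge-sensitive predictor and Golomb-Rice coding of the mapped residuals. The MED-style predictor (borrowed from LOCO-I/JPEG-LS) and the fixed Rice parameter k are illustrative stand-ins for FAIC's own rules.

```python
# Hedged sketch of two FAIC-like ingredients: an edge-sensitive predictor and
# Golomb-Rice coding of the mapped residuals. The MED predictor (borrowed from
# LOCO-I/JPEG-LS) and the fixed Rice parameter k are illustrative stand-ins.
import numpy as np

def med_predict(img):
    """Median edge detector prediction from the left/upper/upper-left pixels."""
    pred = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            a = int(img[r, c - 1]) if c > 0 else 0                # left
            b = int(img[r - 1, c]) if r > 0 else 0                # above
            d = int(img[r - 1, c - 1]) if r > 0 and c > 0 else 0  # above-left
            if d >= max(a, b):
                pred[r, c] = min(a, b)
            elif d <= min(a, b):
                pred[r, c] = max(a, b)
            else:
                pred[r, c] = a + b - d
    return pred

def rice_encode(value, k):
    """Golomb-Rice code: unary quotient followed by a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

rng = np.random.default_rng(2)
# Smooth toy image (8-bit gradient plus mild noise) so prediction pays off.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = np.clip(x + y + rng.integers(0, 5, size=x.shape), 0, 255).astype(np.int64)

residual = img - med_predict(img)
mapped = np.where(residual >= 0, 2 * residual, -2 * residual - 1)  # to non-negative
bitstream = "".join(rice_encode(int(v), k=4) for v in mapped.ravel())
print("bits per pixel:", len(bitstream) / img.size)
```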

  10. Improved vector quantization scheme for grayscale image compression

    NASA Astrophysics Data System (ADS)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.

    2012-06-01

    This paper proposes an improved image coding scheme based on vector quantization. It is well known that the image quality of a VQ-compressed image is poor when a small-sized codebook is used. To solve this problem, the mean value of the image block is taken as an alternative block encoding rule to improve the image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach comprising a linear prediction technique and Huffman coding is employed in the proposed scheme. The results show that the proposed scheme achieves better image quality than vector quantization while keeping the bit rate low.
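
    A toy sketch of the alternative block-encoding rule: each 4x4 block is coded by its nearest codeword unless the match is poor, in which case only the block mean is kept. The random codebook, block size, and distortion threshold are illustrative assumptions; the paper's codebook training and the two-stage lossless coding of the indices are omitted.

```python
# Hedged sketch of the alternative block-encoding rule: a 4x4 block is coded by
# its nearest codeword unless the match is poor, in which case the block mean
# is transmitted instead. Codebook, block size, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(3)
blocks = rng.integers(0, 256, size=(500, 16)).astype(np.float64)   # 4x4 blocks
codebook = blocks[rng.choice(len(blocks), 64, replace=False)]      # toy codebook

THRESH = 800.0   # squared-error threshold for falling back to the block mean

codes = []
for blk in blocks:
    dists = np.sum((codebook - blk) ** 2, axis=1)
    best = int(np.argmin(dists))
    if dists[best] <= THRESH:
        codes.append(("vq", best))                 # index into the codebook
    else:
        codes.append(("mean", float(blk.mean())))  # poor-match / flat-block case

n_mean = sum(1 for kind, _ in codes if kind == "mean")
print(f"{n_mean} of {len(codes)} blocks coded by their mean value")
```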

  11. Hyperspectral image compression using bands combination wavelet transformation

    NASA Astrophysics Data System (ADS)

    Wang, Wenjie; Zhao, Zhongming; Zhu, Haiqing

    2009-10-01

    Hyperspectral imaging technology is at the forefront of remote sensing development in the 21st century and is one of the most important focuses of the remote sensing domain. Hyperspectral images can provide much more information than multispectral images and can solve many problems that multispectral imaging technology cannot. However, this advantage comes at the cost of a massive quantity of data, which complicates image processing, storage and transmission. Research on hyperspectral image compression methods therefore has important practical significance. This paper improves on the well-known KLT-WT-2DSPECK (Karhunen-Loeve transform + wavelet transformation + two-dimensional set partitioning embedded block compression) algorithm and proposes a KLT + bands-combination 2DWT + 2DSPECK algorithm. Experiments show that this method is effective.

  12. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. Medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-sized hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new, efficient biometric-based person verification/authentication systems. In the future, such systems could provide an additional layer of security for online transactions or for real-time surveillance.

  13. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

    This paper mainly studies wavelet-based ECW remote sensing image compression technology. Compared with the traditional JPEG compression technology and the newer wavelet-based JPEG2000 technology, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. How to use the ECW SDK is also discussed, and it is shown to be the best and fastest way to compress China-Brazil Earth Resource Satellite (CBERS) images.

  14. Compressive image acquisition and classification via secant projections

    NASA Astrophysics Data System (ADS)

    Li, Yun; Hegde, Chinmay; Sankaranarayanan, Aswin C.; Baraniuk, Richard; Kelly, Kevin F.

    2015-06-01

    Given its importance in a wide variety of machine vision applications, extending high-speed object detection and recognition beyond the visible spectrum in a cost-effective manner presents a significant technological challenge. As a step in this direction, we developed a novel approach for target image classification using a compressive sensing architecture. Here we report the first implementation of this approach utilizing the compressive single-pixel camera system. The core of our approach rests on the design of new measurement patterns, or projections, that are tuned to objects of interest. Our measurement patterns are based on the notion of secant projections of image classes that are constructed using two different approaches. Both approaches show at least a twofold improvement in terms of the number of measurements over the conventional, data-oblivious compressive matched filter. As more noise is added to the image, the second method proves to be the most robust.

  15. [Spatially modulated Fourier transform imaging spectrometer data compression research].

    PubMed

    Huang, Min; Xiangli, Bin; Yuan, Yan; Shen, Zhong; Lu, Qun-bo; Wang, Zhong-hou; Liu, Xue-bin

    2010-01-01

    The Fourier transform imaging spectrometer is a relatively new technique that has developed rapidly over the last ten years. When it is used on a satellite, the original data obtained by the spectrometer must be compressed because of data-transmission limits; the compressed data can then be transmitted, received on the ground, and decompressed, after which data processing yields the spectral data used by the end user. Data compression for Fourier transform imaging spectrometers is itself a new topic, and few papers address it at home or abroad. In this paper the authors present a data compression method that has been used in EDIS and has achieved good results. PMID:20302132

  16. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are always corrupted by noise, tends to cause over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress the noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWCs classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or bi-effected. Better denoising can then be realized by selectively thresholding the DWCs. In the 'local quantization' stage, different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows noise suppression together with preserved flaw characteristics.
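
    A minimal sketch of the coefficient handling described above: detail wavelet coefficients are classified against a noise-level estimate, noise-dominated ones are discarded, and the remainder are quantized. The MAD-based sigma estimate, the 3-sigma classification rule, and the quantization step are illustrative stand-ins for the paper's interscale DWCs classification and local quantization.

```python
# Hedged sketch: classify detail wavelet coefficients against a noise-level
# estimate, zero out the noise-dominated ones, and quantize the remainder.
# The MAD noise estimate and the 3-sigma / 2-sigma rules are stand-ins for the
# paper's interscale "DWCs classification" and local quantization.
import numpy as np
import pywt

rng = np.random.default_rng(4)
clean = np.outer(np.hanning(128), np.hanning(128)) * 200.0   # toy ultrasonic frame
noisy = clean + rng.normal(scale=10.0, size=clean.shape)

coeffs = pywt.wavedec2(noisy, 'db4', level=3)
approx, details = coeffs[0], coeffs[1:]

# Estimate the noise sigma from the finest diagonal band (standard MAD rule).
sigma = np.median(np.abs(details[-1][2])) / 0.6745

processed = [approx]
for (cH, cV, cD) in details:
    bands = []
    for band in (cH, cV, cD):
        keep = np.abs(band) > 3.0 * sigma        # "signal-dominated" coefficients
        q = 2.0 * sigma                          # coarse step for kept detail
        bands.append(np.where(keep, np.round(band / q) * q, 0.0))
    processed.append(tuple(bands))

recon = pywt.waverec2(processed, 'db4')
print("RMSE vs clean:", np.sqrt(np.mean((recon[:128, :128] - clean) ** 2)))
```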

  17. Counter-propagation neural network for image compression

    NASA Astrophysics Data System (ADS)

    Sygnowski, Wojciech; Macukow, Bohdan

    1996-08-01

    Recently, several image compression techniques based on neural network algorithms have been developed. In this paper, we propose a new method for image compression--the modified counter-propagation neural network algorithm, which is a combination of the self-organizing map of Kohonen and the outstar structure of Grossberg. This algorithm has been successfully used in many applications. The modification presented has also demonstrated interesting performance in comparison with the standard techniques. It was found that, at the learning stage, any image can be used for network training (without a significant influence on the net operation), and that the compression ratio and quality depend on the size of the basic element (the number of pixels in the cluster) and the amount of error tolerated during processing.

  18. A specific measurement matrix in compressive imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Fen; Wei, Ping; Ke, Jun

    2011-11-01

    Compressed sensing or compressive sampling (CS) is a new framework for simultaneous data sampling and compression which was proposed by Candes, Donoho, and Tao several years ago. Ever since the advent of the single-pixel camera, one of the CS applications - compressive imaging (CI, also referred to as feature-specific imaging) - has attracted the interest of numerous researchers. However, it is still a challenging problem to choose a simple and efficient measurement matrix for such a hardware system, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of an order-N Hadamard matrix, and we discuss the validity of the matrix theoretically. The advantages of the matrix are its universality and easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Because of the memory limitations of the hardware system and of the personal computer used to simulate the process, it is impossible to create a matrix large enough to process large-scale images directly. To solve this problem, a block-wise approach is introduced for processing large-scale images, and the experimental results confirm the validity of this method.
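
    A small sketch of the proposed measurement matrix and recovery chain: the odd-numbered rows of an order-N Hadamard matrix form the measurement matrix, and Orthogonal Matching Pursuit recovers a test vector. The block size N = 64 and the synthetic sparse signal are illustrative; the paper applies the matrix block-wise to natural images (SciPy and scikit-learn are assumed available).

```python
# Hedged sketch: build a measurement matrix from the odd-numbered rows of an
# order-N Hadamard matrix and recover a sparse test vector with OMP. The block
# size N = 64 and the synthetic 5-sparse signal are illustrative choices.
import numpy as np
from scipy.linalg import hadamard
from sklearn.linear_model import OrthogonalMatchingPursuit

N = 64
H = hadamard(N)              # entries are +1/-1, easy to realize optically
Phi = H[::2, :]              # rows 1, 3, 5, ... (odd rows, 1-based): N/2 x N

rng = np.random.default_rng(5)
x = np.zeros(N)
support = rng.choice(N, size=5, replace=False)
x[support] = rng.normal(size=5)      # 5-sparse test signal

y = Phi @ x                          # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```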

  19. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not be accurate to reflect the perceptual quality of compressed images, which is also affected dramatically by the characteristics of human visual system (HVS), such as masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of HVS, a randomness map is proposed to measure the masking effect and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of HVS. Since the masking effect highly depends on the structural randomness, the prediction error from neighborhood with a statistical model is used to measure the significance of masking. Meanwhile, the imperceptible signal with high frequency could be removed by preprocessing with low-pass filters. The relation is investigated between the distortions before and after masking effect, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170

  20. High-speed lossless compression for angiography image sequences

    NASA Astrophysics Data System (ADS)

    Kennedy, Jonathon M.; Simms, Michael; Kearney, Emma; Dowling, Anita; Fagan, Andrew; O'Hare, Neil J.

    2001-05-01

    High speed processing of large amounts of data is a requirement for many diagnostic quality medical imaging applications. A demanding example is the acquisition, storage and display of image sequences in angiography. The functional performance requirements for handling angiography data were identified. A new lossless image compression algorithm was developed, implemented in C++ for the Intel Pentium/MS-Windows environment and optimized for speed of operation. Speeds of up to 6M pixels per second for compression and 12M pixels per second for decompression were measured. This represents an improvement of up to 400% over the next best high-performance algorithm (LOCO-I) without significant reduction in compression ratio. Performance tests were carried out at St. James's Hospital using actual angiography data. Results were compared with the lossless JPEG standard and other leading methods such as JPEG-LS (LOCO-I) and the lossless wavelet approach proposed for JPEG 2000. Our new algorithm represents a significant improvement in the performance of lossless image compression technology without using specialized hardware. It has been applied successfully to image sequence decompression at video rate for angiography, one of the most challenging application areas in medical imaging.

  1. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  2. Compressive microscopic imaging with "positive-negative" light modulation

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing

    2016-07-01

    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation by a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by an order of magnitude compared with that obtained by a conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against instability of the light source and may be applied under very weak light conditions. The nature of the method and an analysis of its noise sources are discussed in depth. The realization of this technique represents a significant step toward practical applications of compressive microscopic imaging in the fields of biology and materials science.
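
    A toy simulation of the "positive-negative" idea: each binary DMD pattern and its complement are displayed in turn, the difference of the two single-pixel readings acts as a measurement with an effective ±1 pattern, and a simple correlation estimate recovers the scene. The 32x32 scene, the number of patterns, the multiplicative source-drift model, and the correlation-based reconstruction are illustrative simplifications, not the authors' compressive reconstruction.

```python
# Toy "positive-negative" (differential) single-pixel imaging sketch: each
# binary pattern and its complement are measured in turn; their difference is
# equivalent to a +/-1 pattern measurement, which suppresses the background
# term. A correlation estimate stands in for the compressive reconstruction.
import numpy as np

rng = np.random.default_rng(6)
side = 32
scene = np.zeros((side, side)); scene[10:22, 12:20] = 1.0     # toy object
scene_flat = scene.ravel()

n_meas = 3000
signals, patterns = [], []
for _ in range(n_meas):
    p = rng.integers(0, 2, size=side * side).astype(float)    # binary DMD pattern
    drift = 1.0 + 0.05 * rng.normal()                         # source instability
    y_pos = drift * p @ scene_flat
    y_neg = drift * (1.0 - p) @ scene_flat
    signals.append(y_pos - y_neg)                             # differential reading
    patterns.append(2.0 * p - 1.0)                            # effective +/-1 pattern

signals = np.array(signals)
patterns = np.array(patterns)

# Simple correlation (ghost-imaging style) estimate of the scene.
recon = (signals - signals.mean()) @ (patterns - patterns.mean(axis=0)) / n_meas
recon = recon.reshape(side, side)
print("correlation with ground truth:",
      np.corrcoef(recon.ravel(), scene_flat)[0, 1])
```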

  3. An innovative lossless compression method for discrete-color images.

    PubMed

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS data, and binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8-bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories (discrete-color and binary images) by about 90% in most cases, outperforming JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average. PMID:25330487

  4. Differentiation applied to lossless compression of medical images.

    PubMed

    Nijim, Y W; Stearns, S D; Mikhael, W B

    1996-01-01

    Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach, for the lossless compression of magnetic resonance (MR) images and ultrasonic images, is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, with 8-b medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images. PMID:18215936
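
    A hedged sketch of the differentiation idea: each pixel is predicted from a weighted combination of its causal neighbours and only the residue would be entropy coded (here the zeroth-order entropy is reported as a proxy for the coded size). The equal weights on the left and upper neighbours are an illustrative choice, not the coefficients used in the paper.

```python
# Hedged sketch of the differentiation idea: form weighted differences between
# a pixel and its causal neighbours, then entropy-code the residue (zeroth-order
# entropy is reported as a proxy). The 0.5/0.5 weights are illustrative only.
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
# Smooth toy "MR slice": low-frequency ramp plus mild noise, 8-bit.
x, y = np.meshgrid(np.arange(128), np.arange(128))
img = np.clip(x + y + rng.normal(scale=2.0, size=x.shape), 0, 255).astype(np.int64)

# Weighted difference: predict each pixel from its left and upper neighbours.
pred = np.zeros_like(img)
pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2      # weights (0.5, 0.5)
residue = img - pred

print("entropy of raw pixels :", round(entropy(img.ravel()), 2), "bits/pixel")
print("entropy of the residue:", round(entropy(residue.ravel()), 2), "bits/pixel")
```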

  5. Color transformation for the compression of CMYK images

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.

    1999-12-01

    A CMYK image is often viewed as a large amount of device-dependent data ready to be printed. In several circumstances, CMYK data needs to be compressed, but the conversion to and from device-independent spaces is imprecise at best. In this paper, with the goal of compressing CMYK images, color space transformations were studied. To be of practical importance, we developed a new transformation to a YYCC color space, which is device-independent and image-independent, i.e., a simple linear transformation between device-dependent color spaces. The transformation from CMYK to YYCC was studied extensively for image compression. For that purpose, a distortion measure that accounts for both device dependence and spatial visual sensitivity has been developed. It is shown that the transformation to YYCC consistently outperforms transformations to other device-dependent 4D color spaces such as YCbCrK, while being competitive with the image-dependent KLT-based approach. Other interesting conclusions were also drawn from the experiments, among them the fact that color transformations are not always advantageous over independent compression of the CMYK color planes and the fact that chrominance subsampling is rarely advantageous.

  6. Ultrasonic elastography using sector scan imaging and a radial compression.

    PubMed

    Souchon, Rémi; Soualmi, Lahbib; Bertrand, Michel; Chapelon, Jean-Yves; Kallel, Faouzi; Ophir, Jonathan

    2002-05-01

    Elastography is an imaging technique based on strain estimation in soft tissues under quasi-static compression. The stress is usually created by a compression plate, and the target is imaged by an ultrasonic linear array. This configuration is used for breast elastography, and has been investigated both theoretically and experimentally. Phenomena such as strain decay with tissue depth and strain concentrations have been reported. However, in some in vivo situations, such as prostate or blood vessel imaging, this set-up cannot be used. We propose a device to acquire in vivo elastograms of the prostate. The compression is applied by inflating a balloon that covers a transrectal sector probe. The 1D algorithm used to calculate the radial strain fails if the center of the imaging probe does not correspond to the center of the compressor. Therefore, experimental elastograms are calculated with a 2D algorithm that accounts for tangential displacements of the tissue. In this article, in order to gain a better understanding of the image formation process, the use of ultrasonic sector scans to image the radial compression of a target is investigated. Elastograms of homogeneous phantoms are presented and compared with simulated images; both show a strain decay with tissue depth. Experimental and simulated elastograms of a phantom that contains a hard inclusion are then presented, showing that strain concentrations occur as well. A method to compensate for strain decay, and therefore to increase the contrast of the strain elastograms, is proposed. It is expected that such information will help to interpret and possibly improve the elastograms obtained via radial compression. PMID:12160060

  7. Noise impact on error-free image compression.

    PubMed

    Lo, S B; Krasner, B; Mun, S K

    1990-01-01

    Some radiological images with different levels of noise have been studied using various decomposition methods incorporated with Huffman and Lempel-Ziv coding. When more correlations exist between pixels, these techniques can be made more efficient. However, additional noise disrupts the correlation between adjacent pixels and leads to a less compressed result. Hence, prior to a systematic compression in a picture archiving and communication system (PACS), two main issues must be addressed: the true information range which exists in a specific type of radiological image, and the costs and benefits of compression for the PACS. It is shown that with laser film digitized magnetic resonance images, 10-12 b are produced, although the lower 2-4 b show the characteristics of random noise. The addition of the noise bits is shown to adversely affect the amount of compression given by various reversible compression techniques. The sensitivity of different techniques to different levels of noise is examined in order to suggest strategies for dealing with noise. PMID:18222765

  8. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are firstly measured with the key-controlled pseudo-random measurement matrix constructed using logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including improved random pixel exchanging technique and fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
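
    A small sketch of the key-controlled measurement step: a logistic map seeded by a secret initial value generates the pseudo-random measurement matrix, so the projection simultaneously reduces the data volume and provides the initial encryption. The map parameter, matrix scaling, sizes, and the key value are illustrative assumptions; the fusion, pixel-exchange, and fractional Fourier transform stages are not shown.

```python
# Hedged sketch of the key-controlled measurement step: a logistic map seeded by
# a secret key generates a pseudo-random measurement matrix. Sizes, scaling, the
# map parameter, and the key are illustrative assumptions.
import numpy as np

def logistic_sequence(x0, mu, length, burn_in=1000):
    """Iterate x_{n+1} = mu * x_n * (1 - x_n), discarding transients."""
    x = x0
    out = np.empty(length)
    for i in range(burn_in + length):
        x = mu * x * (1.0 - x)
        if i >= burn_in:
            out[i - burn_in] = x
    return out

n, m = 256, 128                       # signal length and number of measurements
key = 0.3729154                       # secret initial condition (the "key")
seq = logistic_sequence(key, mu=3.99, length=m * n)
Phi = (seq.reshape(m, n) - 0.5) / np.sqrt(m)   # zero-mean, scaled measurement matrix

rng = np.random.default_rng(8)
sparse_column = np.zeros(n)
sparse_column[rng.choice(n, 10, replace=False)] = rng.normal(size=10)

measurements = Phi @ sparse_column    # compressed and keyed representation
print(measurements.shape)
```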

  9. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern image, for example, can be encoded in less than 50 bytes and yet be displayed at ever higher resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.

  10. Iterative compressive sampling for hyperspectral images via source separation

    NASA Astrophysics Data System (ADS)

    Kamdem Kuiteing, S.; Barni, Mauro

    2014-03-01

    Compressive Sensing (CS) is receiving increasing attention as a way to lower storage and compression requirements for on-board acquisition of remote-sensing images. In the case of multi- and hyperspectral images, however, exploiting the spectral correlation poses severe computational problems. Yet, exploiting such correlation would provide significantly better performance in terms of reconstruction quality. In this paper, we build on a recently proposed 2D CS scheme based on blind source separation to develop a computationally simple, yet accurate, prediction-based scheme for acquisition and iterative reconstruction of hyperspectral images in a CS setting. Preliminary experiments carried out on different hyperspectral images show that our approach yields a dramatic reduction of computational time while ensuring reconstruction performance similar to that of much more complicated 3D reconstruction schemes.

  11. Measurement kernel design for compressive imaging under device constraints

    NASA Astrophysics Data System (ADS)

    Shilling, Richard; Muise, Robert

    2013-05-01

    We look at the design of projective measurements for compressive imaging based upon image priors and device constraints. If one assumes that image patches from natural imagery can be modeled as a low rank manifold, we develop an optimality criterion for a measurement matrix based upon separating the canonical elements of the manifold prior. We then describe a stochastic search algorithm for finding the optimal measurements under device constraints based upon a subspace mismatch algorithm. The algorithm is then tested on a prototype compressive imaging device designed to collect an 8x4 array of projective measurements simultaneously. This work is based upon work supported by DARPA and the SPAWAR System Center Pacific under Contract No. N66001-11-C-4092. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

  12. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  13. Adaptive polyphase subband decomposition structures for image compression.

    PubMed

    Gerek, O N; Cetin, A E

    2000-01-01

    Subband decomposition techniques have been extensively used for data coding and analysis. In most filter banks, the goal is to obtain subsampled signals corresponding to different spectral regions of the original data. However, this approach leads to various artifacts in images having spatially varying characteristics, such as images containing text, subtitles, or sharp edges. In this paper, adaptive filter banks with perfect reconstruction property are presented for such images. The filters of the decomposition structure which can be either linear or nonlinear vary according to the nature of the signal. This leads to improved image compression ratios. Simulation examples are presented. PMID:18262904

  14. Compressed image transmission based on fountain codes

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Xinhong; Jiao, L. C.

    2011-11-01

    In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over wireless channels. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, are described in the paper, and the UEP scheme performs better than the EEP scheme. The proposed scheme not only can adaptively adjust the length of the fountain codes according to the channel loss rate but can also reconstruct the image even over a poor channel.

  15. Implementation of modified SPIHT algorithm for Compression of images

    NASA Astrophysics Data System (ADS)

    Kurume, A. V.; Yana, D. M.

    2011-12-01

    We present a throughput-efficient FPGA implementation of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm for image compression. SPIHT exploits the inherent redundancy among wavelet coefficients and is suited to both grey-level and color images. The SPIHT algorithm uses dynamic data structures, which hinder hardware realization. We have modified the basic SPIHT in two ways: first, by using static (fixed) mappings that represent the significant information, and second, by interchanging the sorting and refinement passes.

  16. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  17. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. Their reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences remain fixed, and the image restoration amounts to the inverse operation of an ill-posed problem without fixed rules. The super-resolution reconstruction ability for infrared images, the algorithms' application areas, and the stability of the reconstruction algorithms are therefore limited. To this end, we propose a super-resolution reconstruction method based on compressed sensing. In this method, we select a Toeplitz matrix as the measurement matrix and realize it by a phase-mask method. We investigate the complementary matching pursuit algorithm and select it as the recovery algorithm. In order to adapt to moving targets and decrease the imaging time, we make use of an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. Image comparisons and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.

  18. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  19. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  20. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources from optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, the JPEG2K-E IP core from Alma implements the compression algorithm [2], and Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  1. Compressing Image Data While Limiting the Effects of Data Losses

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2006-01-01

    ICER is computer software that can perform both lossless and lossy compression and decompression of gray-scale-image data using discrete wavelet transforms. Designed for primary use in transmitting scientific image data from distant spacecraft to Earth, ICER incorporates an error-containment scheme that limits the adverse effects of loss of data and is well suited to the data packets transmitted by deep-space probes. The error-containment scheme includes utilization of the algorithm described in "Partitioning a Gridded Rectangle Into Smaller Rectangles " (NPO-30479), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 56. ICER has performed well in onboard compression of thousands of images transmitted from the Mars Exploration Rovers.

  2. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
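
    A much-reduced sketch of concept (3), the successive-approximation quantization loop: the threshold starts at a power of two just below the largest coefficient magnitude and is halved each pass, with newly significant coefficients flagged. The zerotree prediction of concept (2) and the adaptive arithmetic coder of concept (4) are deliberately omitted, and the Haar decomposition of a toy image is an illustrative choice.

```python
# Much-reduced sketch of EZW's successive-approximation quantization: the
# threshold is halved each pass and newly significant coefficients are flagged.
# Zerotree prediction and the adaptive arithmetic coder are omitted.
import numpy as np
import pywt

rng = np.random.default_rng(9)
img = np.outer(np.hanning(64), np.hanning(64)) * 255 + rng.normal(size=(64, 64))

coeffs, slices = pywt.coeffs_to_array(pywt.wavedec2(img, 'haar', level=3))
flat = coeffs.ravel()

# Initial threshold: largest power of two not exceeding the maximum magnitude.
threshold = 2.0 ** np.floor(np.log2(np.abs(flat).max()))
significant = np.zeros(flat.shape, dtype=bool)

for pass_no in range(6):                       # a few dominant passes
    newly = (~significant) & (np.abs(flat) >= threshold)
    significant |= newly
    print(f"pass {pass_no}: threshold={threshold:.1f}, "
          f"new significant={int(newly.sum())}")
    threshold /= 2.0
```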

  3. An optimized hybrid encode based compression algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua

    2013-12-01

    Compression is a key procedure in hyperspectral image processing because the massive data volumes involved create great difficulty for data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding, which combines band-optimized grouping with the wavelet transform, is proposed. Given the characteristic correlation coefficients between adjacent spectral bands, an optimized band grouping and reference frame selection method is first utilized to group the bands adaptively. Then, according to the number of bands in each group, the redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and a minimum-residual-based linear prediction method. Embedded code streams are then obtained by encoding the residual images using an improved embedded-zerotree-wavelet-based SPIHT encoding method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves good performance in reconstructed image quality and computational complexity. The average peak signal-to-noise ratio (PSNR) is increased by 0.21-0.81 dB compared with other off-the-shelf algorithms at the same compression ratio.

  4. Influence of Lossy Compressed DEM on Radiometric Correction for Land Cover Classification of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Moré, G.; Pesquer, L.; Blanes, I.; Serra-Sagristà, J.; Pons, X.

    2012-12-01

    World coverage Digital Elevation Models (DEM) have progressively increased their spatial resolution (e.g., ETOPO, SRTM, or Aster GDEM) and, consequently, their storage requirements. On the other hand, lossy data compression facilitates accessing, sharing and transmitting large spatial datasets in environments with limited storage. However, since lossy compression modifies the original information, rigorous studies are needed to understand its effects and consequences. The present work analyzes the influence of DEM quality -modified by lossy compression-, on the radiometric correction of remote sensing imagery, and the eventual propagation of the uncertainty in the resulting land cover classification. Radiometric correction is usually composed of two parts: atmospheric correction and topographical correction. For topographical correction, DEM provides the altimetry information that allows modeling the incidence radiation on terrain surface (cast shadows, self shadows, etc). To quantify the effects of the DEM lossy compression on the radiometric correction, we use radiometrically corrected images for classification purposes, and compare the accuracy of two standard coding techniques for a wide range of compression ratios. The DEM has been obtained by resampling the DEM v.2 of Catalonia (ICC), originally having 15 m resolution, to the Landsat TM resolution. The Aster DEM has been used to fill the gaps beyond the administrative limits of Catalonia. The DEM has been lossy compressed with two coding standards at compression ratios 5:1, 10:1, 20:1, 100:1 and 200:1. The employed coding standards have been JPEG2000 and CCSDS-IDC; the former is an international ISO/ITU-T standard for almost any type of images, while the latter is a recommendation of the CCSDS consortium for mono-component remote sensing images. Both techniques are wavelet-based followed by an entropy-coding stage. Also, for large compression ratios, both techniques need a post processing for correctly

  5. A novel image fusion approach based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia

    2015-11-01

    Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. First, directly fusing the sensing measurements may produce uncertain results with high reconstruction error. Second, using a single fusion rule may result in blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve these problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. A dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate the low-pass subbands, while an edge-retention based fusion rule is proposed to fuse the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed contents and preserves the saliency structure. They also indicate that our proposed method achieves better visual quality than current state-of-the-art methods.

  6. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using the a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear for natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
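
    A hedged illustration of the splitting idea on a 1D total-variation denoising problem: an auxiliary variable d carries the ℓ1 term and a Bregman variable b enforces the constraint, so each iteration reduces to one linear solve plus a scalar shrinkage (soft-threshold). The 1D signal, the parameters mu and lambda, and the iteration count are toy assumptions standing in for the sonar-image setting.

```python
# Hedged sketch of Split Bregman on 1D total-variation denoising: the l1 term is
# decoupled through an auxiliary variable d and a Bregman variable b, so each
# iteration needs only a linear solve and a scalar shrinkage. Parameters and the
# toy signal are illustrative stand-ins for the sonar-image setting.
import numpy as np

def shrink(x, gamma):
    """Soft-threshold: closed-form solution of the decoupled l1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

rng = np.random.default_rng(10)
n = 200
clean = np.concatenate([np.zeros(70), np.ones(60), 0.4 * np.ones(70)])
f = clean + 0.1 * rng.normal(size=n)            # noisy observation

D = np.diff(np.eye(n), axis=0)                  # forward-difference operator, (n-1) x n
mu, lam = 10.0, 5.0
A = mu * np.eye(n) + lam * D.T @ D              # constant system matrix

u = f.copy()
d = np.zeros(n - 1)
b = np.zeros(n - 1)
for _ in range(100):
    u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # quadratic subproblem
    d = shrink(D @ u + b, 1.0 / lam)                       # l1 subproblem
    b = b + D @ u - d                                      # Bregman update

print("RMSE noisy   :", np.sqrt(np.mean((f - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((u - clean) ** 2)))
```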

  7. A compressed sensing approach for enhancing infrared imaging resolution

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng; Liu, Ji-ying

    2012-11-01

    This paper presents a novel approach for improving infrared imaging resolution by the use of Compressed Sensing (CS). Instead of sensing raw pixel data, the image sensor measures the compressed samples of the observed image through a coded aperture mask placed on the focal plane of the optical system, and then the image reconstruction can be conducted from these samples using an optimal algorithm. The resolution is determined by the size of the coded aperture mask other than that of the focal plane array (FPA). The attainable quality of the reconstructed image strongly depends on the choice of the coded aperture mode. Based on the framework of CS, we carefully design an optimum mask pattern and use a multiplexing scheme to achieve multiple samples. The gradient projection for sparse reconstruction (GPSR) algorithm is employed to recover the image. The mask radiation effect is discussed by theoretical analyses and numerical simulations. Experimental results are presented to show that the proposed method enhances infrared imaging resolution significantly and ensures imaging quality.

  8. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  9. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  10. GPU-specific reformulations of image compression algorithms

    NASA Astrophysics Data System (ADS)

    Matela, Jiří; Holub, Petr; Jirman, Martin; Årom, Martin

    2012-10-01

    Image compression has a number of applications in various fields, where processing throughput and/or latency is a crucial attribute and the main limitation of state-of-the-art implementations of compression algorithms. At the same time contemporary GPU platforms provide tremendous processing power but they call for specific algorithm design. We discuss key components of successful design of compression algorithms for GPUs and demonstrate this on JPEG and JPEG2000 implementations, each of which contains several types of algorithms requiring different approaches to efficient parallelization for GPUs. Performance evaluation of the optimized JPEG and JPEG2000 chain is used to demonstrate the importance of various aspects of GPU programming, especially with respect to real-time applications.

  11. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  12. Colored adaptive compressed imaging with a single photodiode.

    PubMed

    Yan, Yiyun; Dai, Huidong; Liu, Xingjiong; He, Weiji; Chen, Qian; Gu, Guohua

    2016-05-10

    Computational ghost imaging is commonly used to reconstruct grayscale images. Currently, however, there is little research aimed at reconstructing color images. In this paper, we theoretically and experimentally demonstrate a colored adaptive compressed imaging method. Benefiting from imaging in YUV color space, the proposed method adequately exploits the sparsity of the U, V components in the wavelet domain, the interdependence between luminance and chrominance, and human visual characteristics. The simulation and experimental results show that our method greatly reduces the measurements required and offers better image quality compared to recovering the red (R), green (G), and blue (B) components separately in RGB color space. As the application of a single photodiode increases, our method shows great potential in many fields. PMID:27168280
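
    A small numeric illustration of why the YUV-style space helps: with a representative RGB-to-YUV transform (BT.601 weights assumed here), the chrominance planes of a smooth colored scene need far fewer wavelet coefficients to capture most of their energy than the R, G, B planes do, which is the sparsity the adaptive measurement scheme exploits.

```python
# Hedged illustration: converting RGB to a luma/chroma (YUV-style) space
# concentrates detail in Y, so the U and V planes are much sparser in the
# wavelet domain. BT.601 weights are used as a representative transform.
import numpy as np
import pywt

rng = np.random.default_rng(11)
h = w = 64
# Toy colored scene: smooth color gradients plus mild texture.
r = np.tile(np.linspace(0, 1, w), (h, 1)) + 0.2 * rng.random((h, w))
g = np.tile(np.linspace(1, 0, w), (h, 1)) + 0.2 * rng.random((h, w))
b = 0.5 * np.ones((h, w)) + 0.2 * rng.random((h, w))

y = 0.299 * r + 0.587 * g + 0.114 * b
u = 0.492 * (b - y)
v = 0.877 * (r - y)

def wavelet_sparsity(plane, frac=0.95):
    """Number of wavelet coefficients needed to capture `frac` of the energy."""
    arr, _ = pywt.coeffs_to_array(pywt.wavedec2(plane, 'db2', level=3))
    mags = np.sort(np.abs(arr.ravel()))[::-1] ** 2
    return int(np.searchsorted(np.cumsum(mags) / mags.sum(), frac)) + 1

for name, plane in [("R", r), ("G", g), ("B", b), ("Y", y), ("U", u), ("V", v)]:
    print(name, wavelet_sparsity(plane))
```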

  13. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
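
    A minimal sketch of the mean-subtraction method described above: after the spatial wavelet transform of each band of a toy hyperspectral cube, the mean of every spatial plane of the spatially low-pass (LL) subband is computed and removed (it would be transmitted separately). The cube size, the single-level Haar transform, and the synthetic spectral offsets are illustrative assumptions.

```python
# Hedged sketch of "mean subtraction": each spatial plane of the spatially
# low-pass (LL) subband of a hyperspectral cube often has a large non-zero
# mean, so the plane means are removed before further coding. Cube size and
# the single-level Haar transform are illustrative simplifications.
import numpy as np
import pywt

rng = np.random.default_rng(12)
bands, h, w = 16, 64, 64
# Toy hyperspectral cube: each band has its own offset (spectral signature).
offsets = 100.0 + 50.0 * np.sin(np.linspace(0, np.pi, bands))
cube = offsets[:, None, None] + rng.normal(scale=5.0, size=(bands, h, w))

ll_planes = []
for band in cube:
    LL, _ = pywt.dwt2(band, 'haar')            # keep the spatially low-pass plane
    ll_planes.append(LL)
ll_stack = np.stack(ll_planes)                 # (bands, h/2, w/2)

plane_means = ll_stack.mean(axis=(1, 2))       # one mean per spectral plane
zero_mean_ll = ll_stack - plane_means[:, None, None]

print("plane means before:", np.round(plane_means[:4], 1))
print("plane means after :", np.round(zero_mean_ll.mean(axis=(1, 2))[:4], 6))
```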

  14. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    NASA Astrophysics Data System (ADS)

    Schindler, J.; Páta, P.; Klíma, M.; Fliegel, K.

    This paper deals with the compression of image data in applications in astronomy. Astronomical images have specific properties -- high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loève transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC) based on adaptive median regression.

  15. Colorimetric-spectral clustering: a tool for multispectral image compression

    NASA Astrophysics Data System (ADS)

    Ciprian, R.; Carbucicchio, M.

    2011-11-01

    In this work a new compression method for multispectral images is proposed: 'colorimetric-spectral clustering'. The basic idea arises from the well-known cluster analysis, a multivariate analysis which finds the natural links between objects, grouping them into clusters. In the colorimetric-spectral clustering compression method, the objects are the spectral reflectance factors of the multispectral images, which are grouped into clusters on the basis of their colour difference. In particular, two spectra can belong to the same cluster only if their colour difference is lower than a threshold fixed before starting the compression procedure. The performance of colorimetric-spectral clustering has been compared to the k-means cluster analysis, in which the Euclidean distance between spectra is considered, to principal component analysis and to the LabPQR method. Colorimetric-spectral clustering is able to preserve both the spectral and the colorimetric information of a multispectral image, allowing this information to be reproduced for all pixels of the image.
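
    The thresholded clustering rule described above can be sketched as follows; this is a minimal illustration assuming a generic colour-difference function (a crude stand-in for a CIE colour difference such as Delta E), with illustrative function names and toy data rather than the authors' implementation.

```python
import numpy as np

def cluster_spectra(spectra, colour_of, threshold):
    """Greedy clustering: a spectrum joins a cluster only if its colour
    difference from the cluster representative is below `threshold`.

    spectra: (n, n_wavelengths) reflectance factors
    colour_of: function mapping a spectrum to a colour vector
    Returns cluster labels and the representative spectrum of each cluster.
    """
    reps, labels = [], []
    for s in spectra:
        c = colour_of(s)
        for k, (rep_colour, _) in enumerate(reps):
            if np.linalg.norm(c - rep_colour) < threshold:   # colour-difference test
                labels.append(k)
                break
        else:
            reps.append((c, s))            # new cluster with this spectrum as representative
            labels.append(len(reps) - 1)
    return np.array(labels), [r[1] for r in reps]

# toy usage: a 3-band "colour" as a crude stand-in for a colorimetric transform
rng = np.random.default_rng(0)
spectra = rng.random((200, 31))
colour_of = lambda s: np.array([s[:10].mean(), s[10:20].mean(), s[20:].mean()])
labels, representatives = cluster_spectra(spectra, colour_of, threshold=0.05)
```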

  16. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing his or her purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher quality recommendations and better performance than do typical collaborative filtering and content-based filtering techniques.
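
    A minimal sketch of the neighbor-selection idea follows, assuming each user's purchase history is summarized by feature-cluster centroids; the particular inter-cluster distance and the function names here are illustrative guesses, not the paper's exact definitions.

```python
import numpy as np

def inter_cluster_distance(clusters_a, clusters_b):
    """A simple inter-cluster distance between two users' feature clusters:
    for each centroid of user A take the distance to the closest centroid of
    user B, average, then symmetrize.  Illustrative choice only."""
    d_ab = np.mean([np.min(np.linalg.norm(clusters_b - c, axis=1)) for c in clusters_a])
    d_ba = np.mean([np.min(np.linalg.norm(clusters_a - c, axis=1)) for c in clusters_b])
    return 0.5 * (d_ab + d_ba)

def nearest_neighbours(target_clusters, other_users, k=3):
    """Rank other users by inter-cluster distance to the target user."""
    scored = [(uid, inter_cluster_distance(target_clusters, clusters))
              for uid, clusters in other_users.items()]
    return sorted(scored, key=lambda t: t[1])[:k]

# toy usage: visual-feature centroids of purchased images, 8-D features
rng = np.random.default_rng(1)
target = rng.random((4, 8))
others = {uid: rng.random((rng.integers(2, 6), 8)) for uid in range(20)}
print(nearest_neighbours(target, others))
```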

  17. Lossless compression of stromatolite images: a biogenicity index?

    PubMed

    Corsetti, Frank A; Storrie-Lombardi, Michael C

    2003-01-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (delta(c), the log of the ratio of the compressibility of the matrix versus that of the target) of a putative abiotic test stromatolite is significantly less than the delta(c) of a putative biotic test stromatolite. There is a clear separation in delta(c) between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date, including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers. PMID:14994715
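
    The compressibility-index idea can be sketched with a generic lossless coder standing in for the one used in the study; zlib, the patch sizes, and the synthetic patches below are assumptions for illustration only.

```python
import zlib
import numpy as np

def compressibility(img_u8):
    """Compressibility of an 8-bit image patch: raw size / losslessly compressed size.
    zlib stands in here for whatever lossless coder is used."""
    raw = img_u8.tobytes()
    return len(raw) / len(zlib.compress(raw, level=9))

def compressibility_index(matrix_img, target_img):
    """delta_c: log of the ratio of matrix compressibility to target compressibility."""
    return np.log(compressibility(matrix_img) / compressibility(target_img))

# toy usage: a noisy "sediment matrix" patch vs. a strongly laminated "stromatolite" patch
rng = np.random.default_rng(2)
matrix_patch = rng.integers(0, 256, (128, 128), dtype=np.uint8)              # unstructured
laminated = (np.sin(np.arange(128) / 4)[None, :] * 60 + 128).astype(np.uint8)
target_patch = np.repeat(laminated, 128, axis=0)                             # layered, highly compressible
print(compressibility_index(matrix_patch, target_patch))
```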

  18. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP), are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.

  19. Compressive imaging and dual moire laser interferometer as metrology tools

    NASA Astrophysics Data System (ADS)

    Abolbashari, Mehrdad

    Metrology is the science of measurement and deals with measuring different physical aspects of objects. In this research the focus has been on two basic problems that metrologists encounter. The first problem is the trade-off between the range of measurement and the corresponding resolution; measurement of the physical parameters of a large object or scene is accompanied by a loss of detailed information about small regions of the object. Indeed, instruments and techniques that perform coarse measurements are different from those that make fine measurements. This problem persists in the field of surface metrology, which deals with accurate measurement and detailed analysis of surfaces. For example, laser interferometry is used for fine measurement (at the nanometer scale), while to measure the form of an object, which lies in the field of coarse measurement, a different technique such as the moire technique is used. We introduced a new technique to combine measurements from instruments with finer resolution and smaller measurement range with those with coarser resolution and larger measurement range. We first measure the form of the object with coarse measurement techniques and then make fine measurements of features in regions of interest. The second problem is the measurement conditions that lead to difficulties in measurement. These conditions include low-light conditions, a large range of intensity variation, hyperspectral measurement, etc. Under low-light conditions there is not enough light for the detector to detect light from the object, which results in poor measurements. A large range of intensity variation results in a measurement with some saturated regions on the camera as well as some dark regions. We use compressive-sampling-based imaging systems to address these problems. Single-pixel compressive imaging uses a single detector instead of an array of detectors and reconstructs a complete image after several measurements. In this research we examined compressive imaging for different

  20. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.

  1. Compression and storage of multiple images with modulating blazed gratings

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Tao, Shaohua

    2013-07-01

    A method for compressing, storing and reconstructing high-volume data is presented in this paper. Blazed gratings with different orientations and blaze angles are used to superpose many grayscale images, and customized spatial filters are used to selectively recover the corresponding images from the diffraction spots of the superposed images. The simulation shows that as many as 198 images with a size of 512 pixels × 512 pixels can be stored in a diffractive optical element (DOE) with complex amplitudes of the same size, and the images recovered from the DOE are discernible with high visual quality. Optical encryption/decryption can also be added to the digitized DOE to enhance the security of the stored data.

  2. Real-Time Digital Compression Of Television Image Data

    NASA Technical Reports Server (NTRS)

    Barnes, Scott P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1990-01-01

    Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements predictive coding process, in which each picture element (pixel) is predicted from values of prior neighboring pixels, and coded transmission expresses difference between actual and predicted current values. Combines differential pulse-code modulation process with non-linear, nonadaptive predictor, nonuniform quantizer, and multilevel Huffman encoder.
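
    A minimal DPCM sketch of the predictive-coding idea is given below; it uses a simple previous-pixel predictor and a uniform quantizer (the flight system combined a non-linear predictor, a non-uniform quantizer, and Huffman coding), and all names and parameters are illustrative.

```python
import numpy as np

def dpcm_encode(row, step=4):
    """1-D DPCM along a scan line: predict each pixel from the previous
    reconstructed pixel and transmit the quantized difference."""
    recon_prev = int(row[0])
    symbols = [int(row[0])]              # first pixel sent verbatim
    for x in row[1:]:
        diff = int(x) - recon_prev
        q = int(np.round(diff / step))   # quantized prediction error
        symbols.append(q)
        recon_prev = recon_prev + q * step
    return symbols

def dpcm_decode(symbols, step=4):
    recon = [symbols[0]]
    for q in symbols[1:]:
        recon.append(recon[-1] + q * step)
    return np.array(recon)

row = np.array([100, 102, 105, 110, 109, 107, 120, 119], dtype=np.uint8)
decoded = dpcm_decode(dpcm_encode(row))
print(np.abs(decoded - row.astype(int)).max())   # reconstruction error stays within step/2
```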

  3. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  4. HVS-motivated quantization schemes in wavelet image compression

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj N.

    1996-11-01

    Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high-performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system (transform, quantization, and entropy coding) should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE-quantizer, and four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
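
    The step-size-setting issue can be illustrated with a dead-zone scalar quantizer applied per subband; the step values below are arbitrary examples of HVS-style weighting (coarser steps for subbands the eye is less sensitive to), not the values derived in the paper.

```python
import numpy as np

def quantize_subband(coeffs, step):
    """Uniform scalar quantizer with a dead zone around zero, applied to one
    wavelet subband; per-subband step sizes are where HVS weighting enters."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize_subband(indices, step):
    """Mid-point reconstruction of the dead-zone quantizer."""
    return np.sign(indices) * (np.abs(indices) + 0.5) * step * (indices != 0)

# illustrative step sizes: finer for the low-pass band, coarser for diagonal detail
steps = {"LL": 2.0, "LH": 6.0, "HL": 6.0, "HH": 12.0}
rng = np.random.default_rng(3)
subbands = {name: rng.normal(0, 20, (32, 32)) for name in steps}
recon = {name: dequantize_subband(quantize_subband(c, steps[name]), steps[name])
         for name, c in subbands.items()}
mse = {name: float(np.mean((subbands[name] - recon[name]) ** 2)) for name in steps}
print(mse)   # coarser steps trade MSE for bit rate in the less visible subbands
```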

  5. Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.

    2001-01-01

    GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large area detector arrays and readouts. Timely high spatial resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in terms of providing many scientific returns that traditionally can only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly high vertical density wind over the full earth disk is revolutionary. However, these new technologies bring forth many challenges for data transmission, archiving, and geophysical data processing. In this paper, we will focus on the aspect of data volume and downlink issues by conducting a GIFTS data compression experiment. We will discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and compression of uncalibrated and un-normalized interferograms. The effects of compression on the degradation of the signal and noise reduction in interferogram and spectral domains will be highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.
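
    The principal-component idea mentioned above can be sketched in a few lines of NumPy; the synthetic data, component count, and function names are assumptions for illustration, not the GIFTS processing chain.

```python
import numpy as np

def pca_compress(spectra, n_components):
    """Compress a set of spectra or interferograms by projecting onto the
    leading principal components.  `spectra` has shape (n_samples, n_channels).
    Returns the mean, the component basis and the low-dimensional scores."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # SVD of the centered data gives the principal directions in Vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # (n_components, n_channels)
    scores = centered @ basis.T                # (n_samples, n_components)
    return mean, basis, scores

def pca_reconstruct(mean, basis, scores):
    return scores @ basis + mean

rng = np.random.default_rng(4)
# synthetic data with only a few underlying modes, mimicking well-correlated soundings
modes = rng.normal(size=(5, 512))
data = rng.normal(size=(1000, 5)) @ modes + 0.01 * rng.normal(size=(1000, 512))
mean, basis, scores = pca_compress(data, n_components=5)
err = np.sqrt(np.mean((pca_reconstruct(mean, basis, scores) - data) ** 2))
print(err)   # small: 5 scores per sample stand in for 512 channels
```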

  6. Hyperspectral pixel classification from coded-aperture compressive imaging

    NASA Astrophysics Data System (ADS)

    Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.

    2012-06-01

    This paper describes a new approach and its associated theoretical performance guarantees for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyper-spectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and it is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance preservation condition satisfied by the multiple shot CASSI system and depend on the number of measurements collected, code aperture pattern, and similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.

  7. Lossless compression of the geostationary imaging Fourier transform spectrometer (GIFTS) data via predictive partitioned vector quantization

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Allen H.-L.; Smuga-Otto, Maciek; Knuteson, Robert; Revercomb, Henry E.; Smith, William L., Sr.

    2007-09-01

    The Geostationary Imaging Fourier Transform Spectrometer (GIFTS), as part of NASA's New Millennium Program, is an advanced instrument to provide high-temporal-resolution measurements of atmospheric temperature and water vapor, which will greatly facilitate the detection of rapid atmospheric changes associated with destructive weather events, including tornadoes, severe thunderstorms, flash floods, and hurricanes. The Committee on Earth Science and Applications from Space under the National Academy of Sciences recommended that NASA and NOAA complete the fabrication, testing, and space qualification of the GIFTS instrument and that they support the international effort to launch GIFTS by 2008. Lossless data compression is critical for the overall success of the GIFTS experiment, or any other very high data rate experiment where the data is to be disseminated to the user community in real-time and archived for scientific studies and climate assessment. In general, lossless data compression is needed for high data rate hyperspectral sounding instruments such as GIFTS for (1) transmitting the data down to the ground within the bandwidth capabilities of the satellite transmitter and ground station receiving system, (2) compressing the data at the ground station for distribution to the user community (as is traditionally performed with GOES data via satellite rebroadcast), and (3) archival of the data without loss of any information content so that it can be used in scientific studies and climate assessment for many years after the date of the measurements. In this paper we study lossless compression of GIFTS data that has been collected as part of the calibration or ground based tests that were conducted in 2006. The predictive partitioned vector quantization (PPVQ) is investigated for higher lossless compression performance. PPVQ consists of linear prediction, channel partitioning and vector quantization. It yields an average compression ratio of 4.65 on the GIFTS test

  8. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
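
    A small sketch of the remapping-then-entropy idea follows, using horizontal differencing as the remapping and zeroth-order entropy as the yardstick; this is an illustrative reduction of the paper's adaptive approach, with made-up data and function names.

```python
import numpy as np

def entropy_bits(values):
    """Zeroth-order entropy in bits/sample of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dpcm_remap(img):
    """Horizontal differential remapping: replace each pixel (except the first
    of each row) by its difference from the left neighbour.  Bilinear
    interpolation or block-based linear prediction could be substituted."""
    out = img.astype(np.int16).copy()
    out[:, 1:] = img[:, 1:].astype(np.int16) - img[:, :-1].astype(np.int16)
    return out

rng = np.random.default_rng(5)
# smooth synthetic "satellite" image: low-frequency ramp plus mild noise
x, y = np.meshgrid(np.arange(256), np.arange(256))
img = ((x + y) // 4 + rng.integers(0, 4, (256, 256))).astype(np.uint8)
print(entropy_bits(img), entropy_bits(dpcm_remap(img)))   # remapped entropy is lower
```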

  9. Compressed sensing sparse reconstruction for coherent field imaging

    NASA Astrophysics Data System (ADS)

    Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen

    2016-04-01

    Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).

  10. Adaptive predictive multiplicative autoregressive model for medical image compression.

    PubMed

    Chen, Z D; Chang, R F; Kuo, W J

    1999-02-01

    In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
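
    The adaptive predictor selection can be sketched as below, assuming the seven JPEG lossless-mode predictors plus a simple local-mean predictor and a per-block choice by summed absolute residual; the local-mean definition and the block handling here are illustrative, not the APMAR specification.

```python
import numpy as np

def predictors(a, b, c):
    """The seven JPEG lossless-mode predictors (a = left, b = above,
    c = upper-left) plus a local-mean predictor, as in an APMAR-style scheme."""
    return [a, b, c, a + b - c, a + (b - c) // 2, b + (a - c) // 2,
            (a + b) // 2, (a + b + c) // 3]        # last entry: illustrative local mean

def best_predictor_for_block(block_ext):
    """Choose, for one image block, the predictor with the smallest summed
    absolute residual.  `block_ext` includes one extra row/column of causal
    context on the top and left."""
    h, w = block_ext.shape
    costs = np.zeros(8)
    for i in range(1, h):
        for j in range(1, w):
            a, b, c = int(block_ext[i, j-1]), int(block_ext[i-1, j]), int(block_ext[i-1, j-1])
            costs += np.abs(int(block_ext[i, j]) - np.array(predictors(a, b, c)))
    return int(np.argmin(costs)), costs

rng = np.random.default_rng(6)
block = np.cumsum(rng.integers(0, 3, (17, 17)), axis=1).astype(np.int32)
print(best_predictor_for_block(block))
```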

  11. Block-based image compression with parameter-assistant inpainting.

    PubMed

    Xiong, Zhiwei; Sun, Xiaoyan; Wu, Feng

    2010-06-01

    This correspondence presents an image compression approach that integrates our proposed parameter-assistant inpainting (PAI) to exploit visual redundancy in color images. In this scheme, we study different distributions of image regions and represent them with a model class. Based on that, an input image at the encoder side is divided into featured and non-featured regions at block level. The featured blocks fitting the predefined model class are coded by a few parameters, whereas the non-featured blocks are coded traditionally. At the decoder side, the featured regions are restored through PAI relying on both delivered parameters and surrounding information. Experimental results show that our method outperforms JPEG in featured regions by an average bit-rate saving of 76% at similar perceptual quality levels. PMID:20215076

  12. Compressed Sensing Inspired Image Reconstruction from Overlapped Projections

    PubMed Central

    Yang, Lin; Lu, Yang; Wang, Ge

    2010-01-01

    The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, the conventional reconstruction approach (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent an imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurement carries less information than the traditional line integrals. To meet these challenges, we propose a compressive sensing-(CS-) based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). Then, we demonstrated the feasibility of this algorithm in numerical tests. PMID:20689701

  13. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
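
    The decoding step, inpainting with the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored edge values held fixed. A minimal Jacobi-iteration sketch follows; the boundary handling, iteration count, and toy mask are assumptions for illustration.

```python
import numpy as np

def homogeneous_diffusion_inpaint(known, mask, n_iter=2000):
    """Fill unknown pixels by solving the Laplace equation with the known
    pixels held fixed (steady state of homogeneous diffusion), via simple
    Jacobi iterations.  `mask` is True where the value is known."""
    u = known.astype(float).copy()
    u[~mask] = known[mask].mean()               # neutral initialisation
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[~mask] = avg[~mask]                   # update only the unknown pixels
    return u

# toy usage: keep values only along a few "edge" columns and reconstruct the rest
x = np.linspace(0, 1, 64)
img = np.tile(x, (64, 1)) * 255
mask = np.zeros_like(img, dtype=bool)
mask[:, [0, 31, 32, 63]] = True                 # stored edge locations and values
recon = homogeneous_diffusion_inpaint(img * mask, mask)
print(np.abs(recon - img).mean())
```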

  14. Fast Second Degree Total Variation Method for Image Compressive Sensing

    PubMed Central

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferably equivalent formulation of the HDTV2 functional is derived, which can be formulated as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed. PMID:26361008

  15. Compressive sensing for direct millimeter-wave holographic imaging.

    PubMed

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang

    2015-04-10

    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from the problem of high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following the scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue the applicability for near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variation, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear array and two-dimensional array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate. PMID:25967314

  16. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
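
    The flavour of an adaptive linear predictive compressor can be sketched as follows; this sign-LMS band predictor is only an illustration in the spirit of the "Fast Lossless" approach, it is not the Recommended Standard's predictor, and all names and parameters are assumptions.

```python
import numpy as np

def adaptive_band_prediction(cube, n_prev=3, mu=1e-4):
    """Illustrative adaptive linear prediction across spectral bands: each sample
    of band z is predicted from the co-located samples of the previous `n_prev`
    bands, with prediction weights updated by a sign-LMS rule.  Returns rounded
    residuals, which an entropy coder (e.g. Golomb/Rice) would then encode."""
    n_bands, rows, cols = cube.shape
    residuals = np.zeros_like(cube, dtype=np.int32)
    residuals[:n_prev] = cube[:n_prev]                     # first bands sent as-is
    w = np.ones(n_prev) / n_prev                           # initial weights
    for z in range(n_prev, n_bands):
        for i in range(rows):
            for j in range(cols):
                context = cube[z - n_prev:z, i, j].astype(float)
                pred = float(w @ context)
                err = float(cube[z, i, j]) - pred
                residuals[z, i, j] = int(np.round(err))
                w += mu * np.sign(err) * context           # sign-LMS weight update
    return residuals

rng = np.random.default_rng(7)
base = rng.integers(0, 200, (1, 32, 32))
cube = (base * np.linspace(1.0, 1.2, 10)[:, None, None]).astype(np.int32)
res = adaptive_band_prediction(cube)
print(np.abs(res[3:]).mean())    # residuals are small for well-correlated bands
```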

  17. Three-dimensional active imaging using compressed gating

    NASA Astrophysics Data System (ADS)

    Dai, Huidong; He, Weiji; Miao, Zhuang; Chen, Yunfei; Gu, Guohua

    2013-09-01

    Due to the numerous applications employing 3D data, such as target detection and recognition, three-dimensional (3D) active imaging has drawn great interest recently. Employing a pulsed laser as the illumination source and an intensified sensor as the image sensor, the 3D active imaging method emits and then records laser pulses to infer the distance between the target and the sensor. One of the limitations of 3D active imaging is that acquiring a depth map with high depth resolution requires a full range sweep, as well as a large number of detections, which limits the detection speed. In this work, a compressed gating method combining 3D active imaging and compressive sensing (CS) is proposed on the basis of the random gating method to achieve depth map reconstruction from a significantly reduced number of detections. Employing random sequences to control the sensor gate, this method estimates the distance and reconstructs the depth map in the framework of CS. A simulation was carried out to estimate the performance of the proposed method. A scene generated in 3ds Max was employed as the target and a reconstruction algorithm was used to recover the depth map in the simulation. The simulation results have shown that the proposed method can reconstruct the depth map with slight reconstruction error using as few as 7% of the detections that the conventional method requires, and achieve perfect reconstruction from about 10% of the detections under the same depth resolution. They also indicate that the number of detections required is affected by the depth resolution, by noise generated by a variety of sources, and by the complexity of the target scene. According to the simulation results, the compressed gating method is able to be used in the case of long range with high depth resolution and is robust to various types of noise. In addition, the method is able to be used for multiple-return signal measurement without an increase in the number of detections.

  18. Compound image compression for real-time computer screen image transmission.

    PubMed

    Lin, Tony; Hao, Pengwei

    2005-08-01

    We present a compound image compression algorithm for real-time applications of computer screen image transmission. It is called shape primitive extraction and coding (SPEC). Real-time image transmission requires that the compression algorithm should not only achieve high compression ratio, but also have low complexity and provide excellent visual quality. SPEC first segments a compound image into text/graphics pixels and pictorial pixels, and then compresses the text/graphics pixels with a new lossless coding algorithm and the pictorial pixels with the standard lossy JPEG, respectively. The segmentation first classifies image blocks into picture and text/graphics blocks by thresholding the number of colors of each block, then extracts shape primitives of text/graphics from picture blocks. Dynamic color palette that tracks recent text/graphics colors is used to separate small shape primitives of text/graphics from pictorial pixels. Shape primitives are also extracted from text/graphics blocks. All shape primitives from both block types are losslessly compressed by using a combined shape-based and palette-based coding algorithm. Then, the losslessly coded bitstream is fed into a LZW coder. Experimental results show that the SPEC has very low complexity and provides visually lossless quality while keeping competitive compression ratios. PMID:16121449

  19. Recommendations

    ERIC Educational Resources Information Center

    Brazelton, G. Blue; Renn, Kristen A.; Stewart, Dafina-Lazarus

    2015-01-01

    In this chapter, the editors provide a summary of the information shared in this sourcebook about the success of students who have minoritized identities of sexuality or gender and offer recommendations for policy, practice, and further research.

  20. Development of a compressive sampling hyperspectral imager prototype

    NASA Astrophysics Data System (ADS)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the possibility of sampling signals at a lower rate than traditional sampling theory allows. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main disadvantage of CS is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and two-dimensional detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  1. Contour-Based Image Compression for Fast Real-Time Coding

    NASA Astrophysics Data System (ADS)

    Vasilyev, Sergei

    A new method is proposed that simultaneously contours the image content and then converts the contours to a compact chained bit-flow, thus providing efficient spatial image compression. It is computationally inexpensive and can be applied directly to compressing high-resolution bitonal imagery, allowing it to approach the ultimate speed performance. Combining the method with other compression schemes, for example Huffman-type or arithmetic encoding, provides better lossless compression than the current telecommunication compression standards. The application of the method to compressing color images for remote sensing and mapping, as well as a lossy implementation of the method, is discussed.

  2. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.

  3. Information-theoretic assessment of imaging systems via data compression

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano

    2001-12-01

    This work focuses on estimating the information conveyed to a user by either multispectral or hyperspectral image data. The goal is establishing the extent to which an increase in spectral resolution can increase the amount of usable information. As a matter of fact, a tradeoff exists between spatial and spectral resolution, due to physical constraints of sensors imaging with a prefixed SNR. Lossless data compression is exploited to measure the useful information content. In fact, the bit rate achieved by the reversible compression process takes into account both the contribution of the observation noise, i.e., information regarded as statistical uncertainty whose relevance to a user is null, and the intrinsic information of hypothetically noise-free data. An entropic model of the image source is defined and, once the standard deviation of the noise, assumed to be Gaussian and possibly nonwhite, has been preliminarily estimated, such a model is inverted to yield an estimate of the information content of the noise-free source from the code rate. Results both of noise and of information assessment are reported and discussed on synthetic noisy images, on Landsat TM data, and on AVIRIS data.
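
    A crude sketch of the idea, useful information approximated as the reversible code rate minus the entropy contributed by the estimated Gaussian noise, is given below; the paper inverts a proper entropic source model, whereas this illustration simply subtracts the noise entropy, and zlib plus the synthetic image are stand-ins.

```python
import zlib
import numpy as np

def lossless_rate_bits(img_u8):
    """Bits per pixel achieved by a reversible coder (zlib as a stand-in)."""
    return 8 * len(zlib.compress(img_u8.tobytes(), 9)) / img_u8.size

def noise_entropy_bits(sigma):
    """Entropy of Gaussian noise quantized with unit step,
    approx. 0.5*log2(2*pi*e*sigma^2) bits/sample (valid for sigma >~ 1)."""
    return 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)

rng = np.random.default_rng(8)
x, y = np.meshgrid(np.arange(256), np.arange(256))
clean = 64 + 64 * np.sin(x / 20) * np.cos(y / 25)
sigma = 4.0
noisy = np.clip(clean + rng.normal(0, sigma, clean.shape), 0, 255).astype(np.uint8)
rate = lossless_rate_bits(noisy)
# crude estimate of the information of the noise-free source
print(rate, noise_entropy_bits(sigma), rate - noise_entropy_bits(sigma))
```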

  4. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  5. Degradative encryption: An efficient way to protect SPIHT compressed images

    NASA Astrophysics Data System (ADS)

    Xiang, Tao; Qu, Jinyu; Yu, Chenyun; Fu, Xinwen

    2012-11-01

    Degradative encryption, a new selective image encryption paradigm, is proposed to encrypt only a small part of image data to make the detail blurred but keep the skeleton discernible. The efficiency is further optimized by combining compression and encryption. A format-compliant degradative encryption algorithm based on set partitioning in hierarchical trees (SPIHT) is then proposed, and the scheme is designed to work in progressive mode for gaining a tradeoff between efficiency and security. Extensive experiments are conducted to evaluate the strength and efficiency of the scheme, and it is found that less than 10% data need to be encrypted for a secure degradation. In security analysis, the scheme is verified to be immune to cryptographic attacks as well as those adversaries utilizing image processing techniques. The scheme can find its wide applications in online try-and-buy service on mobile devices, searchable multimedia encryption in cloud computing, etc.

  6. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of single image, overcame the disadvantages of "ghost artifacts" and bulk calculating costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS on the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% pixels. A small difference was found between the correction results using 100% pixels and the reconstruction results using 40% pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  7. Compressed Sensing Photoacoustic Imaging Based on Fast Alternating Direction Algorithm

    PubMed Central

    Liu, Xueyan; Peng, Dong; Guo, Wei; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

    Photoacoustic imaging (PAI) has been employed to reconstruct endogenous optical contrast present in tissues. At the cost of longer calculations, a compressive sensing reconstruction scheme can achieve artifact-free imaging with fewer measurements. In this paper, an effective acceleration framework using the alternating direction method (ADM) was proposed for recovering images from limited-view and noisy observations. Results of the simulation demonstrated that the proposed algorithm could perform favorably in comparison to two recently introduced algorithms in computational efficiency and data fidelity. In particular, it ran considerably faster than these two methods. PAI with ADM can improve convergence speed with fewer ultrasonic transducers, enabling a high-performance and cost-effective PAI system for biomedical applications. PMID:23365553

  8. Lensfree color imaging on a nanostructured chip using compressive decoding

    PubMed Central

    Khademhosseinieh, Bahar; Biener, Gabriel; Sencan, Ikbal; Ozcan, Aydogan

    2010-01-01

    We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability. PMID:21173866

  9. Compressive adaptive ghost imaging via sharing mechanism and fellow relationship.

    PubMed

    Huo, Yaoran; He, Hongjie; Chen, Fan

    2016-04-20

    For lower sampling rate and better imaging quality, a compressive adaptive ghost imaging is proposed by adopting the sharing mechanism and fellow relationship in the wavelet tree. The sharing mechanisms, including intrascale and interscale sharing mechanisms, and fellow relationship are excavated from the wavelet tree and utilized for sampling. The shared coefficients, which are part of the approximation subband, are localized according to the parent coefficients and sampled based on the interscale sharing mechanism and fellow relationship. The sampling rate can be reduced owing to the fact that some shared coefficients can be calculated by adopting the parent coefficients and the sampled sum of shared coefficients. According to the shared coefficients and parent coefficients, the proposed method predicts the positions of significant coefficients and samples them based on the intrascale sharing mechanism. The ghost image, reconstructed by the significant coefficients and the coarse image at the given largest scale, achieves better quality because the significant coefficients contain more detailed information. The simulations demonstrate that the proposed method improves the imaging quality at the same sampling rate and also achieves a lower sampling rate for the same imaging quality for different types of target object images in noise-free and noisy environments. PMID:27140111

  10. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, e.g., selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality and the fact that these kinds of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  11. Double-image encryption scheme combining DWT-based compressive sensing with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Yang, Jianping; Tan, Changfa; Pan, Shumin; Zhou, Zhihong

    2015-11-01

    A new discrete fractional random transform based on two circular matrices is designed and a novel double-image encryption-compression scheme is proposed by combining compressive sensing with discrete fractional random transform. The two random circular matrices and the measurement matrix utilized in compressive sensing are constructed by using a two-dimensional sine Logistic modulation map. Two original images can be compressed, encrypted with compressive sensing and connected into one image. The resulting image is re-encrypted by Arnold transform and the discrete fractional random transform. Simulation results and security analysis demonstrate the validity and security of the scheme.

  12. Recommending images of user interests from the biomedical literature

    NASA Astrophysics Data System (ADS)

    Clukey, Steven; Xu, Songhua

    2013-03-01

    Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of their interests. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time and energy consuming. A better system would be one that can automatically deliver relevant content to a researcher without having the end user manually manifest one's search intent and interests via search queries. Such a computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to the person's interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast amount of the biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.

  13. Spine imaging after lumbar disc replacement: pitfalls and current recommendations

    PubMed Central

    Robinson, Yohan; Sandén, Bengt

    2009-01-01

    Background Most lumbar artificial discs are still composed of stainless steel alloys, which prevents adequate postoperative diagnostic imaging of the operated region when using magnetic resonance imaging (MRI). Thus patients with postoperative radicular symptoms or claudication after stainless steel implants often require alternative diagnostic procedures. Methods Possible complications of lumbar total disc replacement (TDR) are reviewed from the available literature and imaging recommendations given with regard to implant type. Two illustrative cases are presented in figures. Results Access-related complications, infections, implant wear, loosening or fracture, polyethylene inlay dislodgement, facet joint hypertrophy, central stenosis, and ankylosis of the operated segment can be visualised both in titanium and stainless steel implants, but require different imaging modalities due to magnetic artifacts in MRI. Conclusion Alternative radiographic procedures should be considered when evaluating patients following TDR. Postoperative complications following lumbar TDR, including spinal stenosis causing radiculopathy and implant loosening, can be visualised by myelography and radionuclide techniques as an adjunct to plain film radiographs. Even in the presence of massive stainless steel TDR implants, lumbar radicular stenosis and implant loosening can be visualised if myelography and radionuclide techniques are applied. PMID:19619332

  14. Compressive fluorescence microscopy for biological and hyperspectral imaging.

    PubMed

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed Shams; Candes, Emmanuel; Dahan, Maxime

    2012-06-26

    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices--especially in optics--have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy. PMID:22689950

  15. Real-time Image Generation for Compressive Light Field Displays

    NASA Astrophysics Data System (ADS)

    Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R.

    2013-02-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  16. Block-based reconstructions for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Correa, Claudia V.; Arguello, Henry; Arce, Gonzalo R.

    2013-05-01

    Coded Aperture Snapshot Spectral Imaging system (CASSI) captures spectral information of a scene using a reduced amount of focal plane array (FPA) projections. These projections are highly structured and localized such that each measurement contains information of a small portion of the data cube. Compressed sensing reconstruction algorithms are then used to recover the underlying 3-dimensional (3D) scene. The computational burden to recover a hyperspectral scene in CASSI is overwhelming for some applications such that reconstructions can take hours in desktop architectures. This paper presents a new method to reconstruct a hyperspectral signal from its compressive measurements using several overlapped block reconstructions. This approach exploits the structure of the CASSI sensing matrix to separately reconstruct overlapped regions of the 3D scene. The resultant reconstructions are then assembled to obtain the full recovered data cube. Typically, block-processing causes undesired artifacts in the recovered signal. Vertical and horizontal overlaps between adjacent blocks are then used to avoid these artifacts and increase the quality of reconstructed images. The reconstruction time and the quality of the reconstructed images are calculated as a function of the block-size and the amount of overlapped regions. Simulations show that the quality of the reconstructions is increased up to 6 dB and the reconstruction time is reduced up to 4 times when using block-based reconstruction instead of full data cube recovery at once. The proposed method is suitable for multi-processor architectures in which each core recovers one block at a time.
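
    The assembly of overlapped block reconstructions can be sketched as below, averaging wherever blocks overlap to suppress block-boundary artifacts; the overlap rule, block sizes, and data here are illustrative assumptions, and the CASSI reconstruction of each individual block is not reproduced.

```python
import numpy as np

def assemble_overlapped_blocks(blocks, positions, full_shape):
    """Assemble independently reconstructed, overlapping blocks into the full
    image by averaging wherever blocks overlap.

    blocks: list of 2D arrays; positions: list of (row, col) top-left corners.
    """
    acc = np.zeros(full_shape)
    weight = np.zeros(full_shape)
    for blk, (r, c) in zip(blocks, positions):
        h, w = blk.shape
        acc[r:r+h, c:c+w] += blk
        weight[r:r+h, c:c+w] += 1.0
    return acc / np.maximum(weight, 1.0)

# toy usage: four overlapping 40x40 blocks covering a 64x64 image with 16-pixel overlap
rng = np.random.default_rng(9)
full = rng.random((64, 64))
positions = [(0, 0), (0, 24), (24, 0), (24, 24)]
blocks = [full[r:r+40, c:c+40] + 0.01 * rng.normal(size=(40, 40)) for r, c in positions]
recon = assemble_overlapped_blocks(blocks, positions, full.shape)
print(np.abs(recon - full).max())
```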

  17. Objective index of image fidelity for JPEG2000 compressed body CT images

    SciTech Connect

    Kim, Kil Joong; Lee, Kyoung Ho; Kang, Heung-Sik; Kim, So Yeon; Kim, Young Hoon; Kim, Bohyoung; Seo, Jinwook; Mantiuk, Rafal

    2009-07-15

    Compression ratio (CR) has been the de facto standard index of compression level for medical images. The aim of this study is to evaluate the CR, peak signal-to-noise ratio (PSNR), and a perceptual quality metric (high-dynamic range visual difference predictor, HDR-VDP) as objective indices of image fidelity for Joint Photographic Experts Group (JPEG) 2000 compressed body computed tomography (CT) images, from the viewpoint of a visually lossless compression approach. A total of 250 body CT images obtained with five different scan protocols (5-mm-thick abdomen, 0.67-mm-thick abdomen, 5-mm-thick lung, 0.67-mm-thick lung, and 5-mm-thick low-dose lung) were compressed to one of five CRs (reversible, 6:1, 8:1, 10:1, and 15:1). The PSNR and HDR-VDP values were calculated for the 250 pairs of the original and compressed images. By alternately displaying an original and its compressed image on the same monitor, five radiologists independently determined if the pair was distinguishable or indistinguishable. The kappa statistic for the interobserver agreement among the five radiologists' responses was 0.70. According to the radiologists' responses, the number of distinguishable image pairs tended to significantly differ among the five scan protocols at 6:1-10:1 compressions (Fisher-Freeman-Halton exact tests). Spearman's correlation coefficients between each of the CR, PSNR, and HDR-VDP and the number of radiologists who responded as distinguishable were 0.72, -0.77, and 0.85, respectively. Using the radiologists' pooled responses as the reference standards, the areas under the receiver-operating-characteristic curves for the CR, PSNR, and HDR-VDP were 0.87, 0.93, and 0.97, respectively, showing significant differences between the CR and PSNR (p=0.04), or HDR-VDP (p<0.001), and between the PSNR and HDR-VDP (p<0.001). In conclusion, the CR is less suitable than the PSNR or HDR-VDP as an objective index of image fidelity for JPEG2000 compressed body CT images. The HDR-VDP is more suitable than the PSNR.
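
    For reference, the PSNR used as one of the fidelity indices above can be computed as follows; the 12-bit peak value of 4095 is an assumption appropriate for typical CT pixel data.

    ```python
    import numpy as np

    def psnr(original, compressed, peak=4095.0):
        """Peak signal-to-noise ratio in dB; peak=4095 assumes 12-bit CT pixel data."""
        mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
        if mse == 0:
            return np.inf  # reversible (lossless) compression
        return 10.0 * np.log10(peak ** 2 / mse)
    ```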

  18. Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity

    PubMed Central

    Du, Huiqian; Han, Yu; Mei, Wenbo

    2014-01-01

    Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither estimation of the contrast changes nor multiple motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704

  19. Compressed sensing MR image reconstruction exploiting TGV and wavelet sparsity.

    PubMed

    Zhao, Di; Du, Huiqian; Han, Yu; Mei, Wenbo

    2014-01-01

    Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither estimation of the contrast changes nor multiple motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704

  20. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. This paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
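
    A minimal sketch of the measurement side of such a scheme is given below: onboard "compression" reduces to a matrix multiplication, with reconstruction deferred to the ground. The matrix, tile size, and undersampling ratio are illustrative choices, not values from the project.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 64 * 64            # number of pixels in a (flattened) image tile
    m = n // 8             # number of compressive measurements (8:1 undersampling), an illustrative choice

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix stored onboard
    x = rng.standard_normal(n)                        # stand-in for a flattened image tile

    y = Phi @ x   # onboard "compression" is just this matrix multiplication; y is downlinked
    # On the ground, x is estimated from (y, Phi) with a sparse solver (e.g. basis pursuit),
    # which is omitted here.
    ```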

  1. Best parameters selection for wavelet packet-based compression of magnetic resonance images.

    PubMed

    Abu-Rezq, A N; Tolba, A S; Khuwaja, G A; Foda, S G

    1999-10-01

    Transmission of compressed medical images is becoming a vital tool in telemedicine. Thus new methods are needed for efficient image compression. This study discovers the best design parameters for a data compression scheme applied to digital magnetic resonance (MR) images. The proposed technique aims at reducing the transmission cost while preserving the diagnostic information. By selecting the wavelet packet's filters, decomposition level, and subbands that are better adapted to the frequency characteristics of the image, one may achieve better image representation in the sense of lower entropy or minimal distortion. Experimental results show that the selection of the best parameters has a dramatic effect on the data compression rate of MR images. In all cases, decomposition at three or four levels with the Coiflet 5 wavelet (Coif 5) results in better compression performance than the other wavelets. Image resolution is found to have a remarkable effect on the compression rate. PMID:10529302
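
    The parameter search described above can be mimicked with a short script that scores a wavelet-packet decomposition by the sum of its subband entropies; the PyWavelets API is used here, and the entropy cost function is an assumption about one reasonable criterion, not the authors' exact procedure.

    ```python
    import numpy as np
    import pywt

    def subband_entropy(coeffs, eps=1e-12):
        """Shannon entropy of the normalized coefficient energies of one subband."""
        energy = coeffs.ravel() ** 2
        p = energy / (energy.sum() + eps)
        return float(-(p * np.log2(p + eps)).sum())

    def total_packet_entropy(image, wavelet="coif5", level=3):
        """Sum of subband entropies for a full 2D wavelet-packet decomposition.

        Lower values indicate a representation that is easier to compress;
        comparing wavelets and levels this way mirrors the parameter search above.
        """
        wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet, mode="symmetric", maxlevel=level)
        return sum(subband_entropy(node.data) for node in wp.get_level(level, order="natural"))
    ```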

  2. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE) but only on a mono spectral basis and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain on the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a default of registration as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility to implement a multi-bands subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed bands registration is usually performed on ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performances within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe a FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a spacequalified ASIC. Finally, we present the impact of this approach on the processing chain not only onboard but also on ground and the impacts on the design of the instrument.

  3. Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression

    NASA Astrophysics Data System (ADS)

    Horng, Ming-Huwi

    Vector quantization is a powerful technique in digital image compression. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm typically yield only a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook of vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
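
    For context, a plain LBG codebook training loop (the baseline that PSO-LBG and HBMO-LBG aim to improve) might look like the following sketch; the initialization and iteration count are arbitrary choices.

    ```python
    import numpy as np

    def lbg_codebook(vectors, codebook_size, iters=20, rng=None):
        """Plain LBG (k-means style) vector-quantization codebook training.

        `vectors` is an (N, d) array of image blocks flattened into d-dimensional
        training vectors. Starting codewords are drawn at random from the data.
        """
        rng = np.random.default_rng(rng)
        codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
        for _ in range(iters):
            # assign every training vector to its nearest codeword
            d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # update each codeword to the centroid of its assigned vectors
            for k in range(codebook_size):
                members = vectors[labels == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
        return codebook
    ```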

  4. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  5. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708
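
    The greedy recovery step underlying such matching-pursuit reconstructions can be sketched as standard orthogonal matching pursuit; the coherence-exclusion rule that distinguishes CCOOMP is not reproduced here.

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Standard orthogonal matching pursuit: greedily select dictionary atoms."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity):
            correlations = np.abs(A.T @ residual)
            correlations[support] = 0.0           # do not reselect chosen atoms
            support.append(int(np.argmax(correlations)))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x
    ```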

  6. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly, target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. In some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.

  7. High dynamic range coherent imaging using compressed sensing.

    PubMed

    He, Kuan; Sharma, Manoj Kumar; Cossairt, Oliver

    2015-11-30

    In both lensless Fourier transform holography (FTH) and coherent diffraction imaging (CDI), a beamstop is used to block strong intensities which exceed the limited dynamic range of the sensor, causing a loss in low-frequency information, making high quality reconstructions difficult or even impossible. In this paper, we show that an image can be recovered from high-frequencies alone, thereby overcoming the beamstop problem in both FTH and CDI. The only requirement is that the object is sparse in a known basis, a common property of most natural and manmade signals. The reconstruction method relies on compressed sensing (CS) techniques, which ensure signal recovery from incomplete measurements. Specifically, in FTH, we perform compressed sensing (CS) reconstruction of captured holograms and show that this method is applicable not only to standard FTH, but also multiple or extended reference FTH. For CDI, we propose a new phase retrieval procedure, which combines Fienup's hybrid input-output (HIO) method and CS. Both numerical simulations and proof-of-principle experiments are shown to demonstrate the effectiveness and robustness of the proposed CS-based reconstructions in dealing with missing data in both FTH and CDI. PMID:26698723

  8. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which highly reduces the computational complexity and achieves a performance in image compression that is comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides an efficient area and power optimization while implementing in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques with a 30% reduction in power and 12% reduction in area.

  9. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
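
    A software analogue of the Golomb-Rice coding performed by the column-level circuit is shown below; mapping signed prediction residuals to non-negative integers is assumed to happen beforehand, and the parameter k is a free choice.

    ```python
    def golomb_rice_encode(value, k):
        """Golomb-Rice code of a non-negative integer: unary quotient + k-bit remainder."""
        quotient, remainder = value >> k, value & ((1 << k) - 1)
        return "1" * quotient + "0" + format(remainder, f"0{k}b")

    # Example: encode residual 9 with k = 2  ->  '110' + '01' = '11001'
    print(golomb_rice_encode(9, 2))
    ```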

  10. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

    In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We point toward developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not describe a coder that achieves the first-order conditional entropy exactly, a conditional adaptive arithmetic coder would come arbitrarily close to this theoretical bound. All the results in this paper are based on medical image data sets of various bit depths and various modalities.

  11. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution converts the 2D problem into a 1D one via a Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while its computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
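
    The size advantage of the 2D formulation over the Kronecker (1D) formulation can be seen directly from the matrix dimensions involved, as in this sketch (all sizes are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_r, n_a = 128, 128          # range and azimuth samples of the full scene
    m_r, m_a = 64, 64            # randomly sub-sampled measurements in each dimension

    Phi_r = rng.standard_normal((m_r, n_r))   # range-dimension measurement matrix
    Phi_a = rng.standard_normal((m_a, n_a))   # azimuth-dimension measurement matrix
    X = rng.standard_normal((n_r, n_a))       # stand-in for the 2D scene

    # 2D formulation: only the small per-dimension matrices are ever formed
    Y = Phi_r @ X @ Phi_a.T

    # 1D (Kronecker) formulation operates on vec(X) with a (m_r*m_a) x (n_r*n_a) matrix
    Phi_kron = np.kron(Phi_r, Phi_a)
    print(Phi_kron.shape)        # (4096, 16384) -- far larger than Phi_r and Phi_a combined
    ```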

  12. Dynamic contrast-based quantization for lossy wavelet image compression.

    PubMed

    Chandler, Damon M; Hemami, Sheila S

    2005-04-01

    This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented. PMID:15825476

  13. Learning-based compressed sensing for infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images. The Toeplitz sensing matrix also makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches are captured by the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and these sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.

  14. Interlabial masses in little girls: review and imaging recommendations

    SciTech Connect

    Nussbaum, A.R.; Lebowitz, R.L.

    1983-07-01

    When an interlabial mass is seen on physical examination in a little girl, there is often confusion about its etiology, its implications, and what should be done next. Five common interlabial masses, which superficially are strikingly similar, include a prolapsed ectopic ureterocele, a prolapsed urethra, a paraurethral cyst, hydro(metro)colpos, and rhabdomyosarcoma of the vagina (botryoid sarcoma). A prolapsed ectopic ureterocele occurs in white girls as a smooth mass which protrudes from the urethral meatus so that urine exits circumferentially. A prolapsed urethra occurs in black girls and resembles a donut with the urethral meatus in the center. A paraurethral cyst is smaller and displaces the meatus, so that the urinary stream is eccentric. Hydro(metro)colpos from hymenal imperforation presents as a smooth mass that fills the vaginal introitus, as opposed to the introital grapelike cluster of masses of botryoid sarcoma. Recommendations for efficient imaging are presented.

  15. Vertebral Compression Fracture with Intravertebral Vacuum Cleft Sign: Pathogenesis, Image, and Surgical Intervention

    PubMed Central

    Wu, Ai-Min; Ni, Wen-Fei

    2013-01-01

    The intravertebral vacuum cleft (IVC) sign in vertebral compression fracture patients has attracted much attention. Its pathogenesis, imaging characteristics, and the efficacy of surgical intervention remain disputed. Many theories of pathogenesis have been proposed, and its imaging characteristics are distinct from those of malignancy and infection. Percutaneous vertebroplasty (PVP) or percutaneous kyphoplasty (PKP) have been the main therapeutic methods for these patients in recent years. The avascular necrosis theory is the most widely supported; PVP can relieve back pain, restore vertebral body height, and correct the kyphotic angulation (KA), and is recommended for these patients. PKP appears more effective for correcting the KA and produces less cement leakage. Kümmell's disease with the IVC sign as reported by modern authors is not entirely consistent with the syndrome originally described by Dr. Hermann Kümmell. PMID:23741556

  16. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    PubMed

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  17. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  18. High-resolution hyperspectral single-pixel imaging system based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Magalhães, Filipe; Abolbashari, Mehrdad; Araújo, Francisco M.; Correia, Miguel V.; Farahi, Faramarz

    2012-07-01

    For the first time, a high-resolution hyperspectral single-pixel imaging system based on compressive sensing is presented and demonstrated. The system integrates a digital micro-mirror device array to optically compress the image to be acquired and an optical spectrum analyzer to enable high spectral resolution. The system's ability to successfully reconstruct images with 10 pm spectral resolution is proven.

  19. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  20. Potential of compressed sensing in quantitative MR imaging of cancer

    PubMed Central

    Smith, David S.; Li, Xia; Abramson, Richard G.; Chad Quarles, C.; Yankeelov, Thomas E.

    2013-01-01

    Classic signal processing theory dictates that, in order to faithfully reconstruct a band-limited signal (e.g., an image), the sampling rate must be at least twice the maximum frequency contained within the signal, i.e., the Nyquist frequency. Recent developments in applied mathematics, however, have shown that it is often possible to reconstruct signals sampled below the Nyquist rate. This new method of compressed sensing (CS) requires that the signal have a concise and extremely sparse representation in some mathematical basis. Magnetic resonance imaging (MRI) is particularly well suited for CS approaches, owing to the flexibility of data collection in the spatial frequency (Fourier) domain available in most MRI protocols. With custom CS acquisition and reconstruction strategies, one can quickly obtain a small subset of the full data and then iteratively reconstruct images that are consistent with the acquired data and sparse by some measure. Successful use of CS results in a substantial decrease in the time required to collect an individual image. This extra time can then be harnessed to increase spatial resolution, temporal resolution, signal-to-noise ratio, or any combination of the three. In this article, we first review the salient features of CS theory and then discuss the specific barriers confronting CS before it can be readily incorporated into clinical quantitative MRI studies of cancer. We finally illustrate applications of the technique by describing examples of CS in dynamic contrast-enhanced MRI and dynamic susceptibility contrast MRI. PMID:24434808
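
    A toy example of the undersampled Fourier acquisition described above is sketched below; the random mask, 25% sampling fraction, and zero-filled baseline are illustrative, and the iterative sparsity-promoting reconstruction itself is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    image = rng.standard_normal((256, 256))          # stand-in for a fully sampled MR image
    kspace = np.fft.fft2(image)                       # spatial-frequency (Fourier) data

    mask = rng.random(kspace.shape) < 0.25            # keep ~25% of k-space samples at random
    undersampled = kspace * mask                      # the only data actually "acquired"

    # Zero-filled inverse FFT is the naive baseline; a CS reconstruction would instead
    # solve for the image that matches `undersampled` on `mask` while being sparse
    # (e.g., in a wavelet basis) -- that iterative solver is omitted here.
    zero_filled = np.fft.ifft2(undersampled).real
    ```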

  1. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

    Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually set by the detector, in the CASSI the use of a low-resolution coded aperture implemented with a digital micromirror device (DMD), which induces the grouping of detector pixels into superpixels, is decisive for the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and therefore reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique. PMID:26836551

  2. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher,2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al.,2008] and Dual Disperser (CASSI-DD) [Gehm et al.,2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  3. Spatially Regularized Compressed Sensing for High Angular Resolution Diffusion Imaging

    PubMed Central

    Rathi, Yogesh; Dolui, Sudipto

    2013-01-01

    Despite the relative recency of its inception, the theory of compressive sampling (aka compressed sensing) (CS) has already revolutionized multiple areas of applied sciences, a particularly important instance of which is medical imaging. Specifically, the theory has provided a different perspective on the important problem of optimal sampling in magnetic resonance imaging (MRI), with an ever-increasing body of works reporting stable and accurate reconstruction of MRI scans from the number of spectral measurements which would have been deemed unacceptably small as recently as five years ago. In this paper, the theory of CS is employed to palliate the problem of long acquisition times, which is known to be a major impediment to the clinical application of high angular resolution diffusion imaging (HARDI). Specifically, we demonstrate that a substantial reduction in data acquisition times is possible through minimization of the number of diffusion encoding gradients required for reliable reconstruction of HARDI scans. The success of such a minimization is primarily due to the availability of spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes an original way to combine the diffusion-and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on an efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology. PMID:21536524

  4. Measuring image quality performance on image versions saved with different file format and compression ratio

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Escofet, Jaume; Bover, Toni

    2012-06-01

    Digitization of existing documents containing images is an important body of work for many archives, ranging from individuals to institutional organizations. The methods and file formats used in this digitization are usually a trade-off between budget, file size, and image quality, though not necessarily in that order. The use of the most common and standardized file formats, JPEG and TIFF, requires the operator to choose a compression ratio that affects both the final file size and the quality of the resulting image version. The image quality achieved by a system can be evaluated by several measures and methods, the Modulation Transfer Function (MTF) being one of the most widely used. The methods employed by compression algorithms affect the two basic features of image content, edges and textures, in different ways. These features are also affected differently by the amount of noise generated at the digitization stage. Therefore, the target used in the measurement should be related to the features usually present in general imaging. This work presents a comparison of the results obtained by measuring the MTF of images taken with a professional camera system and saved in several file formats and compression ratios. To meet the needs stated above, the MTF measurement has been performed by two separate methods using the slanted-edge and dead-leaves targets, respectively. The measurement results are presented and compared in relation to the respective file sizes.

  5. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    PubMed

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul

    2015-02-01

    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution in this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and a combination of Run Length and Huffman coding, to increase the compression ratio. The experimental results achieved show that the proposed method is able to improve the compression ratio by 400% compared to traditional methods. PMID:25628161
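
    The run-length stage of such a scheme can be sketched in a few lines; the subsequent Huffman stage and the ROI extraction are omitted here.

    ```python
    def run_length_encode(pixels):
        """Run-length encode a 1D sequence of pixel values as (value, run) pairs."""
        runs = []
        prev, count = pixels[0], 1
        for p in pixels[1:]:
            if p == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = p, 1
        runs.append((prev, count))
        return runs

    # Example: a flat background row compresses to a handful of pairs
    print(run_length_encode([0, 0, 0, 7, 7, 0, 0, 0, 0]))  # [(0, 3), (7, 2), (0, 4)]
    ```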

  6. Some experiments in compressing angiocardiographic images according to the Peano-Hilbert scan path.

    PubMed

    Pinciroli, F; Combi, C; Pozzi, G; Portoni, L; Negretto, M; Invernizzi, G

    1994-06-01

    We defined and implemented three new irreversible compression techniques for digital angiocardiographic static images: brightness error limitation (BEL), pseudo-gradient adaptive brightness error limitation (PABEL), and pseudo-gradient adaptive brightness and contrast error limitation (PABCEL). To scan digital images, we implemented an algorithm based on the Peano-Hilbert plane-filling curve. We applied our compression techniques to 168 static images selected from angiocardiographic 35-mm films. We achieved the best compression results by applying the PABCEL method, obtaining a mean compression ratio of about 8:1. Consulted cardiologists did not find significant diagnostic differences between original images and reconstructed ones. PMID:7956166
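
    The Peano-Hilbert scan order itself can be generated with the standard distance-to-coordinate conversion below; this is a generic textbook routine, not the authors' implementation.

    ```python
    def hilbert_d2xy(order, d):
        """Map a distance d along a Hilbert curve to (x, y) on a 2**order square grid.

        Scanning pixels in this order keeps successive samples spatially close,
        which is what makes error-limitation coders along the scan path effective.
        """
        x = y = 0
        s, t = 1, d
        while s < (1 << order):
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # rotate the quadrant when needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    scan_order = [hilbert_d2xy(3, d) for d in range(64)]   # scan path for an 8x8 tile
    ```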

  7. Prediction of optimal operation point existence and parameters in lossy compression of noisy images

    NASA Astrophysics Data System (ADS)

    Zemliachenko, Alexander N.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2014-10-01

    This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free images may reach an optimum, i.e., the maximal noise-removal effect takes place. If an OOP exists, it is reasonable to compress an image in its neighbourhood; if not, more "careful" compression is advisable. In this paper, we demonstrate that the existence of an OOP can be predicted from a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.

  8. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  9. A novel joint data-hiding and compression scheme based on SMVQ and image inpainting.

    PubMed

    Chuan Qin; Chin-Chen Chang; Yi-Ping Chiu

    2014-03-01

    In this paper, we propose a novel joint data-hiding and compression scheme for digital images using side match vector quantization (SMVQ) and image inpainting. The two functions of data hiding and image compression can be integrated into one single module seamlessly. On the sender side, except for the blocks in the leftmost and topmost of the image, each of the other residual blocks in raster-scanning order can be embedded with secret data and compressed simultaneously by SMVQ or image inpainting adaptively according to the current embedding bit. Vector quantization is also utilized for some complex blocks to control the visual distortion and error diffusion caused by the progressive compression. After segmenting the image compressed codes into a series of sections by the indicator bits, the receiver can achieve the extraction of secret bits and image decompression successfully according to the index values in the segmented sections. Experimental results demonstrate the effectiveness of the proposed scheme. PMID:23649221

  10. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
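
    For intuition, the closed-form high-rate bit allocation that Lagrange-multiplier formulations of the first stage reduce to (for Gaussian-like subband statistics) is sketched below; it ignores the integer and non-negativity constraints that a practical scheme, like the one in the paper, must handle.

    ```python
    import numpy as np

    def allocate_bits(variances, total_bits):
        """Textbook high-rate optimal bit allocation across subbands:
        b_i = B/N + 0.5 * log2(var_i / geometric_mean(var)).

        Subbands with larger coefficient variance receive more bits; allocations
        can come out negative for very small variances, which a real codec clips.
        """
        variances = np.asarray(variances, dtype=float)
        n = len(variances)
        geo_mean = np.exp(np.mean(np.log(variances)))
        return total_bits / n + 0.5 * np.log2(variances / geo_mean)

    print(allocate_bits([10.0, 4.0, 1.0, 0.25], total_bits=16))
    ```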

  11. Lifting-based reversible color transformations for image compression

    NASA Astrophysics Data System (ADS)

    Malvar, Henrique S.; Sullivan, Gary J.; Srinivasan, Sridhar

    2008-08-01

    This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCrCb transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard.
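
    The lifting steps of the reversible YCoCg-R transform mentioned above are short enough to state directly; this follows the commonly published definition and round-trips exactly in integer arithmetic.

    ```python
    def rgb_to_ycocg_r(r, g, b):
        """Lifting-based reversible YCoCg-R forward transform (integer arithmetic)."""
        co = r - b
        t = b + (co >> 1)
        cg = g - t
        y = t + (cg >> 1)
        return y, co, cg

    def ycocg_r_to_rgb(y, co, cg):
        """Exact integer inverse of the YCoCg-R lifting steps."""
        t = y - (cg >> 1)
        g = cg + t
        b = t - (co >> 1)
        r = b + co
        return r, g, b

    assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 100, 50)) == (200, 100, 50)
    ```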

  12. Hardware Implementation of a Lossless Image Compression Algorithm Using a Field Programmable Gate Array

    NASA Astrophysics Data System (ADS)

    Klimesh, M.; Stanton, V.; Watola, D.

    2000-10-01

    We describe a hardware implementation of a state-of-the-art lossless image compression algorithm. The algorithm is based on the LOCO-I (low complexity lossless compression for images) algorithm developed by Weinberger, Seroussi, and Sapiro, with modifications to lower the implementation complexity. In this setup, the compression itself is performed entirely in hardware using a field programmable gate array and a small amount of random access memory. The compression speed achieved is 1.33 Mpixels/second. Our algorithm yields about 15 percent better compression than the Rice algorithm.
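
    For reference, the median edge detector (MED) predictor at the heart of LOCO-I/JPEG-LS is shown below; the context modelling and Golomb coding stages of the hardware pipeline are not reproduced.

    ```python
    def med_predict(a, b, c):
        """Median edge detector predictor used by LOCO-I / JPEG-LS.

        a = left neighbour, b = above neighbour, c = above-left neighbour.
        """
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c
    ```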

  13. Television image compression and small animal remote monitoring

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Jackson, Robert W.

    1990-01-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  14. Television image compression and small animal remote monitoring

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Jackson, Robert W.

    1990-04-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  15. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, ranges and domains are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is applied to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved show better performance than genetic algorithm (GA)-based and full-search algorithms in terms of reducing the number of MSE computations. The proposed algorithm reduced the number of MSE computations so that it ran 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

  16. Introduction to compressive sampling and applications in THz imaging

    NASA Astrophysics Data System (ADS)

    Coltuc, Daniela

    2015-02-01

    Compressive sensing (CS) is an emerging theory that provides an alternative to the Shannon/Nyquist Sampling Theorem. With CS, a sparse signal can be perfectly recovered from a number of measurements significantly lower than the number of periodic samples required by the Sampling Theorem. THz radiation is currently of high interest due to its capability to reveal the molecular structure of matter. In imaging applications, one of the problems is the sensing device: THz detectors are slow and bulky and cannot be integrated into large arrays like CCDs. CS can provide an efficient solution for THz imaging. This solution is the single-pixel camera with CS, a concept developed at Rice University that has materialized in several laboratory models and an IR camera released on the market in 2013. We reconsidered this concept in view of THz applications and, at present, have an experimental model of a THz camera. The paper has an extended section dedicated to CS theory and the single-pixel camera architecture. In the end, we briefly present the hardware and software solutions of our model, some of its characteristics, and a first image obtained in the visible domain.

  17. Near lossless medical image compression using JPEG-LS and cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Lin, Tsung-Ching; Chen, Chien-Wen; Chen, Shi-Huang; Truong, Trieu-Kien

    2008-08-01

    In this paper, a near lossless medical image compression scheme combining JPEG-LS with cubic spline interpolation (CSI) is presented. The CSI is developed to subsample image data with minimal distortion and to achieve image compression. It has been shown in the literature that the CSI can be combined with a transform-based image compression algorithm to develop a modified image compression codec, which obtains a higher compression ratio and a better subjective quality of the reconstructed image than the standard transform-based codecs. This paper combines the CSI with lossless JPEG-LS to form a modified JPEG-LS scheme and applies this modified codec to medical image compression. Compared with the JPEG-LS image compression standard, experimental results show that the compression ratio of the proposed scheme increases more than threefold with similar visual quality. The proposed scheme reduces the load of storing and transmitting medical images and is therefore suitable for low bit-rate telemedicine applications.

  18. Multiresolution graph Fourier transform for compression of piecewise smooth images.

    PubMed

    Hu, Wei; Cheung, Gene; Ortega, Antonio; Au, Oscar C

    2015-01-01

    Piecewise smooth (PWS) images (e.g., depth maps or animation images) contain unique signal characteristics such as sharp object boundaries and slowly varying interior surfaces. Leveraging on recent advances in graph signal processing, in this paper, we propose to compress the PWS images using suitable graph Fourier transforms (GFTs) to minimize the total signal representation cost of each pixel block, considering both the sparsity of the signal's transform coefficients and the compactness of transform description. Unlike fixed transforms, such as the discrete cosine transform, we can adapt GFT to a particular class of pixel blocks. In particular, we select one among a defined search space of GFTs to minimize total representation cost via our proposed algorithms, leveraging on graph optimization techniques, such as spectral clustering and minimum graph cuts. Furthermore, for practical implementation of GFT, we introduce two techniques to reduce computation complexity. First, at the encoder, we low-pass filter and downsample a high-resolution (HR) pixel block to obtain a low-resolution (LR) one, so that a LR-GFT can be employed. At the decoder, upsampling and interpolation are performed adaptively along HR boundaries coded using arithmetic edge coding, so that sharp object boundaries can be well preserved. Second, instead of computing GFT from a graph in real-time via eigen-decomposition, the most popular LR-GFTs are pre-computed and stored in a table for lookup during encoding and decoding. Using depth maps and computer-graphics images as examples of the PWS images, experimental results show that our proposed multiresolution-GFT scheme outperforms H.264 intra by 6.8 dB on average in peak signal-to-noise ratio at the same bit rate. PMID:25494508
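
    The basic GFT construction referred to above, an eigendecomposition of a pixel block's graph Laplacian, can be sketched as follows; the toy 4-pixel graph and its edge weights are illustrative only.

    ```python
    import numpy as np

    def graph_fourier_basis(weights):
        """Eigendecomposition of the combinatorial Laplacian L = D - W of a pixel block.

        `weights` is a symmetric adjacency matrix whose small or zero entries encode
        object boundaries inside the block; the eigenvectors (sorted by eigenvalue)
        form the GFT basis, and coefficients of a block signal x are basis.T @ x.
        """
        degrees = np.diag(weights.sum(axis=1))
        laplacian = degrees - weights
        eigvals, eigvecs = np.linalg.eigh(laplacian)
        return eigvals, eigvecs

    # Toy 4-pixel path graph with a "boundary" (weak edge) between pixels 1 and 2
    W = np.array([[0.0, 1.0, 0.0, 0.0],
                  [1.0, 0.0, 0.1, 0.0],
                  [0.0, 0.1, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 0.0]])
    _, basis = graph_fourier_basis(W)
    coeffs = basis.T @ np.array([5.0, 5.0, 20.0, 20.0])   # piecewise-constant signal is sparse here
    ```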

  19. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
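
    The quantization-with-dithering idea can be illustrated with a short sketch; the uniform dither and scale handling here are assumptions about one reasonable realization, not a description of fpack's exact implementation.

    ```python
    import numpy as np

    def quantize_with_dither(pixels, scale, rng=None):
        """Subtractive-dithered scalar quantization of floating-point pixel values.

        `scale` sets the quantization step relative to the image noise (a coarser
        scale means higher compression but more added noise); the returned integers
        would then be losslessly compressed, e.g. with a Rice coder.
        """
        rng = np.random.default_rng(rng)
        dither = rng.random(pixels.shape) - 0.5            # one dither value per pixel
        q = np.round(pixels / scale + dither).astype(np.int64)
        restored = (q - dither) * scale                     # decompression-side restoration
        return q, restored
    ```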

  20. Medical image compression based on a morphological representation of wavelet coefficients.

    PubMed

    Phelan, N C; Ennis, J T

    1999-08-01

    Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to high-quality image compression which is essential for diagnostic medical applications. A novel approach to image compression based on the wavelet decomposition has been developed which utilizes the shape or morphology of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients corresponding to image structure and features. The remaining coefficients are further compressed using a combination of run-length and Huffman coding. The technique has been implemented and applied to full 16 bit medical image data for a range of compression ratios. Objective peak signal-to-noise ratio performance of the compression technique was analyzed. Results indicate that good reconstructed image quality can be achieved at compression ratios of up to 15:1 for the image types studied. This technique represents an effective approach to the compression of diagnostic medical images and is worthy of further, more thorough, evaluation of diagnostic quality and accuracy in a clinical setting. PMID:10501061

  1. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography cannot distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image quality.

  2. Compressive Source Separation: Theory and Methods for Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Golbabaee, Mohammad; Arberet, Simon; Vandergheynst, Pierre

    2013-12-01

    With the development of numerous high-resolution data acquisition systems and the global requirement to lower energy consumption, the development of efficient sensing techniques becomes critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images with fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals like hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, that is, the assumption that the multichannel signal is composed of a linear combination of sources, each of them having its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experiments on our target application, HSI, showing that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.
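
    The following is a minimal, generic compressed-sampling sketch (random measurements of a sparse signal recovered with orthogonal matching pursuit); it illustrates the "fewer measurements than Nyquist" idea only and does not implement the paper's source-separation model or its sampling schemes. Dimensions and sparsity are illustrative.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily select k columns of A that explain y."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix, m << n
    y = A @ x_true                             # compressive measurements
    x_hat = omp(A, y, k)
    print("max recovery error:", float(np.max(np.abs(x_hat - x_true))))
    ```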

  3. Clipping service: ATR-based SAR image compression

    NASA Astrophysics Data System (ADS)

    Rodkey, David L.; Welby, Stephen P.; Hostetler, Larry D.

    1996-06-01

    Future wide area surveillance systems such as the Tier II+ and Tier III- unmanned aerial vehicles (UAVs) will be gathering vast amounts of high resolution SAR data for transmission to ground stations and subsequent analysis by image interpreters to provide critical and timely information to field commanders. This extremely high data rate presents two problems. First, the wide bandwidth data link channels which would be needed to transmit this imagery to a ground station are both expensive and difficult to obtain. Second, the volume of data which is generated by the system will quickly saturate any human-based analysis system without some degree of computer assistance. The ARPA sponsored clipping service program seeks to apply automatic target recognition (ATR) technology to perform 'intelligent' data compression on this imagery in a way which will provide a product on the ground that preserves essential information for further processing either by the military analyst or by a ground-based ATR system. An ATR system on board the UAV would examine the imagery data stream in real time, determining regions of interest. Imagery from those regions would be transmitted to the ground in a manner which preserves most or all of the information contained in the original image. The remainder of the imagery would be transmitted to the ground with lesser fidelity. This paper presents a system analysis deriving the operational requirements for the clipping service system and examines candidate architectures.

  4. Research on spatial coding compressive spectral imaging and its applicability for rural survey

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Ji, Yiqun; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    Compressive spectral imaging combines traditional spectral imaging with the newer concept of compressive sensing, and thus has advantages such as reducing the amount of acquired data, enabling snapshot imaging over a large field of view, and increasing image signal-to-noise ratio; its preliminary effectiveness has been explored in early applications such as high-speed imaging and fluorescent imaging. In this paper, the application potential of the spatial coding compressive spectral imaging technique for rural survey is investigated. A physical model for spatial coding compressive spectral imaging is built, its data flow is analyzed, and its data reconstruction problem is formulated. Existing sparse reconstruction methods are reviewed, and a dedicated module based on the two-step iterative shrinkage/thresholding algorithm is built to carry out the imaging data reconstruction. A simulated imaging experiment based on AVIRIS visible-band data of a selected rural scene is carried out. The spatial identification and spectral feature extraction capability for different ground species is evaluated by visual judgment of both single-band images and spectral curves. Data fidelity evaluation parameters (RMSE and PSNR) are used to verify the data fidelity of this compressive imaging method quantitatively. The application potential of spatial coding compressive spectral imaging for rural survey, crop monitoring, vegetation inspection, and further agricultural development needs is verified in this paper.
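
    A hedged sketch of the reconstruction idea: the abstract uses the two-step iterative shrinkage/thresholding (TwIST) algorithm, while the simpler one-step ISTA below solves the same kind of l1-regularized inverse problem for a randomly coded measurement; it is an illustration under assumed dimensions, not the authors' reconstruction module.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding, the proximal step for the l1 penalty."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam=0.05, iters=300):
        """Iterative shrinkage/thresholding for  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft(x + step * (A.T @ (y - A @ x)), step * lam)
        return x

    rng = np.random.default_rng(0)
    n, m = 128, 48
    x_true = np.zeros(n)
    x_true[rng.choice(n, 6, replace=False)] = 1.0
    A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # random binary coding matrix
    y = A @ x_true + 0.01 * rng.normal(size=m)              # noisy compressive measurements
    x_rec = ista(A, y)
    print("largest reconstructed entries at indices:", np.argsort(-np.abs(x_rec))[:6])
    ```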

  5. Recommendations for rescue of a submerged unresponsive compressed-gas diver.

    PubMed

    Mitchell, S J; Bennett, M H; Bird, N; Doolette, D J; Hobbs, G W; Kay, E; Moon, R E; Neuman, T S; Vann, R D; Walker, R; Wyatt, H A

    2012-01-01

    The Diving Committee of the Undersea and Hyperbaric Medical Society has reviewed available evidence in relation to the medical aspects of rescuing a submerged unresponsive compressed-gas diver. The rescue process has been subdivided into three phases, and relevant questions have been addressed as follows. Phase 1, preparation for ascent: If the regulator is out of the mouth, should it be replaced? If the diver is in the tonic or clonic phase of a seizure, should the ascent be delayed until the clonic phase has subsided? Are there any special considerations for rescuing rebreather divers? Phase 2, retrieval to the surface: What is a "safe" ascent rate? If the rescuer has a decompression obligation, should they take the victim to the surface? If the regulator is in the mouth and the victim is breathing, does this change the ascent procedures? If the regulator is in the mouth, the victim is breathing, and the victim has a decompression obligation, does this change the ascent procedures? Is it necessary to hold the victim's head in a particular position? Is it necessary to press on the victim's chest to ensure exhalation? Are there any special considerations for rescuing rebreather divers? Phase 3, procedure at the surface: Is it possible to make an assessment of breathing in the water? Can effective rescue breaths be delivered in the water? What is the likelihood of persistent circulation after respiratory arrest? Does the recent advocacy for "compression-only resuscitation" suggest that rescue breaths should not be administered to a non-breathing diver? What rules should guide the relative priority of in-water rescue breaths over accessing surface support where definitive CPR can be started? A "best practice" decision tree for submerged diver rescue has been proposed. PMID:23342767

  6. Venous compression syndromes: clinical features, imaging findings and management

    PubMed Central

    Liu, R; Oliveira, G R; Ganguli, S; Kalva, S

    2013-01-01

    Extrinsic venous compression is caused by compression of the veins in tight anatomic spaces by adjacent structures, and is seen in a number of locations. Venous compression syndromes, including Paget–Schroetter syndrome, Nutcracker syndrome, May–Thurner syndrome and popliteal venous compression, will be discussed. These syndromes are usually seen in young, otherwise healthy individuals, and can lead to significant overall morbidity. Aside from clinical findings and physical examination, diagnosis can be made with ultrasound, CT or MR venography, or conventional venography. Symptoms and the haemodynamic significance of the compression determine the ideal treatment method. PMID:23908347

  7. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
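
    A minimal sketch of the two ingredients named above, a discrete wavelet transform followed by uniform scalar quantization, using the PyWavelets package (assumed available) with a single fixed step for every subband; the WSQ standard's subband structure, adaptive bit allocation, and entropy coding are not reproduced.

    ```python
    import numpy as np
    import pywt   # PyWavelets

    def dwt_quantize(img, wavelet="bior4.4", level=3, step=8.0):
        """Wavelet-transform an image and apply uniform scalar quantization to every subband."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        out = [np.round(coeffs[0] / step)]                       # approximation subband
        out += [tuple(np.round(d / step) for d in detail) for detail in coeffs[1:]]
        return out

    def dequantize_idwt(quant, wavelet="bior4.4", step=8.0):
        coeffs = [quant[0] * step] + [tuple(d * step for d in detail) for detail in quant[1:]]
        return pywt.waverec2(coeffs, wavelet)

    rng = np.random.default_rng(0)
    img = rng.normal(128.0, 30.0, size=(128, 128))               # stand-in for a fingerprint tile
    rec = dequantize_idwt(dwt_quantize(img))[:128, :128]
    print("PSNR (dB):", 10 * np.log10(255.0 ** 2 / np.mean((rec - img) ** 2)))
    ```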

  8. A segmentation-based lossless image coding method for high-resolution medical image compression.

    PubMed

    Shen, L; Rangayyan, R M

    1997-06-01

    Lossless compression techniques are essential in archival and communication of medical images. In this paper, a new segmentation-based lossless image coding (SLIC) method is proposed, which is based on a simple but efficient region growing procedure. The embedded region growing procedure produces an adaptive scanning pattern for the image with the help of a discontinuity index map that requires very few bits. Along with this scanning pattern, an error image data part with a very small dynamic range is generated. Both the error image data and the discontinuity index map data parts are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. The SLIC method resulted in, on average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b, with a database of ten high-resolution digitized chest and breast images. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used. PMID:9184892

  9. Rapid MR spectroscopic imaging of lactate using compressed sensing

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.

    2015-03-01

    Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. A MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of the k-space `on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to the 1X (fully-sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in MatlabTM, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully-sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
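
    A hedged sketch of the "variable density sampling mask" idea: phase-encode lines are kept pseudo-randomly with a density that decays away from the k-space centre. The polynomial density profile and acceleration factor are illustrative assumptions, not the authors' sequence parameters.

    ```python
    import numpy as np

    def variable_density_mask(n_lines, accel, power=3.0, seed=0):
        """Keep phase-encode lines pseudo-randomly, with higher density near the k-space centre.
        `accel` is the nominal acceleration factor (e.g. 4 keeps roughly 1/4 of the lines)."""
        rng = np.random.default_rng(seed)
        k = np.abs(np.arange(n_lines) - n_lines // 2) / (n_lines / 2)   # 0 at centre, ~1 at edge
        density = (1.0 - k) ** power                                    # peaks at the centre
        density *= (n_lines / accel) / density.sum()                    # match the line budget
        return rng.random(n_lines) < np.clip(density, 0.0, 1.0)

    mask = variable_density_mask(n_lines=256, accel=4)
    print("lines kept:", int(mask.sum()), "of 256")   # roughly 64, concentrated near the centre
    ```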

  10. Design of vector quantizer for image compression using self-organizing feature map and surface fitting.

    PubMed

    Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh

    2004-10-01

    We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and the difference-coded mean values of the blocks are used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. Index terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization. PMID:15462140

  11. On encryption-compression tradeoff of pre/post-filtered images

    NASA Astrophysics Data System (ADS)

    Gurijala, Aparna; Khayam, Syed A.; Radha, Hayder; Deller, J. R., Jr.

    2005-09-01

    Advances in network communications have necessitated secure local storage and transmission of multimedia content. In particular, military networks need to securely store sensitive imagery which at a later stage may be transmitted over bandwidth-constrained wireless networks. This work investigates the compression efficiency of the JPEG and JPEG 2000 standards for encrypted images. An encryption technique proposed by Kuo et al. in [4] is employed. The technique scrambles the phase spectrum of an image by addition of the phase of an all-pass pre-filter. The post-filter inverts the encryption process, provided the correct pseudo-random filter coefficients are available at the receiver. Additional benefits of pre/post-filter encryption include the prevention of blocking effects and better robustness to channel noise [4]. Since both JPEG and JPEG 2000 exploit spatial and perceptual redundancies for compression, pre/post-filtered (encrypted) images are susceptible to compression inefficiencies. The PSNR difference between the unencrypted and pre/post-filtered images after decompression is determined for various compression rates. Compression efficiency decreases with an increase in compression rate. For JPEG and JPEG 2000 compression rates between 0.5 and 2.5 bpp, the difference in PSNR is negligible. Partial encryption is proposed wherein a subset of image phase coefficients is scrambled. Due to the phase sensitivity of images, even partial scrambling of the phase information results in unintelligible data. The effect of compression on partially encrypted images is observed for various bit-rates. When 25% of image phase coefficients are scrambled, the JPEG and JPEG 2000 compression performance of encrypted images is nearly the same as that of unencrypted images for compression rates in the 0.5 to 3.5 bpp range.
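
    A minimal sketch of phase-spectrum scrambling in the Fourier domain, keyed by a pseudo-random seed that stands in for the secret all-pass filter coefficients; it illustrates the pre/post-filter idea only and is not Kuo et al.'s filter design.

    ```python
    import numpy as np

    def phase_scramble(img, key, inverse=False):
        """Add (or subtract) a pseudo-random phase to the image spectrum.
        The random seed `key` plays the role of the secret filter coefficients."""
        phase = np.random.default_rng(key).uniform(-np.pi, np.pi, size=img.shape)
        sign = -1.0 if inverse else 1.0
        return np.fft.ifft2(np.fft.fft2(img) * np.exp(sign * 1j * phase))

    rng = np.random.default_rng(0)
    img = rng.normal(128.0, 30.0, size=(64, 64))
    encrypted = phase_scramble(img, key=1234)                       # complex, unintelligible
    decrypted = phase_scramble(encrypted, key=1234, inverse=True).real
    print("max reconstruction error:", float(np.max(np.abs(decrypted - img))))
    ```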

  12. Revisiting the Recommended Geometry for the Diametrally Compressed Ceramic C-Ring Specimen

    SciTech Connect

    Jadaan, Osama M.; Wereszczak, Andrew A

    2009-04-01

    A study conducted several years ago found that a stated allowable width/thickness (b/t) ratio in ASTM C1323 (Standard Test Method for Ultimate Strength of Advanced Ceramics with Diametrally Compressed C-Ring Specimens at Ambient Temperature) could ultimately cause the prediction of a non-conservative probability of survival when the measured C-ring strength was scaled to a different size. Because of that problem, this study sought to reevaluate the stress state and geometry of the C-ring specimen and suggest changes to ASTM C1323 that would resolve the issue. Elasticity, mechanics of materials, and finite element solutions were revisited for the C-ring geometry. To avoid the introduction of more than 2% error, it was determined that the C-ring width/thickness (b/t) ratio should range between 1 and 3 and that its inner-radius/outer-radius (ri/ro) ratio should range between 0.50 and 0.95. ASTM C1323 presently allows b/t to be as large as 4, so that ratio should be reduced to 3.

  13. Efficient compression of motion-compensated sub-images with Karhunen-Loeve transform in three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Kang, Ho-Hyun; Shin, Dong-Hak; Kim, Eun-Soo

    2010-03-01

    An approach to greatly enhance the compression efficiency of integral images by applying the Karhunen-Loeve transform (KLT) algorithm to motion-compensated sub-images is proposed. The sub-images transformed from the elemental images picked up from the three-dimensional (3D) object represent different perspectives of the object. Thus, the similarity among the sub-images is better than that among the elemental images, so an improvement in the compression efficiency of the sub-images can be obtained. However, motion vectors occurring among the sub-images might result in an additional increase of image data to be compressed. Accordingly, in this paper, motion vectors are estimated and compensated in all sub-images in advance. Then the KLT algorithm is applied to these motion-compensated sub-images for compression. Experimental results show that the compression efficiency of the proposed method is improved by up to 24.44% and 40.62% on average compared to the conventional KLT compression method and to JPEG, respectively.
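
    A hedged sketch of applying the KLT to a stack of (assumed already motion-compensated) sub-images: the transform basis is the eigenbasis of their empirical covariance, and compression comes from keeping only the leading coefficients. Array sizes and the number of retained coefficients are illustrative.

    ```python
    import numpy as np

    def klt_compress(subimages, n_keep):
        """subimages: array of shape (N, H, W).  Keep n_keep KLT coefficients per sub-image."""
        N = subimages.shape[0]
        X = subimages.reshape(N, -1).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        _, evecs = np.linalg.eigh(Xc.T @ Xc / N)      # eigenvalues ascending
        basis = evecs[:, ::-1][:, :n_keep]            # principal components first
        coeffs = Xc @ basis                           # (N, n_keep) -> data to be encoded
        return coeffs, basis, mean

    def klt_reconstruct(coeffs, basis, mean, shape):
        return (coeffs @ basis.T + mean).reshape((-1,) + shape)

    rng = np.random.default_rng(0)
    base = rng.normal(128.0, 20.0, size=(16, 16))
    # Nine highly correlated 'sub-images': the same view plus small perturbations.
    subs = np.stack([base + rng.normal(0.0, 2.0, size=(16, 16)) for _ in range(9)])
    coeffs, basis, mean = klt_compress(subs, n_keep=3)
    rec = klt_reconstruct(coeffs, basis, mean, (16, 16))
    print("mean absolute reconstruction error:", float(np.mean(np.abs(rec - subs))))
    ```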

  14. Medical image processing using novel wavelet filters based on atomic functions: optimal medical image compression.

    PubMed

    Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

    2011-01-01

    An analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. In this way we are able to determine which type of wavelet filter works better for the compression of such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic wavelet W9/7 used in the JPEG2000 standard, Daubechies8, and Symlet8, as well as for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup (2)(t), and eup(t). The comparison results show significantly better performance of the novel wavelets, which is justified by experiments and by the study of the key properties. PMID:21431590

  15. Venous thoracic outlet compression and the Paget-Schroetter syndrome: a review and recommendations for management.

    PubMed

    Thompson, J F; Winterborn, R J; Bays, S; White, H; Kinsella, D C; Watkinson, A F

    2011-10-01

    Paget Schroetter syndrome, or effort thrombosis of the axillosubclavian venous system, is distinct from other forms of upper limb deep vein thrombosis. It occurs in younger patients and often is secondary to competitive sport, music, or strenuous occupation. If untreated, there is a higher incidence of disabling venous hypertension than was previously appreciated. Anticoagulation alone or in combination with thrombolysis leads to a high rate of rethrombosis. We have established a multidisciplinary protocol over 15 years, based on careful patient selection and a combination of lysis, decompressive surgery, and postoperative percutaneous venoplasty. During the past 10 years, a total of 232 decompression procedures have been performed. This article reviews the literature and presents the Exeter Protocol along with practical recommendations for management. PMID:21448772

  16. Venous Thoracic Outlet Compression and the Paget-Schroetter Syndrome: A Review and Recommendations for Management

    SciTech Connect

    Thompson, J. F. Winterborn, R. J.; Bays, S.; White, H.; Kinsella, D. C.; Watkinson, A. F.

    2011-10-15

    Paget Schroetter syndrome, or effort thrombosis of the axillosubclavian venous system, is distinct from other forms of upper limb deep vein thrombosis. It occurs in younger patients and often is secondary to competitive sport, music, or strenuous occupation. If untreated, there is a higher incidence of disabling venous hypertension than was previously appreciated. Anticoagulation alone or in combination with thrombolysis leads to a high rate of rethrombosis. We have established a multidisciplinary protocol over 15 years, based on careful patient selection and a combination of lysis, decompressive surgery, and postoperative percutaneous venoplasty. During the past 10 years, a total of 232 decompression procedures have been performed. This article reviews the literature and presents the Exeter Protocol along with practical recommendations for management.

  17. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.

  18. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation

  19. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  20. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  1. Prediction of coefficients for lossless compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Ruedin, Ana M. C.; Acevedo, Daniel G.

    2005-08-01

    We present a lossless compressor for multispectral Landsat images that exploits interband and intraband correlations. The compressor operates on blocks of 256 x 256 pixels and performs two kinds of prediction. For bands 1, 2, 3, 4, 5, 6.2 and 7, the compressor performs an integer-to-integer wavelet transform, which is applied to each block separately. The wavelet coefficients that have not yet been encoded are predicted by means of a linear combination of already coded coefficients that belong to the same orientation and spatial location in the same band, and of coefficients at the same location in other spectral bands. A fast block classification is performed in order to use the best weights for each landscape. The prediction errors, or differences, are finally coded with an entropy-based coder. For band 6.1, we do not use wavelet transforms; instead, a median edge detector is applied to predict each pixel, using the information of the neighbouring pixels and the equalized pixel from band 6.2. This technique better exploits the great similarity between the histograms of bands 6.1 and 6.2. The prediction differences are finally coded with a context-based entropy coder. The two kinds of prediction reduce both spatial and spectral correlations, increasing the compression rates. Our compressor has been shown to be superior to the lossless compressors WinZip, LOCO-I, PNG and JPEG2000.
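
    A minimal sketch of the median edge detector (MED) predictor mentioned for band 6.1, in its standard LOCO-I form driven by the three causal neighbours; the cross-band equalization with band 6.2 described in the abstract is not included.

    ```python
    import numpy as np

    def med_predict(img):
        """LOCO-I style median edge detector prediction from the three causal neighbours:
        a = left, b = above, c = upper-left (pixels outside the image are taken as 0)."""
        pred = np.zeros(img.shape)
        for r in range(img.shape[0]):
            for col in range(img.shape[1]):
                a = img[r, col - 1] if col > 0 else 0
                b = img[r - 1, col] if r > 0 else 0
                c = img[r - 1, col - 1] if (r > 0 and col > 0) else 0
                if c >= max(a, b):
                    pred[r, col] = min(a, b)        # edge detected above or to the left
                elif c <= min(a, b):
                    pred[r, col] = max(a, b)
                else:
                    pred[r, col] = a + b - c        # smooth region: planar prediction
        return pred

    rng = np.random.default_rng(0)
    steps = rng.integers(-1, 2, size=(64, 64))
    img = 100 + np.cumsum(np.cumsum(steps, axis=0), axis=1)   # smooth synthetic surface
    residual = img - med_predict(img)
    print("pixel std:", img.std(), " prediction residual std:", residual[1:, 1:].std())
    ```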

  2. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  3. Super-resolution images fusion via compressed sensing and low-rank matrix decomposition

    NASA Astrophysics Data System (ADS)

    Ren, Kan; Xu, Fuyuan

    2015-01-01

    Most available image fusion approaches cannot achieve higher spatial resolution than the multisource images. In this paper we propose a novel simultaneous image super-resolution and fusion approach via the recently developed compressed sensing and multiscale dictionary learning technology. Under the sparse prior on image patches and the framework of compressed sensing, multisource image fusion is reduced to a task of signal recovery from compressive measurements. Then a set of multiscale dictionaries is learned from groups of example high-resolution (HR) image patches via a nonlinear optimization algorithm. Moreover, a linear weighted fusion rule is proposed to obtain the fused high-resolution image at each scale. Finally, the high-resolution image is derived by performing a low-rank decomposition on the recovered high-resolution images at multiple scales. Experiments were carried out to investigate the performance of our proposed method, and the results demonstrate its superiority over the competing approaches.

  4. Extension of wavelet compression algorithms to 3D and 4D image data: exploitation of data coherence in higher dimensions allows very high compression ratios

    NASA Astrophysics Data System (ADS)

    Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick

    2001-12-01

    High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability to compress 4D medical images. Basically, separable wavelet transforms are done in each dimension, followed by quantization and standard coding. Results were compared with a conventional 2D wavelet approach. We found that in 4D heart images, this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible and, by exploiting data coherence in higher image dimensions, allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
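
    A hedged sketch of the separable n-dimensional wavelet idea using PyWavelets' wavedecn/waverecn (assumed available), with simple hard thresholding of the smallest coefficients standing in for the quantization-and-coding stage; the synthetic 4D volume and the 5% retention fraction are illustrative.

    ```python
    import numpy as np
    import pywt   # PyWavelets

    def nd_wavelet_compress(volume, wavelet="haar", level=2, keep_fraction=0.05):
        """Separable wavelet transform over every dimension; keep only the largest coefficients."""
        coeffs = pywt.wavedecn(volume, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
        arr[np.abs(arr) < threshold] = 0.0                    # discard ~95% of the coefficients
        kept = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
        return pywt.waverecn(kept, wavelet)

    # Synthetic 4D data (3 spatial dimensions + time) that is highly coherent in all four.
    t = np.linspace(0.0, 2.0 * np.pi, 8)
    x = np.linspace(-1.0, 1.0, 32)
    xx, yy, zz, tt = np.meshgrid(x, x, x, t, indexing="ij")
    volume = np.exp(-(xx ** 2 + yy ** 2 + zz ** 2) * (2.0 + np.sin(tt)))
    rec = nd_wavelet_compress(volume)
    print("relative reconstruction error:", np.linalg.norm(rec - volume) / np.linalg.norm(volume))
    ```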

  5. MTF as a quality measure for compressed images transmitted over computer networks

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Stern, Adrian; Huber, Merav; Huber, Revital

    1999-12-01

    One result of the recent advances in the different components of imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for image compression implemented by the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted through a lossy packet network. Although we realize that the MTF is not an ideal parameter with which to measure image quality after compression and transmission, because the process is nonlinear and not shift-invariant, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can be easily combined with the overall MTF of the imaging system.

  6. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  7. Evaluation of raster image compression in the context of large-format document processing

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    We investigate the task of wide-format still image manipulation and compression within the framework of a document printing and copying data path. A typical document processing chain can benefit from the use of data compression, especially when it manages wide-format color documents. In order to develop a new approach to using data compression in wide-format printing systems, we describe in this article the benchmarking of compression applied to large documents. Standard algorithms from the imaging and document processing industry have been chosen for the compression of wide-format color raster images. A database of image files has been created and classified for this purpose. The goal is to evaluate performance in terms of data-flow reduction, along with quality loss in the case of lossy compression. For a precise evaluation of the performance of these compression algorithms, we include time measurements of the compression and decompression processes alone. A comparison of the memory footprint of each compression and decompression algorithm also helps in assessing their resource consumption.

  8. Optical image encryption via photon-counting imaging and compressive sensing based ptychography

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin; Hwang, In-Chul; Shi, Yishi; Lee, Byung-Geun

    2015-06-01

    In this study, we investigate the integration of compressive sensing (CS) and photon-counting imaging (PCI) techniques with a ptychography-based optical image encryption system. Primarily, the plaintext real-valued image is optically encrypted and recorded via a classical ptychography technique. Further, sparse-based representations of the original encrypted complex data can be produced by combining CS and PCI techniques with the primary encrypted image. Such a combination takes advantage of reduced encrypted samples (i.e., linearly projected random compressive complex samples and photon-counted complex samples) that can be exploited to realize optical decryption, which inherently serves as a secret key (i.e., independent of the encryption phase keys) and makes an intruder attack futile. In addition, recording fewer encrypted samples provides a substantial bandwidth reduction in online transmission. We demonstrate that the fewer sparse-based complex samples contain adequate information to realize decryption. To the best of our knowledge, this is the first report on integrating CS and PCI with conventional ptychography-based optical image encryption.

  9. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  10. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  11. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
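
    A hedged sketch of building a quantization matrix weighted by a contrast sensitivity function: the CSF below is a commonly cited Mannos-Sakrison-style model, the frequency mapping and clipping are simplifying assumptions, and none of it reproduces the three quantization matrices derived in the paper.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def csf(f):
        """Mannos-Sakrison-style contrast sensitivity versus spatial frequency (cycles/degree)."""
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    def csf_quant_matrix(block=8, fmax=32.0, base_step=16.0):
        """Quantization steps that grow where the eye is less sensitive (coarser at high frequency)."""
        u = np.arange(block)
        f = np.hypot(*np.meshgrid(u, u, indexing="ij")) / (block - 1) * fmax
        sens = csf(f)
        sens = np.where(f < f.flat[np.argmax(sens)], sens.max(), sens)  # keep low frequencies fine
        return np.clip(base_step * sens.max() / sens, 1.0, 255.0)

    rng = np.random.default_rng(0)
    blk = rng.normal(128.0, 25.0, size=(8, 8))
    Q = csf_quant_matrix()
    coeffs = dctn(blk - 128.0, norm="ortho")
    rec = idctn(np.round(coeffs / Q) * Q, norm="ortho") + 128.0
    print("block MSE after CSF-weighted quantization:", float(np.mean((rec - blk) ** 2)))
    ```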

  12. Compression of Medical Images Using Enhanced Vector Quantizer Designed with Self Organizing Feature Maps

    NASA Astrophysics Data System (ADS)

    Dandawate, Yogesh H.; Joshi, Madhuri A.; Umrani, Shrirang

    Nowadays all medical imaging equipment produces output as digital images, and as non-invasive techniques become cheaper, the database of images grows larger. This archive of images increases to a significant size, and in telemedicine-based applications the storage and transmission require large memory and bandwidth, respectively. There is a need for compression to save memory space and allow fast transmission over the Internet and 3G mobile networks with good-quality decompressed images, even though the compression is lossy. This paper presents a novel approach for designing an enhanced vector quantizer, which uses Kohonen's self-organizing neural network. The vector quantizer (codebook) is designed by training with a carefully designed training image and by a selective training approach. Compressing images using it gives better quality. The quality of the decompressed images is evaluated using various quality measures along with the conventionally used PSNR.

  13. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900

  14. A visual sensitivity based low-bit-rate image compression algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Qing; Li, Xiaoguang; Li, Zhuo

    2013-03-01

    In this paper, we present a visual sensitivity based low-bit-rate image compression algorithm. Using the idea that different image regions have different perceptual significance with respect to quality, the input image is divided into edges, textures and smooth regions. For the edges, the standard JPEG algorithm with an appropriate quantization step is applied so that the details can be preserved. For the textures, the JPEG algorithm is applied to a down-scaled version. For the smooth regions, a skipping scheme is employed in the compression process so as to save bits. Experimental results show the superior performance of our method in terms of both compression efficiency and visual quality.

  15. Student Images of Agriculture: Survey Highlights and Recommendations.

    ERIC Educational Resources Information Center

    Mallory, Mary E.; Sommer, Robert

    1986-01-01

    The high school students studied were unaware of the range of opportunities in agricultural careers. It was recommended that the University of California, Davis initiate a public relations campaign, with television advertising, movies, and/or public service announcements focusing on exciting, high-tech agricultural research and enterprise. (CT)

  16. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    PubMed

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626

  17. Visual sensitivity correlated tone reproduction for low dynamic range images in the compression field

    NASA Astrophysics Data System (ADS)

    Lee, Geun-Young; Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-11-01

    An image toning method for low dynamic range image compression is presented. The proposed method inserts tone mapping into the JPEG baseline instead of applying it as postprocessing. First, an image is decomposed into detail, base, and surrounding components in terms of the discrete cosine transform coefficients. Subsequently, a luminance-adaptive tone mapping based on human visual sensitivity properties is applied. In addition, compensation modules are added to enhance the visually sensitive factors, such as saturation, sharpness, and gamma. A comparative study confirms that the transmitted compressed images have good image quality.

  18. A new lossless compression algorithm for satellite earth science multi-spectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Gottipati, Srikanth; Grossberg, Michael

    2007-09-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Examples of multispectral sensors we consider include the NASA 36-band MODIS imager, the Meteosat Second Generation 12-band SEVIRI imager, the GOES-R series 16-band ABI imager, the current-generation GOES 5-band imager, and Japan's 5-band MTSAT imager. Conventional lossless compression algorithms are not able to reach satisfactory compression ratios, nor are they near the upper limits for lossless compression of imager data as estimated from the Shannon entropy. We introduce a new lossless compression algorithm developed for the NOAA-NESDIS satellite-based Earth science multispectral imagers. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. Our results are evaluated by comparison with current satellite compression algorithms such as the new CCSDS standard compression algorithm and JPEG2000. The algorithm as presented has been designed to work with NOAA's scientific data and so is purely lossless, but lossy modes can be supported. The compression algorithm also structures the data in a way that makes it easy to incorporate robust error correction using FEC coding methods such as TPC and LDPC for satellite use. This research was funded by NOAA-NESDIS for its Earth observing satellite program and NOAA goals.
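
    A minimal sketch of the spectral-prediction ingredient: each band is predicted as a least-squares linear combination of the previously coded bands and only the integer residuals (plus the few predictor coefficients) would need to be entropy coded. Band count, predictor order, and the synthetic cube are illustrative; this is not the NOAA-NESDIS algorithm.

    ```python
    import numpy as np

    def spectral_residuals(cube, order=2):
        """cube: (bands, H, W) integer imagery.  Predict band i from the previous `order`
        bands with least squares and return the integer prediction residuals."""
        bands, h, w = cube.shape
        residuals = [cube[0].astype(np.int64)]              # the first band is sent as-is
        for i in range(1, bands):
            prev = cube[max(0, i - order):i].reshape(-1, h * w).T.astype(float)
            X = np.hstack([prev, np.ones((h * w, 1))])      # previous bands plus an offset term
            coef, *_ = np.linalg.lstsq(X, cube[i].ravel().astype(float), rcond=None)
            pred = np.rint(X @ coef).astype(np.int64)       # coef would also be transmitted
            residuals.append(cube[i].astype(np.int64) - pred)
        return residuals

    rng = np.random.default_rng(0)
    scene = rng.normal(1000.0, 200.0, size=(64, 64))
    # Five synthetic, strongly correlated bands: scaled/offset copies of one scene plus noise.
    gains = [(1.0, 0.0), (0.8, 50.0), (1.2, -30.0), (0.9, 10.0), (1.1, 5.0)]
    cube = np.stack([np.rint(a * scene + b + rng.normal(0.0, 3.0, scene.shape))
                     for a, b in gains]).astype(np.int64)
    res = spectral_residuals(cube)
    print("band std:", round(float(cube[1:].std()), 1),
          " residual std:", round(float(np.mean([r.std() for r in res[1:]])), 1))
    ```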

  19. Lossy compression of hyperspectral images based on noise parameters estimation and variance stabilizing transform

    NASA Astrophysics Data System (ADS)

    Zemliachenko, Alexander N.; Kozhemiakin, Ruslan A.; Uss, Mikhail L.; Abramov, Sergey K.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Vozel, Benoît; Chehdi, Kacem

    2014-01-01

    A problem of lossy compression of hyperspectral images is considered. A specific aspect is that we assume a signal-dependent model of noise for data acquired by new-generation sensors. Moreover, the signal-dependent component of the noise is assumed dominant compared to the signal-independent noise component. Sub-band (component-wise) lossy compression is studied first, and it is demonstrated that an optimal operation point (OOP) can exist. For such an OOP, the mean square error between compressed and noise-free images attains a global or, at least, a local minimum, i.e., a good effect of noise removal (filtering) is reached. In practice, we show how compression in the neighborhood of the OOP can be carried out when a noise-free image is not available. Two approaches for reaching this goal are studied. First, lossy compression directly applied to the original data is considered. In the second approach, lossy compression is applied to images after a direct variance stabilizing transform (VST) with properly adjusted parameters. The inverse VST has to be performed only after data decompression. It is shown that the second approach has certain advantages. One of them is that the quantization step for a coder can be set the same for all sub-band images. This offers favorable prerequisites for applying three-dimensional (3-D) methods of lossy compression to sub-band images combined into groups after VST. Two approaches to 3-D compression, based on the discrete cosine transform, are proposed and studied. The first obtains a reference and "difference" images for each group. The second performs compression directly on the sub-images in a group. We show that it is a good choice to have 16 sub-images in each group. The abovementioned approaches are tested on Hyperion hyperspectral data. It is demonstrated that a compression ratio of about 15-20 can be provided for hyperspectral image compression in the neighborhood of the OOP for 3-D coders, which is sufficiently larger than
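
    A hedged sketch of what a variance stabilizing transform does, using the classical Anscombe transform for Poisson-like, signal-dependent noise; the paper's VST has "properly adjusted parameters" for a mixed signal-dependent/signal-independent noise model, which this simple form does not capture.

    ```python
    import numpy as np

    def anscombe(x):
        """Classical Anscombe VST: Poisson-distributed data -> approximately unit variance."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        """Simple algebraic inverse (an unbiased inverse would add small correction terms)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0

    rng = np.random.default_rng(0)
    means = np.array([10.0, 100.0, 1000.0])
    samples = rng.poisson(means, size=(200_000, 3))             # signal-dependent noise
    print("raw variances:       ", samples.var(axis=0))         # grows with the signal level
    print("stabilized variances:", anscombe(samples).var(axis=0))  # all close to 1
    ```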

  20. Introduction of heat map to fidelity assessment of compressed CT images

    SciTech Connect

    Lee, Hyunna; Kim, Bohyoung; Seo, Jinwook; Park, Seongjin; Shin, Yeong-Gil; Kim, Kil Joong; Lee, Kyoung Ho

    2011-08-15

    Purpose: This study aimed to introduce heat map, a graphical data presentation method widely used in gene expression experiments, to the presentation and interpretation of image fidelity assessment data of compressed computed tomography (CT) images. Methods: The authors used actual assessment data that consisted of five radiologists' responses to 720 computed tomography images compressed using both Joint Photographic Experts Group 2000 (JPEG2000) 2D and JPEG2000 3D compressions. They additionally created data of two artificial radiologists, which were generated by partly modifying the data from two human radiologists. Results: For each compression, the entire data set, including the variations among radiologists and among images, could be compacted into a small color-coded grid matrix of the heat map. A difference heat map depicted the advantage of 3D compression over 2D compression. Dendrograms showing hierarchical agglomerative clustering results were added to the heat maps to illustrate the similarities in the data patterns among radiologists and among images. The dendrograms were used to identify two artificial radiologists as outliers, whose data were created by partly modifying the responses of two human radiologists. Conclusions: The heat map can illustrate a quick visual extract of the overall data as well as the entirety of large complex data in a compact space while visualizing the variations among observers and among images. The heat map with the dendrograms can be used to identify outliers or to classify observers and images based on the degree of similarity in the response patterns.

  1. Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sánchez, Sergio; Plaza, Antonio

    2012-06-01

    Hyperspectral image compression is an important task in remotely sensed Earth observation, as the dimensionality of this kind of image data is ever increasing. This requires on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative process that allows the amount of information loss and the compression ratio to be controlled through the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance in the analysis of Airborne Visible/Infra-Red Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data, where the quality of the compression can also be adjusted in real time.

  2. Lossless data compression of grid-based digital elevation models: A png image format evaluation

    NASA Astrophysics Data System (ADS)

    Scarmana, G.

    2014-05-01

    At present, computers, lasers, radars, planes and satellite technologies make possible very fast and accurate topographic data acquisition for the production of maps. However, the problem of managing and manipulating this data efficiently remains. One particular type of map is the elevation map. When stored on a computer, it is often referred to as a Digital Elevation Model (DEM). A DEM is usually a square matrix of elevations. It is like an image, except that it contains a single channel of information (that is, elevation) and can be compressed in a lossy or lossless manner by way of existing image compression protocols. Compression has the effect of reducing memory requirements and transmission time over digital links, while maintaining the integrity of data as required. In this context, this paper investigates the effects of the PNG (Portable Network Graphics) lossless image compression protocol on floating-point elevation values for 16-bit DEMs of dissimilar terrain characteristics. PNG is a robust, universally supported, extensible, lossless, general-purpose and patent-free image format. Tests demonstrate that the compression ratios and decompression run times achieved with the PNG lossless compression protocol can be comparable to, or better than, proprietary lossless JPEG variants, other image formats and available lossless compression algorithms.
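
    PNG obtains lossless compression by running a per-row prediction filter before DEFLATE. The sketch below imitates that idea on a synthetic 16-bit DEM with a simple horizontal-difference filter followed by zlib; it is not the PNG implementation itself, and the terrain model, filter choice, and zlib settings are assumptions chosen only to show why prediction residuals compress better than raw elevations.

        import numpy as np
        import zlib

        def compress_dem(dem_uint16):
            # Rough analogue of PNG's approach: a per-row "Sub"-like prediction filter
            # followed by DEFLATE. Residuals cluster near zero and compress far better
            # than the raw elevations.
            diff = dem_uint16.astype(np.int32)
            diff[:, 1:] -= dem_uint16[:, :-1].astype(np.int32)
            residuals = (diff & 0xFFFF).astype(np.uint16)        # wrap to 16 bits, as PNG filters do
            return zlib.compress(residuals.tobytes(), level=9)

        def decompress_dem(blob, shape):
            residuals = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape).astype(np.int32)
            dem = residuals.copy()
            for col in range(1, shape[1]):                       # undo the prediction column by column
                dem[:, col] = (dem[:, col] + dem[:, col - 1]) & 0xFFFF
            return dem.astype(np.uint16)

        rng = np.random.default_rng(2)
        terrain = np.cumsum(rng.integers(-3, 4, size=(512, 512)), axis=1) + 2000
        dem = np.clip(terrain, 0, 65535).astype(np.uint16)       # synthetic 16-bit elevation grid
        blob = compress_dem(dem)
        assert np.array_equal(decompress_dem(blob, dem.shape), dem)   # lossless round trip
        print("compression ratio:", round(dem.nbytes / len(blob), 2))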

  3. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  4. Improving signal-to-noise ratio performance of compressive imaging based on spatial correlation

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Zou, Yunhao; Dai, Huidong; Gu, Guohua

    2016-07-01

    In this paper, compressive imaging based on spatial correlation (CISC), which uses second-order correlation with the measurement matrix, is introduced to improve the signal-to-noise ratio performance of compressive imaging (CI). Numerical simulations and experiments are performed as well. The results show that CISC performs much better than CI in three common noise environments, which paves the way for real applications.

  5. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  6. Improving signal-to-noise ratio performance of compressive imaging based on spatial correlation

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Zou, Yunhao; Dai, Huidong; Gu, Guohua

    2016-08-01

    In this paper, compressive imaging based on spatial correlation (CISC), which uses second-order correlation with the measurement matrix, is introduced to improve the signal-to-noise ratio performance of compressive imaging (CI). Numerical simulations and experiments are performed as well. The results show that CISC performs much better than CI in three common noise environments, which paves the way for real applications.

  7. Compressed Sensing for Millimeter-wave Ground Based SAR/ISAR Imaging

    NASA Astrophysics Data System (ADS)

    Yiğit, Enes

    2014-11-01

    Millimeter-wave (MMW) ground based (GB) synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging are powerful tools for the detection of foreign object debris (FOD) and concealed objects, but they require wide bandwidths and dense sampling in both the slow-time and fast-time domains according to the Shannon/Nyquist sampling theorem. However, thanks to compressive sensing (CS) theory, GB-SAR/ISAR data can be reconstructed from far fewer random samples than the Nyquist rate requires. In this paper, the impact of both random frequency sampling and random spatial-domain data collection of a SAR/ISAR sensor on the reconstruction quality of a scene of interest was studied. To investigate the feasibility of the proposed CS framework, different experiments for various FOD-like and concealed-object-like targets were carried out at the Ka and W bands of the MMW range. The robustness and effectiveness of the recommended CS-based reconstruction configurations were verified through a comparison among each other using the integrated side lobe ratios (ISLR) of the images.

  8. Comparison of modified burrows wheeler transform with JPEG and JPEG2000 for image compression

    NASA Astrophysics Data System (ADS)

    Shabani, Sonia; Zolghadrasli, Alireza

    2012-01-01

    In this research we present a new method for image compression that combines a lossy and a lossless image coder. In the lossy part we use a 3-level discrete wavelet transform (DWT) decomposition with a hard thresholding technique. The lossless part is the Burrows-Wheeler Transform (BWT), which has received considerable attention in recent years because of its simplicity and effectiveness. The BWT is used together with additional image compression methods such as move-to-front (MTF) coding, run-length encoding (RLE) and entropy coding algorithms. The effectiveness of our method is presented and compared with JPEG and JPEG2000 for different images, considering criteria such as PSNR, compression ratio (CR) and visual image quality (HVS). In future work we will use the BWT with other source coding algorithms such as Lempel-Ziv and with different mother wavelets for the DWT.
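
    The BWT-MTF-RLE chain mentioned above can be sketched in a few lines. The following Python toy uses a naive rotation-sort BWT and operates on a short byte string standing in for quantized wavelet coefficients; it is a minimal sketch of the generic pipeline, not the coder evaluated in the paper, and an entropy coder would still be required afterwards.

        def bwt(data: bytes):
            # Naive Burrows-Wheeler transform: sort all rotations and keep the last column
            # (fine for a demonstration; real coders use suffix-array constructions).
            n = len(data)
            order = sorted(range(n), key=lambda i: data[i:] + data[:i])
            last_col = bytes(data[(i - 1) % n] for i in order)
            return last_col, order.index(0)

        def mtf(data: bytes):
            # Move-to-front: recently seen symbols map to small indices.
            alphabet = list(range(256))
            out = []
            for b in data:
                idx = alphabet.index(b)
                out.append(idx)
                alphabet.insert(0, alphabet.pop(idx))
            return out

        def rle(indices):
            # Run-length encode the zero runs that MTF produces on BWT output.
            out, run = [], 0
            for v in indices:
                if v == 0:
                    run += 1
                else:
                    if run:
                        out.append(("ZEROS", run))
                        run = 0
                    out.append(("LITERAL", v))
            if run:
                out.append(("ZEROS", run))
            return out

        row = bytes([120, 120, 120, 118, 118, 121, 121, 121, 121, 119])   # e.g. quantized coefficients
        last_col, key = bwt(row)
        print(rle(mtf(last_col)))        # an entropy coder would follow in the full pipeline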

  9. Block based image compression technique using rank reduction and wavelet difference reduction

    NASA Astrophysics Data System (ADS)

    Bolotnikova, Anastasia; Rasti, Pejman; Traumann, Andres; Lusi, Iiris; Daneshmand, Morteza; Noroozi, Fatemeh; Samuel, Kadri; Sarkar, Suman; Anbarjafari, Gholamreza

    2015-12-01

    In this paper a new block-based lossy image compression technique is proposed that uses rank reduction of the image together with the wavelet difference reduction (WDR) technique. Rank reduction is obtained by applying singular value decomposition (SVD). The input image is divided into blocks of equal size, after which quantization by SVD is carried out on each block, followed by the WDR technique. Reconstruction is carried out by decompressing each block's bit stream and then merging all of them to obtain the decompressed image. The visual and quantitative experimental results of the proposed image compression technique are shown and also compared with those of the WDR technique and JPEG2000. The comparison shows that the proposed image compression technique outperforms the WDR and JPEG2000 techniques.
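
    A minimal sketch of the rank-reduction step is given below: each block is approximated by a truncated SVD, which is the quantization-by-SVD stage described above. The WDR coding of the retained values is not reproduced; the block size, rank, and synthetic test image are assumptions for illustration.

        import numpy as np

        def block_svd_reduce(img, block=32, rank=4):
            # Rank-reduce each block with a truncated SVD; keeping only `rank` singular
            # triplets per block is the quantization-by-SVD step, and a WDR-style coder
            # (not shown) would then encode the retained values.
            h, w = img.shape
            out = np.zeros((h, w))
            for r in range(0, h, block):
                for c in range(0, w, block):
                    tile = img[r:r + block, c:c + block].astype(float)
                    u, s, vt = np.linalg.svd(tile, full_matrices=False)
                    out[r:r + block, c:c + block] = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
            return out

        # Smooth synthetic test image so that the low-rank approximation is meaningful.
        yy, xx = np.mgrid[0:256, 0:256]
        image = 128 + 100 * np.sin(xx / 40.0) * np.cos(yy / 25.0)
        approx = block_svd_reduce(image, block=32, rank=4)
        mse = np.mean((image - approx) ** 2)
        print("PSNR of rank-4 block approximation:",
              round(10 * np.log10(255.0 ** 2 / max(mse, 1e-12)), 2), "dB")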

  10. Karhunen-Loève transform for compressive sampling hyperspectral images

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Yan, Jingwen; Zheng, Xianwei; Peng, Hong; Guo, Di; Qu, Xiaobo

    2015-01-01

    Compressed sensing (CS) is a new joint sampling and compression technology for remote sensing. In hyperspectral imaging, a typical CS method encodes the two-dimensional (2-D) spatial information of each spectral band, or additionally encodes the spectral dimension at the same time. However, encoding the spatial information is much easier than encoding the spectral information. Therefore, it is crucial to make use of the spectral information to improve the compression rate of 2-D CS data. We propose to encode the spectral dimension with an adaptive Karhunen-Loève transform. With a mathematical proof, we show that interspectral correlations are preserved among the 2-D randomly encoded spatial information. This property means that one can compress 2-D CS data effectively with a Karhunen-Loève transform. Experiments demonstrate that the proposed method reconstructs both spectral curves and spatial images better than traditional compression methods at bit rates from 0 to 1.
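
    The idea of decorrelating the spectral dimension of 2-D CS measurements with a data-derived Karhunen-Loève transform can be sketched as follows. The measurement matrix, band count, and the choice to keep four components are assumptions for illustration; the paper's adaptive transform and CS reconstruction are not reproduced.

        import numpy as np

        # Hypothetical 2-D CS measurements: one vector of m random projections per spectral band.
        rng = np.random.default_rng(4)
        bands, m = 64, 2048
        spectra = np.linspace(0.0, 1.0, bands)[:, None] ** np.arange(1, 4)   # smooth spectral signatures
        measurements = spectra @ rng.normal(size=(3, m)) + 0.01 * rng.normal(size=(bands, m))

        # Karhunen-Loeve transform estimated from the data: eigenvectors of the band covariance.
        centered = measurements - measurements.mean(axis=1, keepdims=True)
        cov = centered @ centered.T / m
        eigvals, eigvecs = np.linalg.eigh(cov)
        klt = eigvecs[:, np.argsort(eigvals)[::-1]].T

        coeffs = klt @ centered
        coeffs[4:, :] = 0.0                 # keep only the top components of the spectral dimension
        reconstructed = klt.T @ coeffs + measurements.mean(axis=1, keepdims=True)
        err = np.linalg.norm(reconstructed - measurements) / np.linalg.norm(measurements)
        print("relative reconstruction error with 4 spectral components:", round(float(err), 4))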

  11. Estimate of DTM Degradation due to Image Compression for the Stereo Camera of the Bepicolombo Mission

    NASA Astrophysics Data System (ADS)

    Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G.

    2016-06-01

    The great amount of data that will be produced during the imaging of Mercury by the stereo camera (STC) of the BepiColombo mission needs a compromise with the restrictions imposed by the band downlink that could drastically reduce the duration and frequency of the observations. The implementation of an on-board real time data compression strategy preserving as much information as possible is therefore mandatory. The degradation that image compression might cause to the DTM accuracy is worth to be investigated. During the stereo-validation procedure of the innovative STC imaging system, several image pairs of an anorthosite sample and a modelled piece of concrete have been acquired under different illumination angles. This set of images has been used to test the effects of the compression algorithm (Langevin and Forni, 2000) on the accuracy of the DTM produced by dense image matching. Different configurations taking in account at the same time both the illumination of the surface and the compression ratio, have been considered. The accuracy of the DTMs is evaluated by comparison with a high resolution laser-scan acquisition of the same targets. The error assessment included also an analysis on the image plane indicating the influence of the compression procedure on the image measurements.

  12. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    NASA Astrophysics Data System (ADS)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    The high resolution of modern satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, post-processing and image enhancement steps applied after acquisition increase the file sizes even further, making the data harder to store and more time-consuming to transmit from one place to another; hence, compressing the raw data and the various levels of processed data is a necessity for archiving stations that need to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open source programs supporting the related compression algorithms have been applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites, with 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested are Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2) and the Burrows-Wheeler Transform (BWT), and their compression performance is observed over sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.

  13. High capacity image steganography method based on framelet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Moyan; He, Zhibiao

    2015-12-01

    To improve the capacity and imperceptibility of image steganography, a novel high-capacity, high-imperceptibility image steganography method based on a combination of framelets and compressive sensing (CS) is put forward. First, a singular value decomposition (SVD) is applied to the measurement values obtained by applying the compressive sensing technique to the secret data. The singular values are then embedded, in turn, into the low-frequency coarse subbands of the framelet transform of the cover image, which is divided into non-overlapping blocks. Finally, inverse framelet transforms are applied and the blocks are combined to obtain the stego image. The experimental results show that the proposed steganography method has good performance in hiding capacity, security and imperceptibility.

  14. The effects of image compression on quantitative measurements of digital panoramic radiographs

    PubMed Central

    Apaydın, Burcu; Yılmaz, Hasan-Hüseyin

    2012-01-01

    Objectives: The aims of this study were to explore how image compression affects density, fractal dimension, and linear and angular measurements on digital panoramic images, and to assess the inter- and intra-observer repeatability of these measurements. Study Design: Sixty-one digital panoramic images in TIFF format (Tagged Image File Format) were compressed to JPEG (Joint Photographic Experts Group) images. Two observers measured the gonial angle, antegonial angle, mandibular cortical width, coronal pulp width of the maxillary and mandibular first molars, and tooth length of the maxillary and mandibular first molars on the left side of these images, twice. Fractal dimensions of the selected regions of interest were calculated, and the density of each panoramic radiograph as a whole was also measured, on the TIFF and JPEG compressed images. Intra-observer and inter-observer consistency was evaluated with Cronbach’s alpha. The paired samples t-test and Kolmogorov-Smirnov test were used to evaluate the differences between the measurements on TIFF and JPEG compressed images. Results: The repeatability of angular measurements had the highest Cronbach’s alpha value (0.997). There was a statistically significant difference for both observers in mandibular cortical width (MCW) measurements (1st ob. p: 0.002; 2nd ob. p: 0.003), density (p<0.001) and fractal dimension (p<0.001) between TIFF and JPEG images. There was a statistically significant difference for the first observer in antegonial angle (1st ob. p<0.001) and maxillary molar coronal pulp width (1st ob. p<0.001) between JPEG and TIFF files. Conclusions: The repeatability of angular measurements is better than that of linear measurements. Mandibular cortical width, fractal dimension and density are affected by compression. Observer-dependent factors might also cause statistically significant differences between the measurements on TIFF and JPEG images. Key words:Digital panoramic radiography, image compression, linear measurements, angular measurements

  15. Characterization of Diesel and Gasoline Compression Ignition Combustion in a Rapid Compression-Expansion Machine using OH* Chemiluminescence Imaging

    NASA Astrophysics Data System (ADS)

    Krishnan, Sundar Rajan; Srinivasan, Kalyan Kumar; Stegmeir, Matthew

    2015-11-01

    Direct-injection compression ignition combustion of diesel and gasoline were studied in a rapid compression-expansion machine (RCEM) using high-speed OH* chemiluminescence imaging. The RCEM (bore = 84 mm, stroke = 110-250 mm) was used to simulate engine-like operating conditions at the start of fuel injection. The fuels were supplied by a high-pressure fuel cart with an air-over-fuel pressure amplification system capable of providing fuel injection pressures up to 2000 bar. A production diesel fuel injector was modified to provide a single fuel spray for both diesel and gasoline operation. Time-resolved combustion pressure in the RCEM was measured using a Kistler piezoelectric pressure transducer mounted on the cylinder head and the instantaneous piston displacement was measured using an inductive linear displacement sensor (0.05 mm resolution). Time-resolved, line-of-sight OH* chemiluminescence images were obtained using a Phantom V611 CMOS camera (20.9 kHz @ 512 x 512 pixel resolution, ~ 48 μs time resolution) coupled with a short wave pass filter (cut-off ~ 348 nm). The instantaneous OH* distributions, which indicate high temperature flame regions within the combustion chamber, were used to discern the characteristic differences between diesel and gasoline compression ignition combustion. The authors gratefully acknowledge facilities support for the present work from the Energy Institute at Mississippi State University.

  16. Along-track scanning using a liquid crystal compressive hyperspectral imager.

    PubMed

    Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2016-04-18

    In various applications, such as remote sensing and quality inspection, hyperspectral (HS) imaging is performed by spatially scanning an object. In this work, we present a new compressive hyperspectral imaging method that performs along-track scanning. The method relies on the compressive sensing miniature ultra-spectral imaging (CS-MUSI) system, which uses a single liquid crystal (LC) cell for spectral encoding and provides a more efficient way of HS data acquisition compared to classical spatial-scanning-based systems. The experimental results show that a compression ratio of about 1:10 can be reached. Owing to the inherent compression, the captured data are already prepared for efficient storage and transmission. PMID:27137283

  17. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on the interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, this work has been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in the high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands

  18. Hyperspectral images lossless compression using the 3D binary EZW algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-02-01

    This paper presents a transform-based lossless compression method for hyperspectral images which is inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform which includes an integer Karhunen-Loève transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate correlations among the bands of the hyperspectral image. The integer 2D discrete wavelet transform (DWT) is applied to eliminate the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by the proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of the conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also achieves a higher compression ratio than the other algorithms.

  19. An introduction to video image compression and authentication technology for safeguards applications

    SciTech Connect

    Johnson, C.S.

    1995-07-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.

  20. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-01-01

    Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, operates with optimization-based algorithms for signal reconstruction and is thus able to complete data compression, while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition, while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic orientated CS-radar system analysis and performance evaluation. PMID:25811226

  1. Informational Analysis for Compressive Sampling in Radar Imaging

    PubMed Central

    Zhang, Jingxiong; Yang, Ke

    2015-01-01

    Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, operates with optimization-based algorithms for signal reconstruction and is thus able to complete data compression, while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition, while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic orientated CS-radar system analysis and performance evaluation. PMID:25811226

  2. Multiframe adaptive Wiener filter super-resolution with JPEG2000-compressed images

    NASA Astrophysics Data System (ADS)

    Narayanan, Barath Narayanan; Hardie, Russell C.; Balster, Eric J.

    2014-12-01

    Historically, Joint Photographic Experts Group 2000 (JPEG2000) image compression and multiframe super-resolution (SR) image processing techniques have evolved separately. In this paper, we propose and compare novel processing architectures for applying multiframe SR with JPEG2000 compression. We propose a modified adaptive Wiener filter (AWF) SR method and study its performance as JPEG2000 is incorporated in different ways. In particular, we perform compression prior to SR and compare this to compression after SR. We also compare both independent-frame compression and difference-frame compression approaches. We find that some of the SR artifacts that result from compression can be reduced by decreasing the assumed global signal-to-noise ratio (SNR) for the AWF SR method. We also propose a novel spatially adaptive SNR estimate for the AWF designed to compensate for the spatially varying compression artifacts in the input frames. The experimental results include the use of simulated imagery for quantitative analysis. We also include real-video results for subjective analysis.

  3. [Recommendations of the ESC guidelines regarding cardiovascular imaging].

    PubMed

    Sechtem, U; Greulich, S; Ong, P

    2016-08-01

    Cardiac imaging plays a key role in the diagnosis and risk stratification in the ESC guidelines for the management of patients with stable coronary artery disease. Demonstration of myocardial ischaemia guides the decision which further diagnostic and therapeutic strategy should be followed in these patients. One should, however, not forget that there are no randomised studies supporting this type of management. In patients with a low pretest probability coronary CT angiography is the optimal tool to exclude coronary artery stenoses rapidly and effectively. In the near future, however, better data is needed showing how much cardiac imaging is really necessary and how cost-effective it is in patients with stable coronary artery disease. PMID:27388914

  4. A new approach to compressive strength assessment of concrete: Image processing technique

    NASA Astrophysics Data System (ADS)

    Başyiğit, Celalettin; Çomak, Bekir; Kilinçarslan, Şemsettin

    2012-09-01

    In this study, the compressive strength levels of different concrete classes were estimated using an image processing technique. A series of different concretes were prepared by applying different water/cement ratios. The percentages of cement matrix, aggregate, and air void were calculated by processing the images obtained from the surfaces of hardened concretes. The relation between the parameters that were calculated via image processing and the compressive strengths of the concretes produced were examined. By this means, the compressive strength levels of concretes were estimated one by one via the developed image processing software and ImageJ. It was found that the compressive strength levels of concretes can be estimated with a high level of correlation by using the values obtained via the image processing technique. The developed software can be used to estimate the compressive strength levels of concretes. In addition, in considering concrete age, cure conditions, and relative humidity, the method used in this study can be used together with destructive and non-destructive test methods.
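
    A minimal sketch of the kind of measurement described above is given below: an 8-bit surface image is segmented into air void, cement matrix, and aggregate phases by simple intensity thresholds, and the area percentage of each phase is reported. The threshold values and the random stand-in image are assumptions; the study's own software and ImageJ workflow are not reproduced.

        import numpy as np

        def phase_fractions(gray, void_max=40, matrix_max=150):
            # Classify each pixel of an 8-bit surface image as air void, cement matrix,
            # or aggregate by simple intensity thresholds (hypothetical values), and
            # return the area percentage of each phase.
            total = gray.size
            void = np.count_nonzero(gray <= void_max)
            matrix = np.count_nonzero((gray > void_max) & (gray <= matrix_max))
            aggregate = total - void - matrix
            return {"void_%": 100.0 * void / total,
                    "matrix_%": 100.0 * matrix / total,
                    "aggregate_%": 100.0 * aggregate / total}

        rng = np.random.default_rng(5)
        surface = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)   # stand-in for a photograph
        print(phase_fractions(surface))
        # Fractions like these would then be regressed against measured compressive
        # strengths over the prepared concrete series to obtain the estimation relation.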

  5. A novel compression algorithm for infrared thermal image sequence based on K-means method

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Yu; Xu, Wei; Zhang, Wei; Meng, Xiangbin; Zhang, Yong

    2014-05-01

    High resolution in space and time is becoming the new trend in thermographic inspection of equipment; therefore, fast and precise processing and data storage techniques for high-resolution thermal images need to be studied. This article proposes a novel global compression algorithm, which provides an effective way to improve the precision and processing speed of thermal image data. The new algorithm is based on the temperature decay of the thermograph and the morphology of the thermal image. First, it sorts the data in space using the K-means method. Then it employs classic fitting to fit the typical temperature decay curves. Finally, it uses the fitting parameters of the curves as the parameters for compression and reconstruction of the thermal image sequence, so that the sequence can be compressed in space and time simultaneously. To validate the proposed algorithm, the authors used two embedded defective specimens made of different materials in the experiment. The results show that the proposed infrared thermal image sequence compression algorithm is an effective solution with high speed and high precision. Compared to the conventional method, the global compression algorithm is not only noise resistant but also improves computing speed by a factor of hundreds.
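
    The compression idea described above (group pixels by K-means on their cooling curves, then store only fitted decay parameters per group plus a label map) can be sketched as follows. The synthetic sequence, the assumed known ambient temperature, the NumPy-only K-means, and the log-linear exponential fit are all illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(6)
        n_pixels, n_frames = 5000, 60
        t = np.linspace(0.0, 5.0, n_frames)

        # Synthetic thermographic sequence: each pixel cools as T0 * exp(-t / tau) + ambient.
        tau = rng.choice([0.8, 1.5, 3.0], size=n_pixels)
        curves = 40.0 * np.exp(-t[None, :] / tau[:, None]) + 20.0 + 0.2 * rng.normal(size=(n_pixels, n_frames))

        # Tiny K-means on the per-pixel decay curves (NumPy only).
        k = 3
        centers = curves[rng.choice(n_pixels, k, replace=False)]
        for _ in range(20):
            dists = ((curves[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            centers = np.stack([curves[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])

        # Fit one exponential decay per cluster; the fit parameters plus the label map
        # are the compressed representation of the whole sequence.
        params = []
        for j in range(k):
            y = np.log(np.maximum(centers[j] - 20.0, 1e-6))     # assumes a known 20 degC ambient
            slope, intercept = np.polyfit(t, y, 1)
            params.append((round(float(np.exp(intercept)), 2), round(float(-1.0 / slope), 2)))

        ratio = curves.nbytes / (labels.nbytes + k * 2 * 8)
        print("(amplitude, tau) per cluster:", params, "| approx compression ratio:", round(ratio, 1))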

  6. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) designs and can be integrated as an intellectual property (IP) core for part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
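
    The essence of the FL approach as described here, adaptive linear prediction with sign-algorithm weight updates followed by entropy coding of the residuals, can be illustrated in one dimension. The sketch below is not the flight algorithm or its FPGA implementation; the predictor order, step size, initial weights, and synthetic signal are assumptions.

        import numpy as np

        def sign_lms_residuals(samples, order=3, mu=1e-6):
            # Adaptive linear prediction with the sign algorithm: after each sample the
            # weights are nudged by mu * sign(error) * input, and the (small) prediction
            # residuals are what an entropy coder would actually encode.
            w = np.zeros(order)
            w[0] = 1.0                                  # start from a previous-sample predictor
            residuals = np.empty(len(samples))
            for n in range(len(samples)):
                window = samples[max(0, n - order):n][::-1]
                x = np.zeros(order)
                x[:len(window)] = window
                error = samples[n] - w @ x
                residuals[n] = error
                w += mu * np.sign(error) * x            # sign-algorithm adaptation step
            return residuals

        rng = np.random.default_rng(7)
        signal = 100.0 + np.cumsum(rng.normal(size=4000))   # smooth, highly correlated samples
        res = sign_lms_residuals(signal)
        print("signal variance:", round(float(signal.var()), 1),
              "| residual variance:", round(float(res.var()), 1))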

  7. Medical image compression using cubic spline interpolation with bit-plane compensation

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chen, Shi-Huang; Lin, Tsung-Ching

    2007-03-01

    In this paper, a modified medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and to achieve compression. It has been shown in the literature that the CSI can be combined with the JPEG algorithms to develop a modified JPEG codec, which obtains a higher compression ratio and better reconstructed image quality than standard JPEG. However, this modified JPEG codec loses some high-frequency components of medical images during the compression process. To minimize the drawback arising from the loss of these high-frequency components, this paper further applies bit-plane compensation to the modified JPEG codec. The bit-plane compensation algorithm used in this paper is modified from the JBIG2 standard. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 20-30% with similar visual quality. This system can reduce the load on telecommunication networks and is quite suitable for low bit-rate telemedicine applications.
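
    The CSI step can be illustrated as subsampling followed by bicubic spline reconstruction, with the residual left for bit-plane compensation. The sketch below uses SciPy's RectBivariateSpline on a synthetic smooth image; the subsampling factor, test image, and the omission of the JPEG and JBIG2 stages are assumptions for illustration.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        rng = np.random.default_rng(8)
        h, w, factor = 256, 256, 2

        # Smooth synthetic "medical" image: a low-pass field stands in for soft-tissue content.
        yy, xx = np.mgrid[0:h, 0:w]
        image = 128 + 60 * np.sin(xx / 23.0) * np.cos(yy / 31.0) + rng.normal(0, 2, size=(h, w))

        # CSI-style subsampling: keep every `factor`-th pixel, which is what gets JPEG-coded.
        sub = image[::factor, ::factor]

        # Reconstruction with a bicubic spline fitted to the subsampled grid.
        spline = RectBivariateSpline(np.arange(0, h, factor), np.arange(0, w, factor), sub, kx=3, ky=3)
        recon = spline(np.arange(h), np.arange(w))

        residual = image - recon      # in the paper, residual bit-planes are compensated separately
        print("subsampling ratio:", image.size / sub.size,
              "| residual RMS:", round(float(residual.std()), 2))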

  8. The Cyborg Astrobiologist: matching of prior textures by image compression for geological mapping and novelty detection

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.

    2014-07-01

    We describe an image-comparison technique of Heidemann and Ritter (2008a, b), which uses image compression, and is capable of: (i) detecting novel textures in a series of images, as well as of: (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. The current system is not directly intended for mapping and novelty detection of a second field site based on image-compression analysis of an image database from a first field site, although our current system could be further developed towards this end. Furthermore, the image-comparison technique is an unsupervised technique that is not capable of directly classifying an image as containing a particular geological feature; labelling of such geological features is done post facto by human geologists associated with this study, for the purpose of analysing the system's performance. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy

  9. On independent color space transformations for the compression of CMYK images.

    PubMed

    de Queiroz, R L

    1999-01-01

    Device and image-independent color space transformations for the compression of CMYK images were studied. A new transformation (to a YYCC color space) was developed and compared to known ones. Several tests were conducted leading to interesting conclusions. Among them, color transformations are not always advantageous over independent compression of CMYK color planes. Another interesting conclusion is that chrominance subsampling is rarely advantageous in this context. Also, it is shown that transformation to YYCC consistently outperforms the transformation to YCbCrK, while being competitive with the image-dependent KLT-based approach. PMID:18267416

  10. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  11. Evaluation of algorithms for lossless compression of continuous-tone images

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.

    2002-01-01

    Lossless image compression algorithms for continuous-tone images have received a great deal of attention in recent years. However, reports on benchmarking their performance have been limited. In this paper, we present a comparative study of the following algorithms: UNIX compress, gzip, LZW, Group 3, Group 4, JBIG, old lossless JPEG, JPEG-LS based on LOCO, CALIC, FELICS, S + P transform, and PNG. The test images consist of two sets of eight bits/pixel continuous-tone images: one set contains nine pictorial images, and another set contains eight document images, obtained from the standard set of CCITT images that were scanned and printed using eight bits/pixel at 200 dpi. In cases where the algorithm under consideration could only be applied to binary data, the bitplanes of the gray scale image were decomposed, with and without Gray encoding, and the compression was applied to individual bit planes. The results show that the best compression is obtained using the CALIC and JPEG-LS algorithms.

  12. Incidental findings in emergency imaging: frequency, recommendations, and compliance with consensus guidelines.

    PubMed

    Hanna, Tarek N; Shekhani, Haris; Zygmont, Matthew E; Kerchberger, James Matthew; Johnson, Jamlik-Omari

    2016-04-01

    The purpose of this study was to evaluate the frequency of incidental findings (IFs) in emergency department (ED) imaging reports and evaluate the adherence of imaging recommendations to consensus societal guidelines for IFs. A retrospective review of consecutive ED computed tomography (CT) and ultrasonography (US) reports from two university-affiliated EDs over a 2-month period was performed. Each imaging report was reviewed in its entirety, and incidental findings were documented along with recommendations for additional imaging. Imaging recommendations were compared to published societal guidelines from the American College of Radiology (ACR) and Fleischner Society. Three thousand one hundred thirty-one total cases consisting of 1967 CTs and 1164 US contained 514 incidental findings (16.4 %), with 329 CT IFs (64 %) and 185 US IFs (36 %). The ovary was the most common organ for an IF (n = 214, 42 %). Of all IFs, 347 (67.5 %) recommendations were concordant with societal guidelines and 167 (32.5 %) were discordant. 39.8 % of CT recommendations were discordant, while 19.5 % of US recommendations were discordant (p < 0.0001). Incidental findings are commonly encountered in the emergent setting. Variable adherence to societal guidelines is noted. Targeted radiologist education and technological solutions may decrease rates of discordance. PMID:26842832

  13. The Cyborg Astrobiologist: Image Compression for Geological Mapping and Novelty Detection

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.

    2013-09-01

    We describe an image-comparison technique of Heidemann and Ritter [4,5] that uses image compression, and is capable of: (i) detecting novel textures in a series of images, as well as of: (ii) alerting the user to the similarity of a new image to a previously-observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coalbeds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving a 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (a 64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy to robotic planetary rovers, and in assisting human astronauts in their geological exploration.

  14. Performance analysis of reversible image compression techniques for high-resolution digital teleradiology.

    PubMed

    Kuduvalli, G R; Rangayyan, R M

    1992-01-01

    The performances of a number of block-based, reversible, compression algorithms suitable for compression of very-large-format images (4096x4096 pixels or more) are compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using discrete Fourier-, discrete cosine-, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. The performances of these coding techniques for a few mammograms and chest radiographs digitized to sizes up to 4096x4096 10 b pixels are discussed. Compression from 10 b to 2.5-3.0 b/pixel on these images has been achieved without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation. PMID:18222885

  15. Lossless compression of hyperspectral images using C-DPCM-APL with reference bands selection

    NASA Astrophysics Data System (ADS)

    Wang, Keyan; Liao, Huilin; Li, Yunsong; Zhang, Shanshan; Wu, Xianyun

    2014-05-01

    The availability of hyperspectral images has increased in recent years; they are used in military and civilian applications such as target recognition, surveillance, geological mapping and environmental monitoring. Because of the large data volume and special importance of these images, lossless compression methods for hyperspectral images now exist that mainly exploit the strong spatial or spectral correlation. C-DPCM-APL is a method that achieves the highest lossless compression ratio on the CCSDS hyperspectral images acquired in 2006, but it consumes the longest processing time among existing lossless compression methods because it determines the optimal prediction length for each band. C-DPCM-APL obtains its best compression performance mainly by using an optimal prediction length, but it ignores the correlation between the reference bands and the current band, which is a crucial factor influencing the precision of prediction. Considering this, we propose a method that selects reference bands according to the atmospheric absorption characteristics of hyperspectral images. Experiments on the CCSDS 2006 image data set show that the proposed method greatly reduces the computational complexity without degrading the lossless compression performance when compared to C-DPCM-APL.

  16. Evaluating Texture Compression Masking Effects Using Objective Image Quality Assessment Metrics.

    PubMed

    Griffin, Wesley; Olano, Marc

    2015-08-01

    Texture compression is widely used in real-time rendering to reduce storage and bandwidth requirements. Recent research in compression algorithms has explored both reduced fixed bit rate and variable bit rate algorithms. The results are evaluated at the individual texture level using mean square error, peak signal-to-noise ratio, or visual image inspection. We argue this is the wrong evaluation approach. Compression artifacts in individual textures are likely visually masked in final rendered images and this masking is not accounted for when evaluating individual textures. This masking comes from both geometric mapping of textures onto models and the effects of combining different textures on the same model such as diffuse, gloss, and bump maps. We evaluate final rendered images using rigorous perceptual error metrics. Our method samples the space of viewpoints in a scene, renders the scene from each viewpoint using variations of compressed textures, and then compares each to a ground truth using uncompressed textures from the same viewpoint. We show that masking has a significant effect on final rendered image quality, masking effects and perceptual sensitivity to masking varies by the type of texture, graphics hardware compression algorithms are too conservative, and reduced bit rates are possible while maintaining final rendered image quality. PMID:26357259

  17. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    NASA Astrophysics Data System (ADS)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates the preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 (bits/pixel), 5.57 (bits/pixel), and 2.44 (bits/pixel) on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation-with-adaptive-prediction-length, which achieves best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computation complexity.

  18. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) by up to 10 dB.

  19. Comparison of image compression techniques for high quality based on properties of visual perception

    NASA Astrophysics Data System (ADS)

    Algazi, V. Ralph; Reed, Todd R.

    1991-12-01

    The growing interest and importance of high quality imaging has several roots: Imaging and graphics, or more broadly multimedia, as the predominant means of man-machine interaction on computers, and the rapid maturing of advanced television technology. Because of their economic importance, proposed advanced television standards are being discussed and evaluated for rapid adoption. These advanced standards are based on well known image compression techniques, used for very low bit rate video communications as well. In this paper, we examine the expected improvement in image quality that advanced television and imaging techniques should bring about. We then examine and discuss the data compression techniques which are commonly used, to determine if they are capable of providing the achievable gain in quality, and to assess some of their limitations. We also discuss briefly the potential of these techniques for very high quality imaging and display applications, which extend beyond the range of existing and proposed television standards.

  20. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, each image will be compressed and then reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  1. Method and apparatus for optical encoding with compressible imaging

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2006-01-01

    The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.

  2. Performance analysis of compression algorithms for noisy multispectral underwater images of small targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    1997-07-01

    Underwater (UW) imagery presents several challenging problems for the developer of automated target recognition (ATR) algorithms, due to the presence of noise, point-spread function (PSF) effects resulting from camera or media inhomogeneities, and loss of contrast and resolution due to in-water scattering and absorption. Additional problems include the effects of sensor noise upon lossy image compression transformations, which can produce feature aliasing in the reconstructed imagery. Low-distortion, high-compression image transformations have been developed that facilitate transmission along a low-bandwidth uplink of compressed imagery acquired by a UW vehicle to a surface processing or viewing station. In early research that employed visual pattern image coding and the recently-developed BLAST transform, compression ratios ranging from 6,500:1 to 16,500:1 were reported, based on prefiltered six-band multispectral imagery of resolution 720 X 480 pixels. The prefiltering step, which removes unwanted background objects, is key to achieving high compression. This paper contains an analysis of several common compression algorithms, together with BLAST, to determine the compression ratio, information loss, and computational efficiency achievable on a database of UW imagery. Information loss is derived from the modulation transfer function, as well as several measures of spatial complexity that have been reported in the literature. Algorithms are expressed in image algebra, a concise notation that rigorously unifies linear and nonlinear mathematics in the image domain and has been implemented on a variety of workstations and parallel processors. Thus, our algorithms are feasible, widely portable, and can be implemented on digital signal processors and fast parallel machines.

  3. Application Of Hadamard, Haar, And Hadamard-Haar Transformation To Image Coding And Bandwidth Compression

    NASA Astrophysics Data System (ADS)

    Choras, Ryszard S.

    1983-03-01

    The paper presents numerical techniques of transform image coding for image bandwidth compression. Unitary transformations called the Hadamard, Haar, and Hadamard-Haar transformations are defined and developed. The construction of the transformation matrices is described, and algorithms for computing the transformations and their inverses are presented. The considered transformations are applied to image processing, and their utility and effectiveness are compared with other discrete transforms on the basis of some standard performance criteria.
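
    A minimal sketch of the block-transform coding idea, assuming an orthonormal Hadamard matrix and a simple keep-the-largest-coefficients rule; this illustrates the general technique rather than the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N) / np.sqrt(N)           # orthonormal Hadamard matrix

rng = np.random.default_rng(1)
block = rng.random((N, N))             # stand-in for an 8x8 image block

coeffs = H @ block @ H.T               # separable 2-D Hadamard transform

# Bandwidth compression: retain only the k largest-magnitude coefficients.
k = 16
thresh = np.sort(np.abs(coeffs), axis=None)[-k]
kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

recon = H.T @ kept @ H                 # inverse transform (H is orthonormal)
print("MSE after keeping", k, "of", N * N, "coefficients:",
      float(np.mean((block - recon) ** 2)))
```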

  4. A mixed transform approach for efficient compression of medical images.

    PubMed

    Ramaswamy, A; Mikhael, W B

    1996-01-01

    A novel technique is presented to compress medical data employing two or more mutually nonorthogonal transforms. Both lossy and lossless compression implementations are considered. The signal is first resolved into subsignals such that each subsignal is compactly represented in a particular transform domain. An efficient lossy representation of the signal is achieved by superimposing the dominant coefficients corresponding to each subsignal. The residual error, which is the difference between the original signal and the reconstructed signal, is properly formulated. Adaptive algorithms in conjunction with an optimization strategy are developed to minimize this error. Both two-dimensional (2-D) and three-dimensional (3-D) approaches for the technique are developed. It is shown that for a given number of retained coefficients, the discrete cosine transform (DCT)-Walsh mixed transform representation yields a more compact representation than using DCT or Walsh alone. This lossy technique is further extended to the lossless case. The coefficients are quantized and the signal is reconstructed. The resulting reconstructed signal samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted employing a lossless technique such as Huffman coding. It is shown that for a given number of retained coefficients, the mixed transforms again produce a smaller rms modified residual error. The first-order entropy of the error is also smaller for the mixed-transforms technique than for the DCT, thus resulting in shorter Huffman codes. PMID:18215915
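
    The following is a hedged sketch of the mixed-transform idea for a 1-D signal: dominant coefficients are picked greedily from either a DCT or a Walsh-Hadamard dictionary, whichever best explains the current residual. The greedy rule and the toy signal are illustrative assumptions, not the adaptive optimization strategy of the paper.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import hadamard

N = 64
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(N))      # toy, smoothly varying 1-D signal

W = hadamard(N) / np.sqrt(N)               # orthonormal Walsh-Hadamard basis

budget = 12                                # total retained coefficients
c_dct, c_walsh = np.zeros(N), np.zeros(N)
residual = x.copy()

for _ in range(budget):
    cd = dct(residual, norm="ortho")
    cw = W @ residual
    # Keep the single coefficient (from either domain) with the largest magnitude.
    if np.max(np.abs(cd)) >= np.max(np.abs(cw)):
        i = int(np.argmax(np.abs(cd)))
        c_dct[i] += cd[i]
    else:
        i = int(np.argmax(np.abs(cw)))
        c_walsh[i] += cw[i]
    residual = x - (idct(c_dct, norm="ortho") + W.T @ c_walsh)

print("relative rms error:", np.linalg.norm(residual) / np.linalg.norm(x))
```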

  5. Compression of color facial images using feature correction two-stage vector quantization.

    PubMed

    Huang, J; Wang, Y

    1999-01-01

    A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128 x 128 24-b color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly using a first-order Huffman coder, this size is further reduced to about 450 bytes. PMID:18262869

  6. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Liu, Zhengjun; Liu, Shutian

    2015-12-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing the error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are simultaneously completed without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal.

  7. Correlation modeling for compression of computed tomography images.

    PubMed

    Munoz-Gomez, Juan; Bartrina-Rapesta, Joan; Marcellin, Michael W; Serra-Sagristà, Joan

    2013-09-01

    Computed tomography (CT) is a noninvasive medical test obtained via a series of X-ray exposures resulting in 3-D images that aid medical diagnosis. Previous approaches for coding such 3-D images propose to employ multicomponent transforms to exploit correlation among CT slices, but these approaches do not always improve coding performance with respect to a simpler slice-by-slice coding approach. In this paper, we propose a novel analysis which accurately predicts when the use of a multicomponent transform is profitable. This analysis models the correlation coefficient r based on image acquisition parameters readily available at acquisition time. Extensive experimental results from multiple image sensors suggest that multicomponent transforms are appropriate for images with correlation coefficient r in excess of 0.87. PMID:25055372
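
    The abstract's decision rule can be illustrated with a short sketch. Note that the paper predicts the correlation coefficient r from acquisition parameters available at scan time; here r is simply estimated from a toy volume to show how the 0.87 threshold would be applied.

```python
import numpy as np

def mean_adjacent_correlation(volume):
    """volume: 3-D array of shape (slices, rows, cols)."""
    rs = [np.corrcoef(a.ravel(), b.ravel())[0, 1]
          for a, b in zip(volume[:-1], volume[1:])]
    return float(np.mean(rs))

rng = np.random.default_rng(3)
base = rng.random((256, 256))
# Toy volume whose slices are noisy copies of one another, hence highly correlated.
volume = np.stack([base + 0.05 * rng.standard_normal((256, 256)) for _ in range(16)])

r = mean_adjacent_correlation(volume)
use_multicomponent = r > 0.87     # threshold suggested by the paper's experiments
print(f"mean adjacent-slice r = {r:.3f}, use multicomponent transform: {use_multicomponent}")
```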

  8. Geostatistical analysis of Landsat-TM lossy compression images in a high-performance computing environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Ana; Serral, Ivette; Pons, Xavier

    2011-11-01

    The main goal of this study is to characterize the effects of lossy image compression procedures on the spatial patterns of remotely sensed images, as well as to test the performance of job distribution tools specifically designed for obtaining geostatistical parameters (variogram) in a High Performance Computing (HPC) environment. To this purpose, radiometrically and geometrically corrected Landsat-5 TM images from April, July, August and September 2006 were compressed using two different methods: Band-Independent Fixed-Rate (BIFR) and three-dimensional Discrete Wavelet Transform (3d-DWT) applied to the JPEG 2000 standard. For both methods, a wide range of compression ratios (2.5:1, 5:1, 10:1, 50:1, 100:1, 200:1 and 400:1, from soft to hard compression) were compared. Variogram analyses conclude that all compression ratios maintain the variogram shapes and that the higher ratios (more than 100:1) reduce the variance in the sill parameter by about 5%. Moreover, the parallel solution in a distributed environment demonstrates that HPC offers a suitable scientific test bed for time-demanding execution processes, as in geostatistical analyses of remote sensing images.

  9. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
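
    A hedged sketch of the quadtree partitioning step is given below. The paper drives the split decision with the local fractal dimension (LFD); here local variance stands in as the complexity measure purely for illustration, and the thresholds are arbitrary toy values.

```python
import numpy as np

def quadtree_blocks(subband, top_left=(0, 0), var_thresh=0.01, min_size=4):
    """Return a list of (row, col, size) blocks covering a square subband."""
    r0, c0 = top_left
    size = subband.shape[0]
    if size <= min_size or np.var(subband) <= var_thresh:
        return [(r0, c0, size)]          # homogeneous enough: keep as one block
    h = size // 2
    blocks = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        blocks += quadtree_blocks(subband[dr:dr + h, dc:dc + h],
                                  (r0 + dr, c0 + dc), var_thresh, min_size)
    return blocks

rng = np.random.default_rng(4)
band = rng.standard_normal((32, 32)) * np.linspace(0, 1, 32)   # busier on the right
blocks = quadtree_blocks(band)
print(len(blocks), "variable-size blocks; each would then be vector quantized")
```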

  10. Compression of compound images and video for enabling rich media in embedded systems

    NASA Astrophysics Data System (ADS)

    Said, Amir

    2004-01-01

    It is possible to improve the features supported by devices with embedded systems by increasing the processor computing power, but this always results in higher costs, complexity, and power consumption. An interesting alternative is to use the growing networking infrastructures to do remote processing and visualization, with the embedded system mainly responsible for communications and user interaction. This enables devices to appear much more "intelligent" to users, at very low cost and power. In this article we explain how compression can make some of these solutions more bandwidth-efficient, enabling devices to simply decompress very rich graphical information and user interfaces that had been rendered elsewhere. The mixture of natural images and video with text, graphics, and animations simultaneously in the same frame is called compound video. We present a new method for compression of compound images and video, which is able to efficiently identify the different components during compression and use an appropriate coding method for each. Our system uses lossless compression for graphics and text, and, on natural images and highly detailed parts, it uses lossy compression with dynamically varying quality. Because it was designed for embedded systems with very limited resources, it has a small executable size and low complexity for classification, compression, and decompression. Other compression methods (e.g., MPEG) can do the same, but are very inefficient for compound content. High-level graphics languages can be bandwidth-efficient, but are much less reliable (e.g., in supporting Asian fonts), and are many orders of magnitude more complex. Numerical tests show the very significant gains in compression achieved by these systems.

  11. Adaptive constructive neural networks using Hermite polynomials for compression of still and moving images

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar; Azimi-Sadjadi, Mahmood R.

    2002-03-01

    Compression of digital images has been a very important subject of research for several decades, and a vast number of techniques have been proposed. In particular, the possibility of image compression using Neural Networks (NNs) has been considered by many researchers in recent years, and several Feed-forward Neural Networks (FNNs) have been proposed with reported promising experimental results. The constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) is one such architecture. At previous SPIE conferences, we proposed a new constructive OHL-FNN using Hermite polynomials for regression and recognition problems, and good experimental results were demonstrated. In this paper, we first modify and then apply our proposed OHL-FNN to compress still and moving images, and investigate its performance in terms of both training and generalization capabilities. Extensive experimental results for still images (Lena, Lake, and Girl) and moving images (a football game) are presented. It is revealed that the performance of the constructive OHL-FNN using Hermite polynomials is quite good for both still and moving image compression.

  12. Texture- and multiple-template-based algorithm for lossless compression of error-diffused images.

    PubMed

    Huang, Yong-Huai; Chung, Kuo-Liang

    2007-05-01

    Recently, several efficient context-based arithmetic coding algorithms have been developed successfully for lossless compression of error-diffused images. In this paper, we first present a novel block- and texture-based approach to train the multiple-template according to the most representative texture features. Based on the trained multiple-template, we next present an efficient texture- and multiple-template-based (TM-based) algorithm for lossless compression of error-diffused images. In our proposed TM-based algorithm, the input image is divided into many blocks, and for each block the best template is adaptively selected from the multiple-template based on the texture feature of that block. Using 20 test error-diffused images on a personal computer with an Intel Celeron 2.8-GHz CPU, experimental results demonstrate that, with a small average encoding time degradation of 0.365 s (0.901 s), the compression improvement ratio of our proposed TM-based algorithm over the joint bilevel image group (JBIG) standard (over the previous block arithmetic coding for image compression (BACIC) algorithm proposed by Reavy and Boncelet) is 24% (19.4%). Under the same conditions, the compression improvement ratio of our proposed algorithm over the previous algorithm by Lee and Park is 17.6%, again with only a small encoding time degradation (0.775 s on average). In addition, the encoding time required by the previous free tree-based algorithm is 109.131 s on average, while our proposed algorithm takes 0.995 s; the average compression ratio of our proposed TM-based algorithm, 1.60, is quite competitive with that of the free tree-based algorithm, 1.62. PMID:17491457

  13. Assessment of low-contrast detectability for compressed digital chest images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.

    1994-04-01

    The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer. From that data, simulated lesions were introduced into 100 normal CR chest films, and forced-choice observer performance studies were performed. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, where the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for SF and uncompressed CR systems were statistically equivalent. The slope of the CD curve for each was - 1.0 as predicted by the Rose model. There was a significant degradation in detection found for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs. Detectability was reduced for all compressed CR radiographs.

  14. Reduction of blocking effects for the JPEG baseline image compression standard

    NASA Technical Reports Server (NTRS)

    Zweigle, Gregary C.; Bamberger, Roberto H.

    1992-01-01

    Transform coding has been chosen for still image compression in the Joint Photographic Experts Group (JPEG) standard. Although transform coding performs superior to many other image compression methods and has fast algorithms for implementation, it is limited by a blocking effect at low bit rates. The blocking effect is inherent in all nonoverlapping transforms. This paper presents a technique for reducing blocking while remaining compatible with the JPEG standard. Simulations show that the system results in subjective performance improvements, sacrificing only a marginal increase in bit rate.

  15. Consensus recommendations for a standardized Brain Tumor Imaging Protocol in clinical trials.

    PubMed

    Ellingson, Benjamin M; Bendszus, Martin; Boxerman, Jerrold; Barboriak, Daniel; Erickson, Bradley J; Smits, Marion; Nelson, Sarah J; Gerstner, Elizabeth; Alexander, Brian; Goldmacher, Gregory; Wick, Wolfgang; Vogelbaum, Michael; Weller, Michael; Galanis, Evanthia; Kalpathy-Cramer, Jayashree; Shankar, Lalitha; Jacobs, Paula; Pope, Whitney B; Yang, Dewen; Chung, Caroline; Knopp, Michael V; Cha, Soonme; van den Bent, Martin J; Chang, Susan; Yung, W K Al; Cloughesy, Timothy F; Wen, Patrick Y; Gilbert, Mark R

    2015-09-01

    A recent joint meeting was held on January 30, 2014, with the US Food and Drug Administration (FDA), National Cancer Institute (NCI), clinical scientists, imaging experts, pharmaceutical and biotech companies, clinical trials cooperative groups, and patient advocate groups to discuss imaging endpoints for clinical trials in glioblastoma. This workshop developed a set of priorities and action items including the creation of a standardized MRI protocol for multicenter studies. The current document outlines consensus recommendations for a standardized Brain Tumor Imaging Protocol (BTIP), along with the scientific and practical justifications for these recommendations, resulting from a series of discussions between various experts involved in aspects of neuro-oncology neuroimaging for clinical trials. The minimum recommended sequences include: (i) parameter-matched precontrast and postcontrast inversion recovery-prepared, isotropic 3D T1-weighted gradient-recalled echo; (ii) axial 2D T2-weighted turbo spin-echo acquired after contrast injection and before postcontrast 3D T1-weighted images to control timing of images after contrast administration; (iii) precontrast, axial 2D T2-weighted fluid-attenuated inversion recovery; and (iv) precontrast, axial 2D, 3-directional diffusion-weighted images. Recommended ranges of sequence parameters are provided for both 1.5 T and 3 T MR systems. PMID:26250565

  16. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-sized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
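
    A minimal sketch of the threshold-driven selection, assuming the "mixture" is simply a set of DCT coders that retain 1, 4, 16, or 64 coefficients per 8x8 block; the real MBC coder also vector quantizes the retained coefficients, which is omitted here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, rates=(1, 4, 16, 64), dist_thresh=1e-3):
    """Pick the cheapest DCT coder whose distortion falls below the threshold."""
    coeffs = dctn(block, norm="ortho")
    order = np.argsort(np.abs(coeffs), axis=None)[::-1]      # largest first
    for k in rates:                                          # candidate "coders"
        kept = np.zeros_like(coeffs)
        idx = np.unravel_index(order[:k], coeffs.shape)
        kept[idx] = coeffs[idx]
        recon = idctn(kept, norm="ortho")
        if np.mean((block - recon) ** 2) < dist_thresh:
            return k, recon
    return rates[-1], recon                                  # fall back to richest coder

rng = np.random.default_rng(5)
image = rng.random((64, 64))
total = 0
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        k, _ = code_block(image[r:r + 8, c:c + 8])
        total += k
print("average retained coefficients per 8x8 block:", total / 64)
```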

  17. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-01-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems. PMID:27004447

  18. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  19. Secured and progressive transmission of compressed images on the Internet: application to telemedicine

    NASA Astrophysics Data System (ADS)

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Ronsin, Joseph

    2004-12-01

    Within the framework of telemedicine, the volume of images calls first for efficient lossless compression methods for storing information. Furthermore, a multiresolution scheme including Region of Interest (ROI) processing is an important feature for remote access to medical images. Moreover, securing sensitive data (e.g., metadata from DICOM images) constitutes one more expected functionality: indeed, the loss of IP packets could have severe consequences for a given diagnosis. For this purpose, we present in this paper an original scalable image compression technique (the LAR method) used in association with a channel coding method based on the Mojette Transform, so that a hierarchical priority encoding system is elaborated. This system provides a solution for the secure transmission of medical images through low-bandwidth networks such as the Internet.

  20. Secured and progressive transmission of compressed images on the Internet: application to telemedicine

    NASA Astrophysics Data System (ADS)

    Babel, Marie; Parrein, Benoit; Deforges, Olivier; Normand, Nicolas; Guedon, Jean-Pierre; Ronsin, Joseph

    2005-01-01

    Within the framework of telemedicine, the volume of images calls first for efficient lossless compression methods for storing information. Furthermore, a multiresolution scheme including Region of Interest (ROI) processing is an important feature for remote access to medical images. Moreover, securing sensitive data (e.g., metadata from DICOM images) constitutes one more expected functionality: indeed, the loss of IP packets could have severe consequences for a given diagnosis. For this purpose, we present in this paper an original scalable image compression technique (the LAR method) used in association with a channel coding method based on the Mojette Transform, so that a hierarchical priority encoding system is elaborated. This system provides a solution for the secure transmission of medical images through low-bandwidth networks such as the Internet.

  1. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    PubMed Central

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-01-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems. PMID:27004447

  2. ASFNR recommendations for clinical performance of MR dynamic susceptibility contrast perfusion imaging of the brain.

    PubMed

    Welker, K; Boxerman, J; Kalnin, A; Kaufmann, T; Shiroishi, M; Wintermark, M

    2015-06-01

    MR perfusion imaging is becoming an increasingly common means of evaluating a variety of cerebral pathologies, including tumors and ischemia. In particular, there has been great interest in the use of MR perfusion imaging for both assessing brain tumor grade and for monitoring for tumor recurrence in previously treated patients. Of the various techniques devised for evaluating cerebral perfusion imaging, the dynamic susceptibility contrast method has been employed most widely among clinical MR imaging practitioners. However, when implementing DSC MR perfusion imaging in a contemporary radiology practice, a neuroradiologist is confronted with a large number of decisions. These include choices surrounding appropriate patient selection, scan-acquisition parameters, data-postprocessing methods, image interpretation, and reporting. Throughout the imaging literature, there is conflicting advice on these issues. In an effort to provide guidance to neuroradiologists struggling to implement DSC perfusion imaging in their MR imaging practice, the Clinical Practice Committee of the American Society of Functional Neuroradiology has provided the following recommendations. This guidance is based on review of the literature coupled with the practice experience of the authors. While the ASFNR acknowledges that alternate means of carrying out DSC perfusion imaging may yield clinically acceptable results, the following recommendations should provide a framework for achieving routine success in this complicated-but-rewarding aspect of neuroradiology MR imaging practice. PMID:25907520

  3. OARSI Clinical Trials Recommendations: Hand imaging in clinical trials in osteoarthritis.

    PubMed

    Hunter, D J; Arden, N; Cicuttini, F; Crema, M D; Dardzinski, B; Duryea, J; Guermazi, A; Haugen, I K; Kloppenburg, M; Maheu, E; Miller, C G; Martel-Pelletier, J; Ochoa-Albíztegui, R E; Pelletier, J-P; Peterfy, C; Roemer, F; Gold, G E

    2015-05-01

    Tremendous advances have occurred in our understanding of the pathogenesis of hand osteoarthritis (OA) and these are beginning to be applied to trials targeted at modification of the disease course. The purpose of this expert opinion, consensus driven exercise is to provide detail on how one might use and apply hand imaging assessments in disease modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, sequences artifacts); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, validity); recommendations for trials; and research recommendations. PMID:25952345

  4. Research on the principle and experimentation of optical compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2013-12-01

    The optical compressive spectral imaging method is a novel spectral imaging technique inspired by compressed sensing, which offers advantages such as a reduced acquisition data volume, snapshot imaging, and an increased signal-to-noise ratio. Because the sampling quality influences the ultimate imaging quality, researchers in previously reported imaging systems match the sampling interval to the modulation interval, but the reduced sampling rate leads to a loss of the original spectral resolution. To overcome this defect, the requirement that the sampling interval match the modulation interval is dropped, and the spectral channel number of the designed experimental device increases more than threefold compared with the previous method. An imaging experiment is carried out with the experimental setup, and the spectral datacube of the imaged target is reconstructed from the acquired compressed image using the two-step iterative shrinkage/thresholding algorithm. The experimental results indicate that the spectral channel number increases effectively and the reconstructed data remain high-fidelity. The images and spectral curves accurately reflect the spatial and spectral character of the target.
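
    As a rough illustration of the reconstruction step, the sketch below runs plain iterative shrinkage/thresholding (ISTA) on a random toy problem; it stands in for the two-step (TwIST) solver named in the abstract, and the sensing matrix and sparse signal are assumptions rather than the instrument's actual model.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, s = 256, 96, 10                      # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # compressive sensing matrix
y = A @ x_true                                   # compressed measurements

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of A^T A
lam = 0.01                                       # sparsity weight
x = np.zeros(n)
for _ in range(300):
    x = soft(x + step * A.T @ (y - A @ x), step * lam)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```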

  5. The Value of Radionuclide Bone Imaging in Defining Fresh Fractures Among Osteoporotic Vertebral Compression Fractures.

    PubMed

    Zhao, Quan-Ming; Gu, Xiao-Feng; Liu, Zhong-Tang; Cheng, Li

    2016-05-01

    Vertebral fractures are the most common osteoporotic fractures. To perform percutaneous vertebral body cement augmentation, it is essential to accurately identify the affected vertebrae. The study evaluated the role of radionuclide bone imaging in identifying fresh osteoporotic vertebral compression fractures. A prospective study of 39 patients with acute osteoporotic vertebral compression fractures was carried out. All patients underwent magnetic resonance imaging (MRI) and radionuclide bone imaging to determine if the fractures were fresh, followed by percutaneous kyphoplasty for the fresh fractures. The positive rate on radionuclide bone imaging was 92.1% (82/89), and the positive rate on MRI was 93.3% (83/89), with no statistically significant difference (P > 0.05). Eighty-one vertebrae had the same positive identification by both radionuclide bone imaging and MRI, and 5 of the same vertebrae were diagnosed negative by both techniques. One patient with positive radionuclide bone imaging was negative according to MRI, and 2 patients were entirely positive by MRI but negative by radionuclide bone imaging. A kappa test showed good consistency between the 2 methods for detecting the affected vertebrae (Kappa = 0.751, P < 0.01). Radionuclide bone imaging is as sensitive as MRI in the diagnosis of fresh osteoporotic vertebral compression fracture, making it an effective method for detecting affected vertebrae for percutaneous vertebroplasty. PMID:27159858

  6. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values, but does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section. Using local statistics based on the image, it is recovered. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and less computational complexity.

  7. Spaceborne multiview image compression based on adaptive disparity compensation with rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Li, Shigao; Su, Kehua; Jia, Liming

    2016-01-01

    Disparity compensation (DC) and transform coding are incorporated into a hybrid coding to reduce the code-rate of multiview images. However, occlusion and inaccurate disparity estimations (DE) impair the performance of DC, especially in spaceborne images. This paper proposes an adaptive disparity-compensation scheme for the compression of spaceborne multiview images, including stereo image pairs and three-line-scanner images. DC with an adaptive loop filter is used to remove redundancy between reference images and target images, and a wavelet-based coding method is used to encode reference images and residue images. In occlusion regions, the DC efficiency may be poor because no inter-view correlation exists. A rate-distortion optimization method is thus designed to select the best prediction mode for local regions. Experimental results show that the proposed scheme can provide significant coding gain compared with some other similar coding schemes, and the time complexity is also competitive.

  8. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications that require multiple images (e.g., facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature, but in order to use optical correlation they cannot be deployed independently and in a cascade manner, for two reasons. First, we cannot simply use these techniques in cascade without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because the correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using a BPOF optimized filter. The main idea of our approach consists of multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using this multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations have been conducted, and the obtained results corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.

  9. Lossless compression of medical images using Hilbert space-filling curves.

    PubMed

    Liang, Jan-Yie; Chen, Chih-Sheng; Huang, Chua-Huang; Liu, Li

    2008-04-01

    A Hilbert space-filling curve is a curve traversing the 2^n x 2^n two-dimensional space, visiting neighboring points consecutively without crossing itself. The application of Hilbert space-filling curves in image processing is to rearrange image pixels in order to enhance pixel locality. A computer program for the Hilbert space-filling curve ordering, generated from a tensor product formula, is used to rearrange pixels of medical images. We implement four lossless encoding schemes, run-length encoding, LZ77 coding, LZW coding, and Huffman coding, along with the Hilbert space-filling curve ordering. Combinations of these encoding schemes are also implemented to study the effectiveness of various compression methods. In addition, differential encoding is applied to the medical images to study the effect of a different image representation on the above encoding schemes. In the paper, we report the test results of compression ratio and performance evaluation. The experiments show that the pre-processing operation of differential encoding followed by the Hilbert space-filling curve ordering and the compression method of LZW coding followed by Huffman coding gives the best compression result. PMID:18248789
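
    A short sketch of the pixel-reordering step follows. The paper generates the ordering from a tensor product formula; the sketch instead uses a standard iterative index-to-coordinate conversion for the Hilbert curve (one of several equivalent constructions).

```python
import numpy as np

def d2xy(order, d):
    """Map index d along the Hilbert curve to (x, y) in a 2**order square grid."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 3                                 # an 8 x 8 image
n = 1 << order
rng = np.random.default_rng(7)
img = rng.integers(0, 256, (n, n), dtype=np.uint8)

# Scanning pixels along the curve keeps spatial neighbours close together,
# which helps the run-length / LZ / Huffman stages that follow in the paper.
hilbert_scan = np.array([img[y, x] for x, y in (d2xy(order, d) for d in range(n * n))])
print(hilbert_scan.shape)
```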

  10. Adaptive compression of slowly varying images transmitted over Wireless Sensor Networks.

    PubMed

    Nikolakopoulos, George; Kandris, Dionisis; Tzes, Anthony

    2010-01-01

    In this article a scheme for image transmission over Wireless Sensor Networks (WSN) with an adaptive compression factor is introduced. The proposed control architecture affects the quality of the transmitted images according to: (a) the traffic load within the network and (b) the level of detail contained in an image frame. Given an approximate transmission period, the adaptive compression mechanism applies Quad Tree Decomposition (QTD) with a varying decomposition compression factor based on a gradient adaptive approach. For the initialization of the proposed control scheme, the desired a priori maximum bound for the transmission time delay is set, while a tradeoff between the quality of the decomposed image frame and the time needed for completing the transmission of the frame should be taken into consideration. Based on the proposed control mechanism, the quality of the slowly varying transmitted image frames is adaptively varied based on the measured transmission time delay. The efficacy of the adaptive compression control scheme is validated through extensive experimental results. PMID:22163598

  11. Image data compression and its effect on the accuracy of fringe-based images for 3-D gauging using a phase stepping method

    NASA Astrophysics Data System (ADS)

    Harvey, D. M.; Arshad, N. M.; Hobson, C. A.

    2001-04-01

    This paper examines the effects of data compression on fringe images. Using the JPEG still image compression method, comparisons are first made of the errors introduced in a standard test image and in fringe images. The work shows that at compression levels of 6:1 a 512×512×8 bit fringe image can be reduced in size to allow a CCD digital camera to be directly connected for image input to the parallel port of a PC. The errors introduced into angular and smooth fringe images by the compression and decompression process are small, 0.06% and 0.14%, respectively. This enabled successful fringe analysis by a phase stepping system, with compression levels up to 16:1 using JPEG, before any significant artefacts were introduced into the processed images.
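
    For context, phase-stepping analysis itself can be summarized with the standard four-step formula; the sketch below is a generic textbook illustration under assumed sinusoidal fringes, not the specific gauging system evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
rows, cols = 128, 128
# Assumed ground-truth phase map: a linear carrier plus a little noise.
phase_true = (2 * np.pi * np.linspace(0, 4, cols)[None, :]
              + 0.3 * rng.standard_normal((rows, cols)))

def fringe(step):
    """I_k = A + B * cos(phase + k * pi / 2) for step k = 0..3."""
    return 1.0 + 0.8 * np.cos(phase_true + step * np.pi / 2)

I0, I1, I2, I3 = (fringe(k) for k in range(4))

# Four-step phase-stepping formula: phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
phase_wrapped = np.arctan2(I3 - I1, I0 - I2)
print(float(phase_wrapped.min()), float(phase_wrapped.max()))
```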

  12. Imaging evidence and recommendations for traumatic brain injury: conventional neuroimaging techniques.

    PubMed

    Wintermark, Max; Sanelli, Pina C; Anzai, Yoshimi; Tsiouris, A John; Whitlow, Christopher T

    2015-02-01

    Imaging plays an essential role in identifying intracranial injury in patients with traumatic brain injury (TBI). The goals of imaging include (1) detecting injuries that may require immediate surgical or procedural intervention, (2) detecting injuries that may benefit from early medical therapy or vigilant neurologic supervision, and (3) determining the prognosis of patients to tailor rehabilitative therapy or help with family counseling and discharge planning. In this article, the authors perform a review of the evidence on the utility of various imaging techniques in patients presenting with TBI to provide guidance for evidence-based, clinical imaging protocols. The intent of this article is to suggest practical imaging recommendations for patients presenting with TBI across different practice settings and to simultaneously provide the rationale and background evidence supporting their use. These recommendations should ultimately assist referring physicians faced with the task of ordering appropriate imaging tests in particular patients with TBI for whom they are providing care. These recommendations should also help radiologists advise their clinical colleagues on appropriate imaging utilization for patients with TBI. PMID:25456317

  13. Despeckling of medical ultrasound images using data and rate adaptive lossy compression.

    PubMed

    Gupta, Nikhil; Swamy, M N S; Plotkin, Eugene

    2005-06-01

    A novel technique for despeckling the medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of the signal to noise ratio and the visual quality. PMID:15957598

  14. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation with the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
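
    As a hedged illustration of coding geometrically decaying residuals without stored tables, the sketch below uses Golomb-Rice codes, a common practical stand-in for the optimal infinite-alphabet codes discussed in the abstract; it is not the authors' exact construction.

```python
def zigzag(e):
    """Map a signed residual to a non-negative integer (0, -1, 1, -2, 2, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Encode a non-negative integer: unary quotient, then a k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

residuals = [0, -1, 3, 2, -4, 0, 1]     # toy prediction errors, roughly Laplacian
k = 1                                   # Rice parameter, normally estimated per context
bitstream = "".join(rice_encode(zigzag(e), k) for e in residuals)
print(bitstream, f"({len(bitstream)} bits for {len(residuals)} residuals)")
```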

  15. Low complexity DCT engine for image and video compression

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Ouerhani, Yousri; Alfalou, Ayman

    2013-02-01

    In this paper, we define a low-complexity 2D-DCT architecture that transforms spatial pixels to spectral pixels while taking into account the constraints of the considered compression standard. Indeed, this work is our first attempt to obtain one reconfigurable multistandard DCT. Thanks to our new matrix decomposition, we could define one common 2D-DCT architecture. The constant multipliers can be configured to handle the case of RealDCT and/or IntDCT (multiplication by 2). Our optimized algorithm not only provides a reduction of computational complexity, but also leads to a scalable pipelined design in systolic arrays. Indeed, the 8 × 8 StdDCT can be computed by using the 4 × 4 StdDCT, which in turn can be obtained by calculating the 2 × 2 StdDCT. Besides, the proposed structure can be extended to deal with larger values of N (i.e., 16 × 16 and 32 × 32). The performance of the proposed architecture is better when compared with conventional designs. In particular, for N = 4, it is found that the proposed design has nearly one-third the area-time complexity of existing DCT structures. This gain is expected to be higher for greater sizes of 2D-DCT.
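
    The property such engines exploit is the separability of the 2-D DCT (rows, then columns); the short check below illustrates it, while the paper's specific recursive 8-to-4-to-2 matrix decomposition is not reproduced here.

```python
import numpy as np
from scipy.fft import dct, dctn

def dct2_separable(block):
    """2-D DCT computed as two passes of the 1-D DCT (columns, then rows)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(9)
block = rng.random((8, 8))

assert np.allclose(dct2_separable(block), dctn(block, norm="ortho"))
print("row-column 1-D DCT passes match the direct 2-D DCT")
```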

  16. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  17. Development of an Image Compression and Authentication Module for video surveillance systems

    SciTech Connect

    Hale, W.R.; Johnson, C.S.; DeKeyser, P.

    1995-07-01

    An Image Compression and Authentication Module (ICAM) has been designed to perform the digitization, compression, and authentication of video images in a camera enclosure. The ICAM makes it possible to build video surveillance systems that protect the transmission and storage of video images. The ICAM functions with both NTSC 525 line and PAL 625 line cameras and contains a neuron chip (integrated circuit) permitting it to be interfaced with a local operating network which is part of the Modular Integrated Monitor System (MIMS). The MIMS can be used to send commands to the ICAM from a central controller or any sensor on the network. The ICAM is capable of working as a stand alone unit or it can be integrated into a network of other cameras. As a stand alone unit it sends its video images directly over a high speed serial digital link to a central controller for storage. A number of ICAMs can be multiplexed on a single coaxial cable. In this case, images are captured by each ICAM and held until the MIMS delivers commands for an individual image to be transmitted for review or storage. The ICAM can capture images on a time interval basis or upon receipt of a trigger signal from another sensor on the network. An ICAM which collects images based on other sensor signals forms the basis of an intelligent "front end" image collection system. The burden of image review associated with present video systems is reduced by only recording the images with significant action. The cards used in the ICAM can also be used to decompress and display the compressed images on a NTSC/PAL monitor.

  18. An LCD driver with on-chip frame buffer and 3 times image compression

    NASA Astrophysics Data System (ADS)

    Sung, Star; Baudia, Jacques

    2008-01-01

    An LCD driver with an on-chip frame buffer and a 3 times image compression codec reaching visually lossless image quality is presented. The frame buffer compression codec can encode and decode up to eight pixels in one clock cycle. Integrating a whole frame buffer with RGB=888 bits into the display driver sharply reduces the power dissipated between the IO pads and the PCB board, at a cost of a 50% increase in IC die area. The existing working chip (STE2102, a RAM-less LCD driver with a die size of 170mm x 12mm) is manufactured in an ST Micro 0.18μm high-voltage CMOS process. A new chip design with on-chip frame buffer SRAM and a 3 times compression codec supporting QVGA (320x240) is completed, which reduces the frame buffer SRAM density and area by a factor of ~3.0 and cuts the power consumption of the on-chip SRAM frame buffer by ~9.0 times, of which 3 times is contributed by a less capacitive bit-line load and another 3 times by the data rate reduction from image compression. The compression codec, having 25K gates in the encoder and 10K in the decoder, accepts both YUV and RGB color formats. An on-chip color-space-conversion unit converts the decompressed YUV components in 420, 422 and 444 formats to RGB format before driving out for display. The high image quality is achieved by applying some patented proprietary compression algorithms, including accurate prediction in DPCM, a Golomb-Rice-like VLC coding with an accurate predictive divider, and intelligent bit rate distribution control.

  19. Compressed sensing based on the improved wavelet transform for image processing

    NASA Astrophysics Data System (ADS)

    Pang, Peng; Gao, Wei; Song, Zongxi; XI, Jiang-bo

    2014-09-01

    Compressed sensing (CS) theory is a new sampling theory that can sample a signal at a rate below that required by traditional Nyquist sampling. It offers a revolutionary sampling and processing solution under the condition that the signal is sparse or compressible. This paper investigates how to improve CS theory and its application in imaging systems. Based on the properties of the wavelet transform sub-bands, an improved compressed sensing algorithm using a single-layer wavelet transform is proposed. Because most of the information is preserved in the low-pass subband after the wavelet transform, the improved algorithm measures only the low-pass wavelet coefficients of the image while preserving the high-pass wavelet coefficients. The signal can then be reconstructed exactly by using appropriate reconstruction algorithms. The reconstruction algorithm is the key point on which most researchers focus, and significant progress has been made. For the reconstruction, the orthogonal matching pursuit (OMP) algorithm is improved by increasing the number of iteration layers so that the low-pass wavelet coefficients can be recovered exactly from the measurements. The image is then reconstructed using the inverse wavelet transform. Compared with the original compressed sensing algorithm, simulation results demonstrate that the proposed algorithm reduces the amount of processed data, decreases the signal processing time noticeably, and improves the recovered image quality to some extent. The PSNR of the proposed algorithm is improved by about 2 to 3 dB. Experimental results show that the proposed algorithm exhibits superiority over other known CS reconstruction algorithms in the literature at the same measurement rates, while having a faster convergence speed.
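
    A minimal sketch of the baseline orthogonal matching pursuit (OMP) solver that the abstract improves upon is shown below; the random sensing matrix and sparse test signal are toy assumptions, not the paper's wavelet-domain setup.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedy OMP recovery of a sparse x from y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(10)
n, m, s = 128, 48, 5
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_hat = omp(A, A @ x_true, s)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```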

  20. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography.

    PubMed

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  1. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    NASA Astrophysics Data System (ADS)

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-10-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium.

  2. Signal reduction in fluorescence imaging using radio frequency-multiplexed excitation by compressed sensing

    NASA Astrophysics Data System (ADS)

    Chan, Antony C. S.; Lam, Edmund Y.; Tsia, Kevin K.

    2014-11-01

    Fluorescence imaging using radio frequency-multiplexed excitation (FIRE) has emerged to enable an order-of-magnitude higher frame rate than current technologies. Similar to all high-speed real-time imaging modalities, FIRE inherently generates massive image data continuously. While this technology entails high-throughput data sampling, processing, and storage in real time, strategies for data compression on the fly are also beneficial. We here report that it is feasible to exploit the radio frequency-multiplexed excitation scheme in FIRE to implement compressed sensing (CS) without any modification of the FIRE system. We numerically demonstrate that CS-FIRE can reduce the effective data rate by 95% without severe degradation of image quality.

  3. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    PubMed Central

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  4. Radiological image compression using error-free irreversible two-dimensional direct-cosine-transform coding techniques.

    PubMed

    Huang, H K; Lo, S C; Ho, B K; Lou, S L

    1987-05-01

    Some error-free and irreversible two-dimensional direct-cosine-transform (2D-DCT) coding, image-compression techniques applied to radiological images are discussed in this paper. Run-length coding and Huffman coding are described, and examples are given for error-free image compression. In the case of irreversible 2D-DCT coding, the block-quantization technique and the full-frame bit-allocation (FFBA) technique are described. Error-free image compression can achieve a compression ratio from 2:1 to 3:1, whereas the irreversible 2D-DCT coding compression technique can, in general, achieve a much higher acceptable compression ratio. The currently available block-quantization hardware may lead to visible block artifacts at certain compression ratios, but FFBA may be employed with the same or higher compression ratios without generating such artifacts. An even higher compression ratio can be achieved if the image is compressed by using first FFBA and then Huffman coding. The disadvantages of FFBA are that it is sensitive to sharp edges and no hardware is available. This paper also describes the design of the FFBA technique. PMID:3598750
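    As a rough illustration of the irreversible 2D-DCT coding discussed above, the sketch below transforms and uniformly quantizes each 8x8 block and then reconstructs it; the entropy-coding stages (run-length and Huffman coding) and the bit-allocation logic of FFBA are omitted, and the quantization step size is an arbitrary assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_block_codec(img, step=16.0, block=8):
    """Lossy 2D-DCT block coding sketch: transform, uniform quantization, inverse transform.
    Assumes image dimensions are multiples of the block size."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = img[i:i + block, j:j + block].astype(float)
            coeffs = dctn(blk, norm="ortho")
            q = np.round(coeffs / step)                       # lossy quantization
            out[i:i + block, j:j + block] = idctn(q * step, norm="ortho")
    return out

img = np.random.randint(0, 256, (64, 64)).astype(float)      # stand-in image
rec = dct_block_codec(img, step=16.0)
```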

  5. Application of Compressed Sensing to 2-D Ultrasonic Propagation Imaging System data

    SciTech Connect

    Mascarenas, David D.; Farrar, Charles R.; Chong, See Yenn; Lee, J.R.; Park, Gyu Hae; Flynn, Eric B.

    2012-06-29

    The Ultrasonic Propagation Imaging (UPI) System is a unique, non-contact, laser-based ultrasonic excitation and measurement system developed for structural health monitoring applications. The UPI system imparts laser-induced ultrasonic excitations at user-defined locations on a structure of interest. The response of these excitations is then measured by piezoelectric transducers. By using appropriate data reconstruction techniques, a time-evolving image of the response can be generated. A representative measurement of a plate might contain 800x800 spatial data measurement locations and each measurement location might be sampled at 500 instances in time. The result is a total of 640,000 measurement locations and 320,000,000 unique measurements. This is clearly a very large set of data to collect, store in memory and process. The value of these ultrasonic response images for structural health monitoring applications makes tackling these challenges worthwhile. Recently compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that by directly collecting a relatively small number of coefficients it is possible to reconstruct the original measurement. The coefficients are obtained from linear combinations of (what would have been the original direct) measurements. Often compressed sensing research is simulated by generating compressed coefficients from conventionally collected measurements. The simulation approach is necessary because the direct collection of compressed coefficients often requires compressed sensing analog front-ends that are currently not commercially available. The ability of the UPI system to make measurements at user-defined locations presents a unique capability on which compressed measurement techniques may be directly applied. The application of compressed sensing techniques on this data holds the potential to

  6. DSP accelerator for the wavelet compression/decompression of high- resolution images

    SciTech Connect

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  7. Research on the imaging of concrete defect based on the pulse compression technique

    NASA Astrophysics Data System (ADS)

    Li, Chang-Zheng; Zhang, Bi-Xing; Shi, Fang-Fang; Xie, Fu-Li

    2013-06-01

    When the synthetic aperture focusing technology (SAFT) is used for the detection of the concrete, the signal-to-noise ratio (SNR) and detection depth are not satisfactory. Therefore, the application of SAFT is usually limited. In this paper, we propose an improved SAFT technique for the detection of concrete based on the pulse compression technique used in the Radar domain. The proposed method first transmits a linear frequency modulation (LFM) signal, and then compresses the echo signal using the matched filtering method, after which a compressed signal with a narrower main lobe and higher SNR is obtained. With our improved SAFT, the compressed signals are manipulated in the imaging process and the image contrast is improved. Results show that the SNR is improved and the imaging resolution is guaranteed compared with the conventional short-pulse method. From theoretical and experimental results, we show that the proposed method can suppress noise and improve imaging contrast, and can also be used to detect multiple defects in concrete.
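    The matched-filtering step at the heart of the method can be illustrated with a few lines of NumPy. The sketch below is not the authors' implementation: the pulse length, bandwidth, sample rate, echo delay and noise level are all assumed values chosen only to show how an LFM echo is compressed into a narrow peak.

```python
import numpy as np

fs, T = 10e6, 100e-6                              # sample rate and pulse length (assumed)
B = 2e6                                           # chirp bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)       # LFM reference pulse

delay = 2000                                      # echo delay in samples (assumed)
echo = np.zeros(8192, dtype=complex)
echo[delay:delay + chirp.size] += 0.5 * chirp     # attenuated, delayed copy of the pulse
echo += 0.1 * (np.random.randn(echo.size) + 1j * np.random.randn(echo.size))  # noise

# Matched filter: convolve with the time-reversed, conjugated reference pulse.
compressed = np.convolve(echo, np.conj(chirp[::-1]), mode="same")
peak = int(np.argmax(np.abs(compressed)))         # narrow main lobe marks the reflector
```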

  8. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  9. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  10. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  11. Compression of binary images on a hypercube machine

    SciTech Connect

    Scheuermann, P.; Yaagoub, A. . Electrical Engineering and Computer Science); Ouksel, M.A. . IDS Dept.)

    1994-10-01

    The S-tree linear representation is an efficient structure for representing binary images which requires three bits for each disjoint binary region. The authors present parallel algorithms for encoding and decoding the S-tree representation from/onto a binary pixel array in a hypercube connected machine. Both the encoding and the decoding algorithms make use of a condensation procedure in order to produce the final result cooperatively. The encoding algorithm conceptually uses a pyramid configuration, where in each iteration half of the processors in the grid below it remain active. The decoding algorithm is based on the observation that each processor can independently decode a given binary region if it contains in its memory an S-tree segment augmented with a linear prefix. They analyze the algorithms in terms of processing and communication time and present results of experiments performed with real and randomly generated images that verify the theoretical results.

  12. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation. PMID:16764262

  13. Distributed Compressive Sensing of Hyperspectral Images Using Low Rank and Structure Similarity Property

    NASA Astrophysics Data System (ADS)

    Huang, Bingchao; Xu, Ke; Wan, Jianwei; Liu, Xu

    2015-11-01

    An efficient method and system for distributed compressive sensing of hyperspectral images is presented, which exploits the low-rank and structural-similarity properties of hyperspectral imagery. In this paper, by integrating the respective characteristics of DSC and CS, a distributed compressive sensing framework is proposed to simultaneously capture and compress hyperspectral images. At the encoder, every band image is measured independently, so almost all of the computational burden can be shifted to the decoder, resulting in a very low-complexity encoder that is simple to operate and easy to implement in hardware. At the decoder, each band image is reconstructed by total variation norm minimization. During the reconstruction of each band, the low-rank structure of the band images and the spectral structural similarity are used to construct new regularizers. By combining the new regularizers with the other regularizer, the spatial correlation, spectral correlation and spectral structural redundancy in hyperspectral imagery can be fully exploited. A numerical optimization algorithm based on the augmented Lagrangian multiplier method is also proposed to solve the reconstruction model. Experimental results show that this method can effectively improve the reconstruction quality of hyperspectral images.

  14. Prospective acceleration of diffusion tensor imaging with compressed sensing using adaptive dictionaries

    PubMed Central

    McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente

    2015-01-01

    Purpose Diffusion MRI requires acquisition of multiple diffusion‐weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion We developed a new k‐space sampling strategy for acquiring prospectively undersampled diffusion‐weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k‐space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248–258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363

  15. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    SciTech Connect

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  16. Application of permutation to lossless compression of multispectral thematic mapper images

    NASA Astrophysics Data System (ADS)

    Arnavut, Ziya; Narumalani, Sunil

    1996-12-01

    The goal of data compression is to find shorter representations for any given data. In a data storage application, this is done in order to save storage space on an auxiliary device or, in the case of a communication scenario, to increase channel throughput. Because remotely sensed data require tremendous amounts of transmission and storage space, it is essential to find good algorithms that utilize the spatial and spectral characteristics of these data to compress them. A new technique is presented that uses a spectral and spatial correlation to create orderly data for the compression of multispectral remote sensing data, such as those acquired by the Landsat Thematic Mapper (TM) sensor system. The method described simply compresses one of the bands using the standard Joint Photographic Experts Group (JPEG) compression, and then orders the next band's data with respect to the previous sorting permutation. Then, the move-to-front coding technique is used to lower the source entropy before actually encoding the data. Owing to the correlation between visible bands of TM images, it was observed that this method yields tremendous gain on these bands (on average 0.3 to 0.5 bits/pixel compared with lossless JPEG) and can be successfully used for multispectral images where the spectral distances between bands are close.
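    The move-to-front stage mentioned above is simple enough to show directly. The sketch below is a generic MTF encoder/decoder pair for byte-valued symbols (an illustration, not the paper's code); the alphabet size is an assumption.

```python
def mtf_encode(data, alphabet_size=256):
    """Move-to-front coding: recently seen symbols map to small indices."""
    table = list(range(alphabet_size))
    out = []
    for s in data:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))   # move the symbol to the front of the table
    return out

def mtf_decode(codes, alphabet_size=256):
    table = list(range(alphabet_size))
    out = []
    for i in codes:
        s = table[i]
        out.append(s)
        table.insert(0, table.pop(i))
    return out

codes = mtf_encode([10, 10, 10, 200, 200, 10])   # runs of a symbol become runs of zeros
assert mtf_decode(codes) == [10, 10, 10, 200, 200, 10]
```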

  17. Supporting image algebra in the Matlab programming language for compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Wilson, Joseph N.; Hayden, Eric T.

    2009-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision programs. The University of Florida has been associated with implementations supporting the languages FORTRAN, Ada, Lisp, and C++. The latter involved the implementation of a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the MatlabTM programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, this new implementation offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that the image algebra Matlab (IAM) library can employ versatile representations for the operands and operations of the algebra. In this paper, we first outline the purpose and structure of image algebra, then present IAM notation in relationship to the preceding (IAC++) implementation. We then provide examples to show how IAM is more convenient and more readily supports efficient algorithm development. Additionally, we show how image algebra and IAM can be employed in compression algorithm development and analysis.

  18. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  19. Comparative color space analysis of difference images from adjacent visible human slices for lossless compression

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Pipkin, Ryan; Mitra, Sunanda

    1997-10-01

    This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes, and alternative differencing methods. YIQ and HSV, also known as HSI, both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However the compression ratio did not improve by the transformation from RGB into the other color plane models, since in order to maintain lossless performance, YIQ and HSV both require more bits to store each pixel. This increase in file size is not offset by the increase in compression due to the higher correlation of the intensity value, the best performance being achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from single reference. The best method proved to be the first iteration of the cascaded difference from single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated. Then the difference between the neighbor and its next neighbor is calculated. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than the competing methods.
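    The cascaded-difference scheme that performed best can be sketched as follows. This is an illustrative reconstruction, not the study's code: the slices are random stand-ins, and a single-channel 8-bit representation is assumed.

```python
import numpy as np

def cascaded_encode(slices):
    """Store the reference slice plus the difference of each slice from its predecessor."""
    diffs = [slices[0].astype(np.int16)]
    for prev, cur in zip(slices, slices[1:]):
        diffs.append(cur.astype(np.int16) - prev.astype(np.int16))
    return diffs

def cascaded_decode(diffs):
    """Decoding must proceed in order from the single reference slice."""
    out = [diffs[0].astype(np.int16)]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return [x.astype(np.uint8) for x in out]

slices = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(3)]
decoded = cascaded_decode(cascaded_encode(slices))
assert all((a == b).all() for a, b in zip(slices, decoded))   # differencing is lossless
```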

  20. Multi-modal multi-fractal boundary encoding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2006-08-01

    The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 <= CR <= 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that cherished boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR >= 500:1 . Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smoothes a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value

  1. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, an active research topic in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as large data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without requiring prior knowledge and reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.

  2. A Statistical Model for Quantized AC Block DCT Coefficients in JPEG Compression and its Application to Detecting Potential Compression History in Bitmap Images

    NASA Astrophysics Data System (ADS)

    Narayanan, Gopal; Shi, Yun Qing

    We first develop a probability mass function (PMF) for quantized block discrete cosine transform (DCT) coefficients in JPEG compression using statistical analysis of quantization, with a Generalized Gaussian model being considered as the PDF for non-quantized block DCT coefficients. We subsequently propose a novel method to detect potential JPEG compression history in bitmap images using the PMF that has been developed. We show that this method outperforms a classical approach to compression history detection in terms of effectiveness. We also show that it detects history with both independent JPEG group (IJG) and custom quantization tables.

  3. A survey on palette reordering methods for improving the compression of color-indexed images.

    PubMed

    Pinho, Armando J; Neves, António J R

    2004-11-01

    Palette reordering is a well-known and very effective approach for improving the compression of color-indexed images. In this paper, we provide a survey of palette reordering methods, and we give experimental results comparing the ability of seven of them in improving the compression efficiency of JPEG-LS and lossless JPEG 2000. We concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding. Moreover, we found that the second most effective method is a modified version of Zeng's reordering technique, which was 3%-5% worse than pairwise merging, but much faster. PMID:15540450

  4. Programmable vision processor/controller for flexible implementation of current and future image compression standards

    SciTech Connect

    Bailey, D.; Cressa, M.; Fandrianto, J.; Neubauer, D.; Rainnie, H.K.J.; Chi-Shin Wang

    1992-10-01

    The image compression algorithm standardization process has been in motion for over five years. Due to the broad range of interests that gave input at the national and international levels, the three products of this effort, px64, JPEG, and MPEG, combine flexibility and quality. The standardization process also included a number of semiconductor companies interested in creating supporting products, which are now nearing completion. One of the first highly integrated products dedicated to video compression available from an IC manufacturer is IIT's Vision Processor/Controller. 5 figs., 1 tab.

  5. Optimizing Spectral Power Compression with respect to Inference Performance for Recognition of Tumor Patterns in Ultrasound Images

    PubMed Central

    Grunwald, Sorin; Neagoe, Victor-Emil

    2003-01-01

    Imaging modalities are widely used to explore and diagnose diseases. Feature extraction methods are used to quantitatively describe and identify objects of interest in acquired images, typically involving data compression. The extracted features are subject to clinical inference, whereby the compression ratio used for feature extraction can affect the inference performance. In this paper, a new method is introduced which allows for optimal data compression with respect to performance maximization of uncertain inference. The model introduced herein identifies objects of interest using selective data compression in the frequency domain. It quantifies the amount of information provided by the inference involving these objects, calculates the inference efficiency, and estimates its cost. By analyzing the effect of data compression on inference efficiency and cost, the method allows for the optimal selection of the compression ratio. The method is applied to prostate cancer diagnosis in ultrasound images. PMID:14728175

  6. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
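    The bit-plane encoding stage that follows the block transform can be illustrated independently of the transform itself. The sketch below emits magnitude bit-planes from most to least significant, so truncating the plane list yields a coarser but still valid reconstruction; it assumes integer coefficients whose magnitudes fit in the stated number of planes and omits the entropy coding used in the actual scheme.

```python
import numpy as np

def bitplane_encode(coeffs, n_planes=8):
    """Split integer coefficients into a sign map and MSB-first magnitude bit-planes.
    Assumes |coefficient| < 2**n_planes."""
    mags = np.abs(coeffs).astype(int)
    planes = [(mags >> p) & 1 for p in range(n_planes - 1, -1, -1)]   # MSB plane first
    return (coeffs < 0).astype(int), planes

def bitplane_decode(signs, planes, n_planes=8):
    """Rebuild coefficients; passing fewer planes gives a progressively coarser result."""
    mags = np.zeros_like(planes[0])
    for i, plane in enumerate(planes):
        mags = mags + (plane << (n_planes - 1 - i))
    return np.where(signs == 1, -mags, mags)

coeffs = np.array([[-37, 5], [120, -2]])
signs, planes = bitplane_encode(coeffs)
assert (bitplane_decode(signs, planes) == coeffs).all()
```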

  7. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e. low, medium and high frequency quantization noise, and extract the high frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

  8. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with minimum bit rate. The work deals with the ROI-based video compression of Czech sign language interpreter implemented to the x264 open source library. The results of this approach are verified in subjective tests with the deaf. They examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolution of image with interpreter and evaluate the subjective quality of the final image for a good viewing experience.

  9. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  10. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
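    For reference, the two objective criteria discussed above, MSE and the PSNR derived from it, can be written down in a few lines; this is a generic formulation for 8-bit grey-scale images, not tied to the survey itself.

```python
import numpy as np

def mse(ref, test):
    """Mean square error between a reference image and a reconstructed image."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit peak value of 255."""
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```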

  11. Kronecker compressive sensing-based mechanism with fully independent sampling dimensions for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Rongqiang; Wang, Qiang; Shen, Yi

    2015-11-01

    We propose a new approach for Kronecker compressive sensing of hyperspectral (HS) images, including the imaging mechanism and the corresponding reconstruction method. The proposed mechanism is able to compress the data of all dimensions when sampling, which can be achieved by three fully independent sampling devices. As a result, the mechanism greatly reduces the control points and memory requirement. In addition, we can also select the suitable sparsifying bases and generate the corresponding optimized sensing matrices or change the distribution of sampling ratio for each dimension independently according to different HS images. To work in concert with the mechanism, we combine the sparsity model and the low multilinear-rank model to develop a reconstruction method. Analysis shows that our reconstruction method has a lower computational complexity than the traditional methods based on the sparsity model. Simulations verify that the HS images can be reconstructed successfully with very few measurements. In summary, the proposed approach can reduce the complexity and improve the practicability for HS image compressive sensing.

  12. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    SciTech Connect

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-12-15

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
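    A multiple logistic regression of the kind described can be sketched as below. The feature values and labels are synthetic placeholders, only the two selected header features (compression ratio and section thickness) are used, and scikit-learn is assumed available; the sketch illustrates the modelling idea rather than the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
cr = rng.uniform(4, 10, 200)                     # compression ratio (DICOM tag 0028,2112)
thickness = rng.uniform(1, 5, 200)               # section thickness in mm (0018,0050)
X = np.column_stack([cr, thickness])
# Synthetic label: 1 = radiologists could distinguish compressed from original.
y = (cr / thickness + rng.normal(0, 1, 200) > 4).astype(int)

model = LogisticRegression().fit(X, y)
# Predicted probability of visible degradation for a new image (CR 8:1, 2 mm slices).
p = model.predict_proba(np.array([[8.0, 2.0]]))[0, 1]
```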

  13. Effect of noise and MTF on the compressibility of high-resolution color images

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Barry, Michael J.; Mathieu, Michael S.

    1990-06-01

    There are an increasing number of digital image processing systems that employ photographic image capture; that is, a color photographic negative or transparency is digitally scanned, compressed, and stored or transmitted for further use. To capture the information content that a photographic color negative is capable of delivering, it must be scanned at a pixel resolution of at least 50 pixels/mm. This type of high quality imagery presents certain problems and opportunities in image coding that are not present in lower resolution systems. Firstly, photographic granularity increases the entropy of a scanned negative, limiting the extent to which entropy encoding can compress the scanned record. Secondly, any MTF-related chemical enhancement that is incorporated into a film tends to reduce the pixel-to-pixel correlation that most compression schemes attempt to exploit. This study examines the effect of noise and MTF on the compressibility of scanned photographic images by establishing experimental information theoretic bounds. Images used for this study were corrupted with noise via a computer model of photographic grain and an MTF model of blur and chemical edge enhancement. The measured bounds are expressed in terms of the entropy of a variety of decomposed image records (e.g., DPCM predictor error) for a zeroth-order Markov-based entropy encoder, and for a context model used by the Q-coder. The results show that the entropy of the DPCM predictor error is 3-5 bits/pixel, illustrating a 2 bits/pixel difference between an ideal grain-free case and a grainy film case. This suggests that an ideal noise filtering algorithm could lower the bitrate by as much as 50%.
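    The DPCM-entropy measurement underlying these bounds can be illustrated with a simple previous-pixel predictor, as in the sketch below; the choice of predictor and the zeroth-order entropy estimate are assumptions made for illustration, not the exact decomposition used in the study.

```python
import numpy as np

def dpcm_error_entropy(img):
    """Zeroth-order entropy (bits/pixel) of the previous-pixel DPCM prediction error.
    Film grain widens the error histogram and therefore raises this figure."""
    err = np.diff(img.astype(np.int16), axis=1).ravel()   # horizontal previous-pixel predictor
    hist = np.bincount(err - err.min())                   # shift so counts start at zero
    p = hist[hist > 0] / err.size
    return float(-(p * np.log2(p)).sum())
```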

  14. Reference free quality metric using a region-based attention model for JPEG-2000 compressed images

    NASA Astrophysics Data System (ADS)

    Barland, Remi; Saadane, Abdelhakim

    2006-01-01

    At high compression ratios, current lossy compression algorithms introduce distortions that are generally exploited by no-reference quality assessment. For JPEG-2000 compressed images, the blurring and ringing effects are the main impairments perceived by a human observer. However, the human visual system does not carry out a systematic, local search for these impairments over the whole image; rather, it identifies some regions of interest for judging the perceptual quality. In this paper, we propose to use both of these distortions (ringing and blurring effects), locally weighted by an importance map generated by a region-based attention model, to design a new reference-free quality metric for JPEG-2000 compressed images. For the blurring effect, the impairment measure depends on spatial information contained in the whole image, while for the ringing effect only the local information around strong edges is used. To predict the regions in the scene that potentially attract human attention, one stage of the proposed metric consists of generating an importance map from the region-based attention model defined by Osberger et al [1]. First, explicit regions are obtained by color image segmentation. The segmented image is then analyzed with respect to different factors known to influence human attention. The resulting importance map is finally used to locally weight each distortion measure. The predicted scores have been compared, on the one hand, to the subjective scores and, on the other hand, to previous results based only on the artefact measurement. This comparative study demonstrates the efficiency of the proposed quality metric.

  15. Performance analysis of compressive ghost imaging based on different signal reconstruction techniques.

    PubMed

    Kang, Yan; Yao, Yin-Ping; Kang, Zhi-Hua; Ma, Lin; Zhang, Tong-Yi

    2015-06-01

    We present different signal reconstruction techniques for implementation of compressive ghost imaging (CGI). The different techniques are validated on the data collected from ghost imaging with the pseudothermal light experimental system. Experiment results show that the technique based on total variance minimization gives high-quality reconstruction of the imaging object with less time consumption. The different performances among these reconstruction techniques and their parameter settings are also analyzed. The conclusion thus offers valuable information to promote the implementation of CGI in real applications. PMID:26367039

  16. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M. ); Hopper, T. )

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  17. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  18. FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas

    1993-08-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  19. Cerebral magnetic resonance imaging of compressed air divers in diving accidents.

    PubMed

    Gao, G K; Wu, D; Yang, Y; Yu, T; Xue, J; Wang, X; Jiang, Y P

    2009-01-01

    To investigate the characteristics of cerebral magnetic resonance imaging (MRI) in compressed air divers involved in diving accidents, we conducted an observational case series study. Brain MRI scans of seven compressed air divers with cerebral arterial gas embolism (CAGE) were examined and analysed. The cerebral injuries showed several characteristics: (1) multiple lesions; (2) larger size; (3) the parietal and frontal lobes are most susceptible; (4) both cortical grey matter and subcortical white matter can be affected; (5) the cerebellum is also a target of air embolism. Brain MRI is a sensitive method for detecting cerebral lesions in compressed air divers involved in diving accidents. The MRI should be performed on divers involved in diving accidents within 5 days. PMID:19341126

  20. Preliminary results of SAR image compression using MatrixViewTM on coherent change detection (CCD) analysis

    NASA Astrophysics Data System (ADS)

    Gresko, Lawrence S.; Gorham, LeRoy A.; Thiagarajan, Arvind

    2012-05-01

    An investigation was made into the feasibility of compressing complex Synthetic Aperture Radar (SAR) images using MatrixViewTM compression technology to achieve higher compression ratios than previously achieved. Complex SAR images contain both amplitude and phase information that are severely degraded with traditional compression techniques. This phase and amplitude information allows interferometric analysis to detect minute changes between pairs of SAR images, but is highly sensitive to any degradation in image quality. This sensitivity provides a measure to compare capabilities of different compression technologies. The interferometric process of Coherent Change Detection (CCD) is acutely sensitive to any quality loss and, therefore, is a good measure by which to compare compression capabilities of different technologies. The best compression that could be achieved by block adaptive quantization (a classical compression approach) applied to a set of I and Q phased-history samples, was a Compression Ratio (CR) of 2x. Work by Novak and Frost [3] increased this CR to 3-4x using a more complex wavelet-based Set Partitioning In Hierarchical Trees (SPIHT) algorithm (similar in its core to JPEG 2000). In each evaluation as the CR increased, degradation occurred in the reconstituted image measured by the CCD image coherence. The maximum compression was determined at the point the CCD image coherence remained > 0.9. The same investigation approach using equivalent sample data sets was performed using an emerging technology and product called MatrixViewTM. This paper documents preliminary results of MatrixView's compression of an equivalent data set to demonstrate a CR of 10-12x with an equivalent CCD coherence level of >0.9: a 300-400% improvement over SPIHT.
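    The coherence figure used above as the quality criterion can be computed as the magnitude of the normalized complex cross-correlation between the two SAR images. The sketch below is a generic, whole-chip version of that measure (CCD products normally evaluate it over local windows); it is illustrative only and not tied to MatrixViewTM or SPIHT.

```python
import numpy as np

def coherence(img1, img2, eps=1e-12):
    """Magnitude of the normalized complex cross-correlation of two co-registered
    complex SAR images; values near 1 indicate that phase has been preserved."""
    num = np.abs(np.vdot(img1, img2))                      # |sum conj(img1) * img2|
    den = np.sqrt(np.vdot(img1, img1).real * np.vdot(img2, img2).real) + eps
    return float(num / den)
```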

  1. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  2. An infrared image super-resolution reconstruction method based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of the infrared detector and camera lens, infrared images often lack detail and appear visually indistinct. Their spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By combining an image degradation model, a difference-operation-based sparse transformation method, and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained by applying a difference operation to the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption is lower than with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.

  3. Development of a DMD-based compressive sampling hyperspectral imaging (CS-HSI) system

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao; Mirza, Iftekhar O.; Ye, Peng; Arce, Gonzalo R.; Prather, Dennis W.

    2011-03-01

    We report the development of a Digital-Micromirror-Device (DMD)-based Compressive Sampling Hyperspectral Imaging (CS-HSI) system. A DMD is used to implement CS measurement patterns, which modulate the intensity of optical images. The 3-dimensional (3-D) spatial/spectral data-cube of the original optical image is reconstructed from the CS measurements by solving a minimization problem. Two different solvers for the minimization problem were examined, including the GPSR (Gradient Projection for Sparse Reconstruction) and the TwIST (Two-step Iterative Shrinkage/Thresholding) methods. The performances of these two methods were tested and compared in terms of the image-reconstruction quality and the computer run-time. The image-formation process of the DMD-based spectral imaging system was analyzed using a Zemax model, based on which, an experimental prototype was built. We also present experimental results obtained from the prototype system.

  4. Faster techniques to evolve wavelet coefficients for better fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Shanavaz, K. T.; Mythili, P.

    2013-05-01

    In this article, techniques have been presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the coefficients performed much better than those reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature, and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was observed for other compression ratios (CR) and degraded images as well. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder. These coefficients performed well with other fingerprint databases as well.

  5. Multiple-image encryption based on compressive holography using a multiple-beam interferometer

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Wu, Fan; Yang, Jinghuan; Man, Tianlong

    2015-05-01

    Multiple-image encryption techniques not only improve the encryption capacity but also facilitate the transmission and storage of the ciphertext. We present a new method of multiple-image encryption based on compressive holography, with enhanced data security, using a multiple-beam interferometer. By modifying a Mach-Zehnder interferometer, multiple object beams are made to interfere with a single reference beam, encrypting multiple images simultaneously into one hologram. The original images, modulated with random phase masks, are placed at different positions and distances from the CCD camera. Each image acts as a secret key for the other images, realizing mutual encryption. A four-step phase-shifting technique is combined with the holographic recording. The recording is treated as a compressive sensing process, so decryption is formulated as a minimization problem, and the two-step iterative shrinkage/thresholding algorithm (TwIST) is employed to solve this optimization problem. Simulation results for encrypting multiple binary and grayscale images verify the validity and robustness of the proposed method.
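    As a hedged illustration of the four-step phase-shifting step mentioned above (not the authors' code), the standard combination of four intensity frames recorded at reference phases 0, π/2, π, and 3π/2 into a complex object term is:

      import numpy as np

      def four_step_phase_shift(I0, I90, I180, I270):
          """Combine four phase-shifted intensity recordings into O * conj(R); with a
          uniform reference field R this gives the object field up to a constant factor."""
          return ((I0 - I180) + 1j * (I90 - I270)) / 4.0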

  6. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates correlated spatial and spectral redundancy by using a DWT lifting scheme and reduces image complexity by applying an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, together with an enumerating DWT lifting scheme that, through image renormalization, fits images of any size. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to the coder/decoder. Simulation analysis indicates that the proposed method achieves high lossless image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV CPU at 2.20 GHz with 256 MB of RAM, the proposed coder is about 21 times faster than SPIHT, with an efficiency gain of roughly 166%, and the decoder is about 17 times faster than SPIHT, with an efficiency gain of roughly 128%.
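    The record does not specify which algebraic RGB transform is used. As a hedged example of such a lossless decorrelating transform, the sketch below implements the reversible color transform used in lossless JPEG2000; this is an assumption for illustration, not necessarily the authors' transform.

      import numpy as np

      def rct_forward(r, g, b):
          """Reversible color transform (integer, lossless); inputs must be signed
          integer arrays (e.g. np.int32) so the shifts act as floor division."""
          y = (r + 2 * g + b) >> 2          # floor((R + 2G + B) / 4)
          cb = b - g
          cr = r - g
          return y, cb, cr

      def rct_inverse(y, cb, cr):
          g = y - ((cb + cr) >> 2)          # exact inverse of the forward transform
          b = cb + g
          r = cr + g
          return r, g, b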

  7. Effects of Time-Compressed Narration and Representational Adjunct Images on Cued-Recall, Content Recognition, and Learner Satisfaction

    ERIC Educational Resources Information Center

    Ritzhaupt, Albert Dieter; Barron, Ann

    2008-01-01

    The purpose of this study was to investigate the effect of time-compressed narration and representational adjunct images on a learner's ability to recall and recognize information. The experiment was a 4 Audio Speeds (1.0 = normal vs. 1.5 = moderate vs. 2.0 = fast vs. 2.5 = fastest rate) x Adjunct Image (Image Present vs. Image Absent) factorial…

  8. Shock Compression Induced Hot Spots in Energetic Material Detected by Thermal Imaging Microscopy

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Wei; Dlott, Dana

    2014-06-01

    The chemical reaction of powder energetic materials is of great interest for energy and pyrotechnic applications because of the high reaction temperature. Under shock compression, the reaction occurs on a sub-microsecond to microsecond time scale and releases a large amount of energy. Experimental and theoretical progress has been made over the past decade toward characterizing this process, but knowledge of the energy release and temperature change remains limited owing to the difficulty of the required detection technologies. We have constructed a thermal imaging microscopy apparatus and studied the temperature change in energetic materials under long-wavelength infrared (LWIR) and ultrasound exposure. The apparatus is also capable of real-time detection of localized heating and energy concentration in composite materials. Recently, it was combined with our laser-driven flyer plate system to provide a lab-scale source of shock compression for energetic materials. A fast temperature rise in thermite particles induced by shock compression is directly observed by thermal imaging with 15-20 μm spatial resolution. The heating rate during shock loading is evaluated to be on the order of 10^9 K/s through direct measurement of the change in mid-wavelength infrared (MWIR) emission intensity. Preliminary results confirm that hot spots appear under shock compression of energetic crystals; the data and analysis will be discussed in further detail. M.-W. Chen, S. You, K. S. Suslick, and D. D. Dlott, Rev. Sci. Instrum. 85, 023705 (2014); M.-W. Chen, S. You, K. S. Suslick, and D. D. Dlott, Appl. Phys. Lett. 104, 061907 (2014); K. E. Brown, W. L. Shaw, X. Zheng, and D. D. Dlott, Rev. Sci. Instrum. 83, 103901 (2012).

  9. Clustered DPCM with removing noise spectra for the lossless compression of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Xu, Jianglei

    2013-10-01

    The clustered DPCM (C-DPCM) lossless compression method of Jarno et al. for hyperspectral images achieves good compression performance. It comprises three components: clustering, prediction, and coding. In the prediction stage, a multiple linear regression model is solved for each cluster in every band. Because the effect of noise spectra is not considered, there is still room for improvement. This paper proposes a C-DPCM method with removal of noise spectra (C-DPCM-RNS) for the lossless compression of hyperspectral images. The prediction stage of C-DPCM-RNS consists of two training passes. The prediction coefficients obtained from the first pass are used in the linear predictor to compute the predicted values, and hence the differences between the original and predicted values, in the current band of the current class. Only the non-noise spectra are used in the second pass, and the resulting prediction coefficients are used for prediction and sent to the decoder. The two training passes remove part of the interference from noise spectra and achieve better compression than other regression-prediction-based methods.
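    As a hedged sketch of the regression-based prediction at the core of C-DPCM (placeholder variable names, not the authors' implementation), the following fits, for one cluster, a linear predictor of the current band from the preceding bands and forms the prediction residual:

      import numpy as np

      def predict_band(prev_bands, cur_band):
          """prev_bands: (n_pixels, n_prev) values of earlier bands for one cluster;
          cur_band: (n_pixels,) values of the current band.
          Returns (coefficients, residual) of a least-squares linear predictor."""
          X = np.column_stack([prev_bands, np.ones(len(cur_band))])  # include a bias term
          coeffs, *_ = np.linalg.lstsq(X, cur_band, rcond=None)
          residual = cur_band - X @ coeffs
          return coeffs, residual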

  10. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography performs computational image estimation from data captured by an electronic focal plane array, and compressive sensing enables accurate reconstruction using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posedness by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse-object imaging. In diffuse-object imaging, sparsity priors are not valid in a coherent image basis because of speckle, so an incoherent image estimation is designed that preserves sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide-field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field

  11. A novel image compression algorithm based on the biorthogonal invariant set multiwavelet

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Li, Yunsong; Liu, Weijia

    2015-05-01

    On the basis of the theory of biorthogonal invariant set multiwavelets (BISM) established by Micchelli and Xu, a BISM filter is designed and its decomposition and reconstruction algorithms are given in this paper. The filter has many desirable characteristics, such as symmetry, compact support, orthogonality, and low complexity. In this filter, the self-affine triangle domain serves as the support interval and a constant function serves as the scaling function. Advantages such as low algorithmic complexity, high concentration of energy and entropy after transformation, and the absence of blocking effects (which facilitates parallel computing) are analyzed when BISM filters are used for image compression. Finally, the validity of the image compression algorithm based on the biorthogonal invariant set multiwavelet is verified within an approximate JPEG2000 framework.

  12. Assessing mesoscale material response under shock & isentropic compression via high-resolution line-imaging VISAR.

    SciTech Connect

    Hall, Clint Allen; Furnish, Michael David; Podsednik, Jason W.; Reinhart, William Dodd; Trott, Wayne Merle; Mason, Joshua

    2003-10-01

    Of special promise for providing dynamic mesoscale response data is the line-imaging VISAR, an instrument for providing spatially resolved velocity histories in dynamic experiments. We have prepared two line-imaging VISAR systems capable of spatial resolution in the 10-20 micron range, at the Z and STAR facilities. We have applied this instrument to selected experiments on a compressed gas gun, chosen to provide initial data for several problems of interest, including: (1) pore-collapse in copper (two variations: 70 micron diameter hole in single-crystal copper) and (2) response of a welded joint in dissimilar materials (Ta, Nb) to ramp loading relative to that of a compression joint. The instrument is capable of resolving details such as the volume and collapse history of a collapsing isolated pore.

  13. Evaluation of the CASSI-DD hyperspectral compressive sensing imaging system

    NASA Astrophysics Data System (ADS)

    Busuioceanu, Maria; Messinger, David W.; Greer, John B.; Flake, J. Christopher

    2013-05-01

    Compressive Sensing (CS) systems capture data with fewer measurements than traditional sensors, assuming that imagery is redundant and compressible in the spatial and spectral dimensions. We utilize a model of the Coded Aperture Snapshot Spectral Imager-Dual Disperser (CASSI-DD) to simulate CS measurements from HyMap images. Flake et al.'s novel reconstruction algorithm, which combines a spectral smoothing parameter and spatial total variation (TV), is used to create high-resolution hyperspectral imagery [1]. We examine the effect of the number of measurements, which corresponds to the percentage of physical data sampled, on the fidelity of the simulated data. The impacts of the CS sensor model and reconstruction on the data cloud, and the utility of the reconstructed data for various hyperspectral applications, are described to identify the strengths and limitations of CS.

  14. Coherent source imaging and dynamic support tracking for inverse scattering using compressive MUSIC

    NASA Astrophysics Data System (ADS)

    Lee, Okkyun; Kim, Jong Min; Yoo, Jaejoon; Jin, Kyunghwan; Ye, Jong Chul

    2011-09-01

    The goal of this paper is to develop novel algorithms for inverse scattering problems such as EEG/MEG, microwave imaging, and diffuse optical tomography. One of the main contributions is a class of novel non-iterative, exact nonlinear inverse scattering methods for coherent source imaging and moving targets. Specifically, the new algorithms guarantee exact recovery under a much more relaxed constraint on the number of sources and receivers than conventional methods, which fail in this regime. This breakthrough was made possible by the recent theory of compressive MUSIC and its extension using a support correction criterion, in which a partial support is estimated using conventional compressed sensing approaches and the remaining support is then estimated using a novel generalized MUSIC criterion. Numerical results using coherent sources in EEG/MEG and dynamic targets confirm that the new algorithms outperform the conventional ones.
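    For orientation only, the sketch below computes a standard MUSIC pseudospectrum from multiple measurement vectors; the generalized MUSIC criterion of compressive MUSIC is more involved, and A, Y, and the assumed signal-subspace dimension r are placeholders, not the paper's quantities.

      import numpy as np

      def music_pseudospectrum(A, Y, r):
          """A: (m, n) dictionary of candidate source columns; Y: (m, snapshots) measurements;
          r: assumed signal-subspace dimension. Peaks of the returned length-n spectrum
          indicate likely source locations."""
          U, _, _ = np.linalg.svd(Y, full_matrices=True)
          noise_space = U[:, r:]                       # orthogonal complement of the signal subspace
          proj = noise_space.conj().T @ A              # project each candidate onto the noise subspace
          return 1.0 / (np.sum(np.abs(proj) ** 2, axis=0) + 1e-12)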

  15. Experimental study of a DMD based compressive line sensing imaging system in the turbulence environment

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.

    2016-05-01

    The Compressive Line Sensing (CLS) active imaging system has been demonstrated, through simulations and test-tank experiments, to be effective in scattering media such as turbid coastal water. Since turbulence is encountered in many atmospheric and underwater surveillance applications, a new CLS imaging prototype was developed to investigate the effectiveness of the CLS concept in a turbulent environment. Compared with the earlier optical bench-top prototype, the new system is significantly more robust and compact. A series of experiments was conducted at the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. In addition to validating the system design, we obtained some unexpected and exciting results: in the strong turbulence environment, time-averaged measurements from the new CLS imaging prototype improved both the SNR and the resolution of the reconstructed images. We will discuss the implications of these findings, the challenges of acquiring data through a strongly turbulent environment, and future enhancements.

  16. Interference-based image encryption with silhouette removal by aid of compressive sensing

    NASA Astrophysics Data System (ADS)

    Gong, Qiong; Wang, Zhipeng; Lv, Xiaodong; Qin, Yi

    2016-01-01

    Compressive sensing (CS) offers the opportunity to reconstruct a signal from its sparse representation, either in the space domain or in a transform domain. Exploiting this property, we propose a simple interference-based image encryption method. For encryption, a synthetic image, which contains sparse samples of the original image together with designated values, is analytically separated into two phase-only masks (POMs). Consequently, only fragmentary data of the primary image can be directly collected by the traditional decryption scheme; however, a subsequent CS reconstruction retrieves a high-quality image from this fragmentary information. The proposed method effectively suppresses the silhouette problem and also has some distinct advantages over previous approaches.
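    As a hedged illustration (not the authors' exact scheme), the standard analytic separation of a complex field into two phase-only masks whose interference reproduces it is sketched below; the target field is assumed to be normalized so its magnitude does not exceed 2.

      import numpy as np

      def split_into_two_poms(target_field):
          """Find phase masks P1, P2 with exp(1j*P1) + exp(1j*P2) == target_field.
          Requires |target_field| <= 2 everywhere (normalize beforehand)."""
          mag = np.clip(np.abs(target_field), 0.0, 2.0)
          theta = np.angle(target_field)
          half = np.arccos(mag / 2.0)      # half the angle between the two unit phasors
          return theta - half, theta + half

      # Quick check on a random normalized complex field.
      rng = np.random.default_rng(1)
      field = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
      field *= 2.0 / np.max(np.abs(field))
      P1, P2 = split_into_two_poms(field)
      print(np.allclose(np.exp(1j * P1) + np.exp(1j * P2), field))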

  17. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be exploited within a single such framework for compression, and linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system based on the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ that allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding, and a PSNR of 30 dB can still be achieved when the rate is decreased to 0.25 bpp. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
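    A minimal sketch of a median-filter-based Laplacian pyramid of the kind described above; the window size and downsampling choices are assumptions for illustration, not the paper's parameters, and the quantization-error feedback is omitted.

      import numpy as np
      from scipy.ndimage import median_filter, zoom

      def median_laplacian_pyramid(image, levels=3, win=3):
          """Build a pyramid where each detail image is the difference between a level
          and the upsampled median-filtered next coarser level."""
          details, current = [], image.astype(np.float64)
          for _ in range(levels):
              smoothed = median_filter(current, size=win)
              coarse = smoothed[::2, ::2]                       # decimate by 2
              upsampled = zoom(coarse, 2, order=1)[:current.shape[0], :current.shape[1]]
              details.append(current - upsampled)               # edge-localized detail image
              current = coarse
          return details, current                               # detail images + low-resolution base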

  18. Balanced Sparse Model for Tight Frames in Compressed Sensing Magnetic Resonance Imaging

    PubMed Central

    Liu, Yunsong; Cai, Jian-Feng; Zhan, Zhifang; Guo, Di; Ye, Jing; Chen, Zhong; Qu, Xiaobo

    2015-01-01

    Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this technology, magnetic resonance images are usually reconstructed by enforcing sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients from the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model recovers the solutions of all three models. The balanced model is found to have performance comparable to the analysis model, and both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B). PMID:25849209
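    The sketch below evaluates a balanced-model objective reconstructed from the abstract's description, as it is commonly written in the tight-frame literature; it is a hedged illustration, not necessarily the paper's exact formulation, and A, W, y, lam, and beta are placeholders.

      import numpy as np

      def balanced_objective(alpha, A, W, y, lam, beta):
          """Data fidelity on the synthesized image + l1 sparsity on frame coefficients
          + penalty on the distance of alpha from the range of the analysis operator.
          W: (n_coeffs, n_pixels) tight-frame analysis matrix with W.T @ W == I."""
          image = W.T @ alpha                          # synthesis from frame coefficients
          range_dist = alpha - W @ (W.T @ alpha)       # component outside range(W)
          return (0.5 * np.sum(np.abs(A @ image - y) ** 2)
                  + 0.5 * beta * np.sum(np.abs(range_dist) ** 2)
                  + lam * np.sum(np.abs(alpha)))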

  19. Multispectral image compression for spectral and color reproduction based on lossy to lossless coding

    NASA Astrophysics Data System (ADS)

    Shinoda, Kazuma; Murakami, Yuri; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-01-01

    In this paper we propose a multispectral image compression method based on lossy-to-lossless coding, suitable for both spectral and color reproduction. The proposed method divides the multispectral image data into two components, RGB and residual. The RGB component is extracted from the multispectral image, for example by using the XYZ color matching functions, a color conversion matrix, and a gamma curve. The original multispectral image is estimated from the RGB data in the encoder, and the difference between the original and estimated multispectral images, referred to as the residual component, is computed there as well. The RGB and residual components are then each encoded with JPEG2000, so that progressive decoding is possible from the losslessly encoded code-stream. Experimental results show that, although the proposed method is slightly inferior to JPEG2000 with a multicomponent transform in the rate-distortion plot of the spectral domain at low bit rates, the decoded RGB image shows high quality at low bit rates when the RGB component is encoded first. Its lossless compression ratio is close to that of JPEG2000 with the integer KLT.
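    A hedged sketch of the RGB/residual split described above; the linear estimator here is a simple pseudo-inverse of an assumed spectra-to-RGB conversion matrix, used only for illustration, and the paper's estimator and gamma handling may differ.

      import numpy as np

      def split_rgb_residual(spectra, M):
          """spectra: (n_pixels, n_bands) multispectral data;
          M: (3, n_bands) spectra-to-RGB conversion matrix.
          Returns the RGB component and the residual of a pseudo-inverse spectral estimate."""
          rgb = spectra @ M.T                      # RGB component sent to the first code-stream
          estimate = rgb @ np.linalg.pinv(M).T     # crude spectral estimate from RGB alone
          residual = spectra - estimate            # residual component sent to the second code-stream
          return rgb, residual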

  20. OFDM and compressive sensing based GPR imaging using SAR focusing algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xia, Tian

    2015-04-01

    This paper presents a new ground penetrating radar (GPR) design approach using orthogonal frequency division multiplexing (OFDM) and compressive sensing (CS) algorithms. The OFDM technique increases GPR operating speed by transmitting and receiving multiple frequency tones concurrently, and the CS technique allows a reduced set of frequency tones to be used without compromising data reconstruction accuracy. The combination of OFDM and CS boosts the radar's operating efficiency. For GPR image reconstruction, a synthetic aperture radar (SAR) focusing technique is implemented.
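    As a hedged sketch of the reduced-tone acquisition idea (a generic frequency-domain delay model with placeholder names, not the paper's system parameters), the following builds the sensing dictionary over candidate delays and keeps a random subset of tones, ready to pass to any sparse-recovery solver:

      import numpy as np

      def partial_tone_dictionary(freqs_hz, delays_s, keep_fraction=0.5, rng=None):
          """Build the frequency-domain GPR sensing dictionary A[m, n] = exp(-j*2*pi*f_m*tau_n)
          and randomly keep a reduced subset of tones, as in CS-based acquisition."""
          rng = rng or np.random.default_rng(0)
          A = np.exp(-2j * np.pi * np.outer(freqs_hz, delays_s))
          keep = np.sort(rng.choice(len(freqs_hz), int(keep_fraction * len(freqs_hz)), replace=False))
          return A[keep, :], keep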