Science.gov

Sample records for image compression recommendation

  1. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
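
    As an illustration of this structure, a minimal sketch (using NumPy and PyWavelets, both assumed available) applies a 2-D DWT and then emits bit planes of the quantized coefficients, most significant first, so that truncating the encoded stream trades data volume against fidelity. This is only a toy outline, not the CCSDS algorithm or its ASIC implementation:

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def encode(image, wavelet="bior4.4", levels=3):
          """2-D DWT, integer quantization, then bit planes ordered most significant first."""
          coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
          flat, slices = pywt.coeffs_to_array(coeffs)
          q = np.round(flat).astype(np.int32)            # crude uniform quantization
          signs, mag = q < 0, np.abs(q)
          nplanes = int(mag.max()).bit_length()
          planes = [((mag >> p) & 1).astype(np.uint8) for p in range(nplanes - 1, -1, -1)]
          return planes, signs, slices, nplanes

      def decode(planes, signs, slices, nplanes, wavelet="bior4.4"):
          """Rebuild from however many leading bit planes were received (progressive)."""
          mag = np.zeros(signs.shape, dtype=np.int32)
          for i, plane in enumerate(planes):
              mag |= plane.astype(np.int32) << (nplanes - 1 - i)
          flat = np.where(signs, -mag, mag).astype(float)
          coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec2")
          return pywt.waverec2(coeffs, wavelet)

      img = np.random.rand(64, 64) * 255
      planes, signs, slices, nplanes = encode(img)
      lossy = decode(planes[:nplanes // 2], signs, slices, nplanes)   # truncated stream
      near_lossless = decode(planes, signs, slices, nplanes)          # limited only by the rounding step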

  2. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reaching high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  3. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is faithfully reconstructed via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. Numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks, because it can be implemented entirely optically and substantially reduces the hologram data volume. PMID:25992946
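
    A minimal numerical sketch of the DRPE step (NumPy assumed) encrypts an image with two random phase masks, one in the input plane and one in the Fourier plane, and decrypts it with the conjugate masks; the optical single-pixel compressive holography stage of the paper is not modeled here:

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((128, 128))                             # stand-in object image

      phase1 = np.exp(2j * np.pi * rng.random(img.shape))      # input-plane random phase mask (key 1)
      phase2 = np.exp(2j * np.pi * rng.random(img.shape))      # Fourier-plane random phase mask (key 2)

      # Encryption: mask in the input plane, transform, mask in the Fourier plane, transform back.
      encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

      # Decryption with the correct keys: undo the masks in reverse order with their conjugates.
      decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
      recovered = np.abs(decrypted)

      print("max reconstruction error:", np.max(np.abs(recovered - img)))   # numerically ~1e-15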

  4. Compressive Optical Image Encryption

    NASA Astrophysics Data System (ADS)

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-05-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is faithfully reconstructed via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. Numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks, because it can be implemented entirely optically and substantially reduces the hologram data volume.

  5. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
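
    The fill-and-subtract idea can be sketched in a few lines (NumPy assumed): edge pixels keep their image values, the remaining pixels are filled by relaxing Laplace's equation (plain Jacobi iteration here, rather than the patent's multi-grid solver), and the residual is what would then be coded separately:

      import numpy as np

      def laplace_fill(image, edge_mask, iters=500):
          """Fill non-edge pixels by repeatedly averaging their four neighbours."""
          filled = np.where(edge_mask, image, image.mean()).astype(float)
          interior = ~edge_mask
          for _ in range(iters):
              avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                            + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
              filled[interior] = avg[interior]        # edge pixels stay clamped to image values
          return filled

      rng = np.random.default_rng(1)
      img = rng.random((64, 64)) * 255
      # Toy "edge" mask from the strongest gradients; a real codec would use a proper edge detector.
      gy, gx = np.gradient(img)
      strength = np.hypot(gx, gy)
      edges = strength > np.percentile(strength, 90)

      filled = laplace_fill(img, edges)
      difference = img - filled       # the edge data and this residual would be coded separately
      assert np.allclose(filled + difference, img)    # preliminary array + difference restores the image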

  6. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  7. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to add information progressively, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown. We have developed an iterative algorithm, OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions at compression ratios of 1:20.

  8. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to increasing amounts of data in radiology, methods for image compression appear both economically and technically interesting. Irreversible image compression allows a markedly higher reduction of data volume than reversible compression algorithms but is accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss occurs due to rounding errors and the reduction of high-frequency information, the result is relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only a few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this remains within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as it would comply with § 28 of the German X-ray Ordinance (RöV). PMID:23456043
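
    The trade-off discussed above can be probed with a small experiment (Pillow and NumPy assumed): JPEG-compress a grayscale slice at several quality settings and report the resulting compression ratio and PSNR. The ratio limits recommended by the radiological societies are not encoded here:

      import io
      import numpy as np
      from PIL import Image

      rng = np.random.default_rng(0)
      img = (rng.random((512, 512)) * 255).astype(np.uint8)    # stand-in for a grayscale slice

      def jpeg_roundtrip(arr, quality):
          """Return (compression ratio, PSNR in dB) for one JPEG quality setting."""
          buf = io.BytesIO()
          Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
          decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)
          mse = np.mean((decoded - arr.astype(np.float64)) ** 2)
          psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
          return arr.nbytes / len(buf.getvalue()), psnr

      for q in (95, 80, 50):
          ratio, psnr = jpeg_roundtrip(img, q)
          print(f"quality={q:3d}  ratio={ratio:5.1f}:1  PSNR={psnr:5.1f} dB")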

  9. Space-time compressive imaging.

    PubMed

    Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A

    2012-02-01

    Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.
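
    A toy version of the Karhunen-Loève projection step (NumPy assumed) vectorizes 8x8x16 space-time blocks, builds the KL basis from their sample covariance, and keeps only the leading coefficients; the paper's read-noise model and hardware architectures are not represented:

      import numpy as np

      rng = np.random.default_rng(0)
      video = rng.random((16, 64, 64))          # toy sequence: 16 frames of 64x64 pixels

      # Vectorize the 8x8x16 space-time blocks.
      blocks = [video[:, y:y + 8, x:x + 8].reshape(-1)
                for y in range(0, 64, 8) for x in range(0, 64, 8)]
      X = np.array(blocks)                      # shape (64 blocks, 1024 samples per block)

      # KL basis: eigenvectors of the sample covariance, obtained from an SVD of the centered data.
      mean = X.mean(axis=0)
      U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

      M = 32                                    # keep 32 of 1024 coefficients per block
      coeffs = (X - mean) @ Vt[:M].T            # compressive "measurements"
      X_hat = coeffs @ Vt[:M] + mean            # linear reconstruction

      err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
      print(f"kept {M}/1024 coefficients per block, relative reconstruction error {err:.3f}")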

  10. Compressive sensing in medical imaging.

    PubMed

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
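
    The core of a sparsity-exploiting reconstruction can be illustrated with iterative soft thresholding (ISTA) on a toy underdetermined system y = Ax (NumPy assumed); this is a generic sketch, not any particular CT or MRI pipeline from the meeting:

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 80, 8                       # signal length, measurements, nonzeros
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
      y = A @ x_true                                  # compressive measurements

      def ista(A, y, lam=0.01, iters=500):
          """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - A.T @ (A @ x - y) / L           # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      x_rec = ista(A, y)
      print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))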

  11. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  12. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. If the IQ is specified as a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified as an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on grayscale thermal (long-wave infrared) images showed very promising results.
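
    The three-step scenario can be sketched with JPEG and PSNR alone (Pillow and NumPy assumed; SSIM, BPG and TIFF/LZW are omitted): compress at a few parameter values, regress the metric against the parameter, then invert the model to pick the parameter for a specified IQ:

      import io
      import numpy as np
      from PIL import Image

      rng = np.random.default_rng(0)
      img = (rng.random((256, 256)) * 255).astype(np.uint8)

      def psnr_at_quality(arr, q):
          buf = io.BytesIO()
          Image.fromarray(arr).save(buf, format="JPEG", quality=q)
          dec = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)
          return 10 * np.log10(255.0 ** 2 / np.mean((dec - arr) ** 2))

      # Step 1: compress at several parameter settings and record the IQ metric.
      qualities = np.array([20, 40, 60, 80, 95])
      psnrs = np.array([psnr_at_quality(img, int(q)) for q in qualities])

      # Step 2: fit a simple regression model PSNR(quality).
      model = np.polyfit(qualities, psnrs, deg=2)

      # Step 3: invert the model to choose the parameter that meets a specified IQ (e.g. 30 dB).
      target = 30.0
      grid = np.arange(10, 96)
      chosen = grid[np.argmin(np.abs(np.polyval(model, grid) - target))]
      print("use JPEG quality", chosen, "for a target PSNR of", target, "dB")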

  13. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  14. Compressive Sensing for Quantum Imaging

    NASA Astrophysics Data System (ADS)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup N²/log N for N-dimensional entanglement over the standard raster scanning technique.

  15. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
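
    The quantization-matrix idea can be sketched as follows (NumPy and SciPy assumed): 8x8 blocks are DCT-transformed and divided by a frequency-dependent matrix before rounding. The matrix below is a crude placeholder that simply grows with frequency, not the psychophysically derived formula described above:

      import numpy as np
      from scipy.fft import dctn, idctn   # SciPy assumed available

      def make_quant_matrix(scale=4.0):
          """Placeholder frequency weighting: coarser quantization for higher DCT frequencies."""
          u = np.arange(8)
          radial = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2)
          return 8.0 + scale * radial ** 2

      def compress_block(block, Q):
          coeffs = dctn(block - 128.0, norm="ortho")
          return np.round(coeffs / Q)                 # quantization is the lossy step

      def decompress_block(qcoeffs, Q):
          return idctn(qcoeffs * Q, norm="ortho") + 128.0

      rng = np.random.default_rng(0)
      block = (rng.random((8, 8)) * 255).astype(np.float64)
      Q = make_quant_matrix()
      rec = decompress_block(compress_block(block, Q), Q)
      print("max error in this block:", np.max(np.abs(rec - block)))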

  16. Compressive passive millimeter wave imager

    SciTech Connect

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
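
    A toy one-dimensional analogue (NumPy and SciPy assumed): measure a subset of Hadamard projections of a smooth scene and reconstruct by zero-filling the unmeasured coefficients. The actual mask hardware and reconstruction method of the patent may differ:

      import numpy as np
      from scipy.linalg import hadamard

      n = 32                                    # scene flattened to n samples
      H = hadamard(n) / np.sqrt(n)              # orthonormal Hadamard basis
      rng = np.random.default_rng(0)
      scene = np.convolve(rng.random(n), np.ones(5) / 5, mode="same")   # smooth-ish scene

      m = 16                                    # sample only half of the full acquisition set
      rows = rng.choice(n, m, replace=False)
      measurements = H[rows] @ scene            # subset of Hadamard projections

      estimate = H[rows].T @ measurements       # zero-fill the unmeasured coefficients and invert
      print("relative error with", m, "of", n, "measurements:",
            np.linalg.norm(estimate - scene) / np.linalg.norm(scene))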

  17. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  18. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.

  19. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  20. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the Internet. In these scenarios, the optimal use of the information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  2. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    PubMed

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-12-05

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without requiring any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4 to 2 dB compared with the current state of the art, while maintaining a low computational complexity.

  3. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without requiring any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4 to 2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  4. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compression technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.

  5. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor makes novel use of a high-throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable detection statistics comparable to those obtained from uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating plume detection on compressed hyperspectral images is presented.

  6. Correlation and image compression for limited-bandwidth CCD.

    SciTech Connect

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression of the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  7. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  8. Variable density compressed image sampling.

    PubMed

    Wang, Zhongmin; Arce, Gonzalo R

    2010-01-01

    Compressed sensing (CS) provides an efficient way to acquire and reconstruct natural images from a limited number of linear projection measurements leading to sub-Nyquist sampling rates. A key to the success of CS is the design of the measurement ensemble. This correspondence focuses on the design of a novel variable density sampling strategy, where the a priori information of the statistical distributions that natural images exhibit in the wavelet domain is exploited. The proposed variable density sampling has the following advantages: 1) the generation of the measurement ensemble is computationally efficient and requires less memory; 2) the necessary number of measurements for image reconstruction is reduced; 3) the proposed sampling method can be applied to several transform domains and leads to simple implementations. Extensive simulations show the effectiveness of the proposed sampling method.
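
    A sketch of generating such a variable-density mask (NumPy assumed): sampling probability decays with distance from the low-frequency corner, where natural images concentrate energy. The decay law below is an illustrative choice, not the distribution derived in the paper:

      import numpy as np

      def variable_density_mask(shape, budget, decay=2.0, seed=0):
          """Boolean sampling mask whose density falls off away from the low-frequency corner."""
          rng = np.random.default_rng(seed)
          rows, cols = np.indices(shape)
          radius = np.hypot(rows / shape[0], cols / shape[1])
          prob = 1.0 / (1.0 + radius) ** decay
          prob *= budget / prob.sum()              # scale so the expected sample count equals the budget
          return rng.random(shape) < np.clip(prob, 0.0, 1.0)

      mask = variable_density_mask((256, 256), budget=0.15 * 256 * 256)
      print("overall sampling rate:", mask.mean())                  # about 0.15
      print("low-frequency corner rate:", mask[:64, :64].mean())    # noticeably higher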

  9. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they occupy more storage space and consume more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. In this paper, an analysis of the DCT is presented. First, the principle of the DCT is described; realizing image compression is necessary because this technology is so widely used. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated using Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for realizing image compression; more algorithms that produce high-quality compressed images can be expected, and image compression technology will likely be widely used in networks and communications in the future.

  10. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. To improve compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.

  11. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Master's thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property that they use past data to encode future data. This is done either via taking differences, context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  12. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  13. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  14. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  15. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.

  16. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as the information in every pixel is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to the 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  17. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  18. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the Internet.

  19. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.

  20. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  1. Implementation of image compression for printers

    NASA Astrophysics Data System (ADS)

    Oka, Kenichiro; Onishi, Masaru

    1992-05-01

    Printers process a large quantity of data when printing. For example, printing an A3 page (297 mm x 420 mm) at 300 dpi resolution requires 17.4 million pixels, or about 66 Mbytes for a 32-bit/pixel color image composed of yellow (Y), magenta (M), cyan (C) and black components. Containing such a large capacity of random access memory (RAM) in a printer increases both the cost and size of the memory circuits. Thus, image compression techniques are examined in this study to cope with these problems. A still-image coding scheme, being standardized by JPEG (Joint Photographic Experts Group), will presumably be utilized for image communications or image databases. The JPEG scheme can compress natural images efficiently, but it is unsuitable for text or computer graphics (CG) images because the restored images are degraded. This scheme, therefore, cannot be implemented for printers, which require good image quality. We studied coding schemes which are more suitable for printers than the JPEG scheme. Two criteria were considered in selecting a coding scheme for printers: (1) no visible degradation of input printer images and (2) capability of image editing. Especially because of criterion (2), a fixed-length coding was adopted; the code for an arbitrary pixel can be easily read out of image memory. We then implemented an image coding scheme in our new sublimation full-color printer. Input image data are compressed by coding before being written into an image memory.

  2. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
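
    The intuition behind delta and double delta coding can be checked with a few lines (NumPy assumed): compare the zeroth-order entropy of raw pixel values with that of first and second differences along each line. This shows only why differencing helps; the paper's actual coder and background-skipping step are not reproduced:

      import numpy as np

      def entropy(values):
          """Zeroth-order entropy in bits per sample."""
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(0)
      # Smooth synthetic scan lines (8-bit) standing in for correlated satellite picture data.
      x = np.linspace(0, 4 * np.pi, 512)
      phases = rng.random((128, 1)) * 2 * np.pi
      image = np.round(128 + 80 * np.sin(x + phases)).astype(np.int16)

      d1 = np.diff(image, n=1, axis=1)      # delta-coding residuals
      d2 = np.diff(image, n=2, axis=1)      # double-delta residuals

      print(f"raw pixels : {entropy(image):.2f} bits/sample")
      print(f"1st diff   : {entropy(d1):.2f} bits/sample")
      print(f"2nd diff   : {entropy(d2):.2f} bits/sample")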

  3. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  4. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  5. Lossless image compression technique for infrared thermal images

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Kelly, Gary E.

    1992-07-01

    The authors have achieved a 6.5-to-one image compression technique for thermal images (640 x 480, 1024 colors deep). Using a combination of new and more traditional techniques, the combined algorithm is computationally simple, enabling 'on-the-fly' compression and storage of an image in less time than it takes to transcribe the original image to or from a magnetic medium. Similar compression has been achieved on visual images by virtue of the fact that all optical devices possess a modulation transfer function. As a consequence of this property, the difference in color between adjacent pixels is usually a small number, often between -1 and +1 graduations for a meaningful color scheme. By differencing adjacent rows and columns, the original image can be expressed in terms of these small numbers. A simple compression algorithm for these small numbers achieves a four-to-one image compression. By piggy-backing this technique with LZW compression or a fixed Huffman coding, an additional 35% image compression is obtained, resulting in a 6.5-to-one lossless image compression. Because traditional noise-removal operators tend to minimize the color graduations between adjacent pixels, an additional 20% reduction can be obtained by preprocessing the image with a noise-removal operator. Although noise-removal operators are not lossless, their application may prove crucial in applications requiring high compression, such as the storage or transmission of a large number of images. The authors are working with the Air Force Photonics Technology Application Program Management office to apply this technique to the transmission of optical images from satellites.
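
    A minimal sketch of the differencing idea, under assumed details: adjacent pixels are differenced so that most residuals fall near zero, and the residuals are then entropy coded. In the fragment below, zlib stands in for the simple run-length/LZW/Huffman back ends mentioned in the abstract, and only column differencing is shown.

        import zlib
        import numpy as np

        def diff_encode(image):
            """Keep the first column verbatim; store horizontal pixel differences elsewhere."""
            d = image.astype(np.int32).copy()
            d[:, 1:] -= image.astype(np.int32)[:, :-1]
            return d

        def diff_decode(d):
            """Invert the differencing with a cumulative sum along each row."""
            return np.cumsum(d, axis=1)

        def compress(image):
            return zlib.compress(diff_encode(image).astype(np.int16).tobytes())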

  6. Lossless compression of synthetic aperture radar images

    SciTech Connect

    Ives, R.W.; Magotra, N.; Mandyam, G.D.

    1996-02-01

    Synthetic Aperture Radar (SAR) has been proven an effective sensor in a wide variety of applications. Many of these uses require transmission and/or processing of the image data in a lossless manner. With the current state of SAR technology, the amount of data contained in a single image may be massive, whether the application requires the entire complex image or magnitude data only. In either case, some type of compression may be required to losslessly transmit this data in a given bandwidth or store it in a reasonable volume. This paper provides the results of applying several lossless compression schemes to SAR imagery.

  7. Photorefractive Crystal Compresses Dynamic Range Of Image

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1991-01-01

    Experiment shows dynamic range of spatial variations of illumination within image compressed by use of photorefractive crystal. In technique, photorefractive crystal placed in optical path at some stage preceding video camera, photographic camera, or final photodetector stage. Provided brightness of parts of scene vary as slowly as or more slowly than photorefractive crystal responds, effect exploited to provide real-time dynamic-range compression to prevent saturation of bright areas in video or photographic images of scene, helping to preserve spatial-variation information in such images.

  8. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework that combines two or more images of the same scene, from one or multiple sensors, to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing this redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach based on a high-dimensional joint sparse model, formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion spectral-image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results: a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.

  9. An Analog Processor for Image Compression

    NASA Technical Reports Server (NTRS)

    Tawel, R.

    1992-01-01

    This paper describes a novel analog Vector Array Processor (VAP) that was designed for use in real-time and ultra-low-power image compression applications. This custom CMOS processor is based architecturally on the Vector Quantization (VQ) algorithm used in image coding, and the hardware implementation fully exploits the parallelism inherent in the VQ algorithm.
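
    For reference, plain vector quantization, the algorithm such a chip implements in hardware, maps each image block to the index of its nearest codebook vector. The following hedged software sketch illustrates the idea; codebook training (e.g., LBG) is omitted.

        import numpy as np

        def vq_encode(blocks, codebook):
            """blocks: (n, d) array of image blocks; codebook: (k, d) array of codewords."""
            # squared Euclidean distance from every block to every codeword
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1)      # one index per block is all that is transmitted

        def vq_decode(indices, codebook):
            return codebook[indices]      # reconstruction is a table lookup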

  10. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.

  11. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  12. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data now exceeds 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including JBIG2.

  13. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  14. Imaging With Nature: Compressive Imaging Using a Multiply Scattering Medium

    PubMed Central

    Liutkus, Antoine; Martina, David; Popoff, Sébastien; Chardon, Gilles; Katz, Ori; Lerosey, Geoffroy; Gigan, Sylvain; Daudet, Laurent; Carron, Igor

    2014-01-01

    The recent theory of compressive sensing leverages the structure of signals to acquire them with far fewer measurements than was previously thought necessary, and certainly well below the traditional Nyquist-Shannon sampling rate. However, most implementations developed to take advantage of this framework revolve around controlling the measurements with carefully engineered material or acquisition sequences. Instead, we use the natural randomness of wave propagation through multiply scattering media as an optimal and instantaneous compressive imaging mechanism. Waves reflected from an object are detected after propagation through a well-characterized complex medium. Each local measurement thus contains global information about the object, yielding a purely analog compressive sensing method. We experimentally demonstrate the effectiveness of the proposed approach for optical imaging by using a 300-micrometer-thick layer of white paint as the compressive imaging device. Scattering media are thus promising candidates for designing efficient and compact compressive imagers. PMID:25005695
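
    The compressive-sensing principle invoked here can be illustrated with a toy recovery problem. In the hedged sketch below, a random Gaussian matrix stands in for the measured transmission matrix of the scattering medium, and a plain ISTA iteration solves the l1-regularized least-squares recovery; all parameter values are illustrative.

        import numpy as np

        def ista(A, y, lam=0.05, iters=500):
            """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - step * A.T @ (A @ x - y)            # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)   # soft threshold
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 8                                # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in for the medium's transmission matrix
        x_hat = ista(A, A @ x_true)                         # recovery from m << n measurements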

  15. Optical Data Compression in Time Stretch Imaging

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Jalali, Bahram

    2015-01-01

    Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems. PMID:25906244

  16. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video, and many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, while masking is more consistent on the darker side of the edge.

  17. Directly Estimating Endmembers for Compressive Hyperspectral Images

    PubMed Central

    Xu, Hongwei; Fu, Ning; Qiao, Liyan; Peng, Xiyuan

    2015-01-01

    The large volume of hyperspectral images (HSI) generated creates huge challenges for transmission and storage, making data compression more and more important. Compressive Sensing (CS) is an effective data compression technology that shows that when a signal is sparse in some basis, only a small number of measurements are needed for exact signal recovery. Distributed CS (DCS) takes advantage of both intra- and inter-signal correlations to reduce the number of measurements needed for multichannel-signal recovery. HSI can be observed by the DCS framework to reduce the volume of data significantly. The traditional method for estimating endmembers (spectral information) first recovers the images from the compressive HSI and then estimates endmembers via the recovered images. The recovery step takes considerable time and introduces errors into the estimation step. In this paper, we propose a novel method, by designing a type of coherent measurement matrix, to estimate endmembers directly from the compressively observed HSI data via convex geometry (CG) approaches without recovering the images. Numerical simulations show that the proposed method outperforms the traditional method with better estimation speed and better (or comparable) accuracy in both noisy and noiseless cases. PMID:25905699

  18. Recommended frequency of ABPI review for patients wearing compression hosiery.

    PubMed

    Furlong, Winnie

    2015-11-11

    This paper is a sequel to the article 'How often should patients in compression have ABPI recorded?' (Furlong, 2013). Monitoring the ankle brachial pressure index (ABPI) is essential, especially in patients wearing compression hosiery, as it can change over time (Simon et al, 1994; Pankhurst, 2004), particularly in the presence of peripheral arterial disease (PAD). Leg ulceration caused by venous disease requires graduated compression (Wounds UK, 2002; Anderson, 2008). Once healed, compression hosiery is required to help prevent ulcer recurrence (Vandongen and Stacey, 2000). The Royal College of Nursing (RCN, 2006) guidelines suggest 3-monthly reviews, including ABPI, with no further guidance. Wounds UK (2002) suggests that patients who have ABPI<0.9, diabetes, reduced mobility or symptoms of claudication should have Doppler assessment at least every 3 months, and that those in compression hosiery without complications who are able to report should have vascular assessment yearly. PMID:26559232

  19. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the Earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth-science imager sensor data, what lossless compression ratios can be obtained, as well as the types of mathematics and approaches that can bring compression close to the data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression of imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which differs fundamentally from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG 2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We
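
    The spectral-prediction idea can be sketched as follows (an illustrative stand-in, not the authors' algorithm): each band is predicted linearly from a previously coded band and only the integer residual is entropy coded. The least-squares predictor and the zlib back end below are assumptions made for the sketch.

        import zlib
        import numpy as np

        def encode_band_pair(reference, target):
            """Predict `target` from `reference`; return predictor coefficients and coded residual."""
            x = reference.astype(np.float64).ravel()
            y = target.astype(np.float64).ravel()
            a, b = np.polyfit(x, y, 1)                        # per-band-pair linear predictor
            residual = (y - np.round(a * x + b)).astype(np.int32)
            # lossless: a decoder recomputing the same rounded prediction adds the residual back
            return (a, b), zlib.compress(residual.tobytes())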

  1. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
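
    Of the four methods listed, Matching Pursuits is the simplest to sketch: the signal is greedily decomposed as a weighted sum of dictionary atoms. The fragment below is a hedged illustration in which a generic unit-norm dictionary stands in for the wavelet-packet dictionary used in the paper.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms=10):
            """dictionary: (d, k) matrix whose columns are unit-norm atoms."""
            residual = signal.astype(np.float64).copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                correlations = dictionary.T @ residual
                j = np.abs(correlations).argmax()            # best-matching atom
                coeffs[j] += correlations[j]
                residual -= correlations[j] * dictionary[:, j]
            return coeffs, residual                          # sparse code and approximation error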

  2. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.

  3. Entangled-photon compressive ghost imaging

    SciTech Connect

    Zerom, Petros; Chan, Kam Wai Clifford; Howell, John C.; Boyd, Robert W.

    2011-12-15

    We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.

  4. New trends in image data compression.

    PubMed

    Cicconi, P; Reusens, E; Dufaux, F; Moccagatta, I; Rouchouze, B; Ebrahimi, T; Kunt, M

    1994-01-01

    This paper gives an overview of a number of advanced techniques for image compression, which are under investigation in the Signal Processing Laboratory at the Swiss Federal Institute of Technology of Lausanne. Various applications ranging from High definition television (HDTV) to multimedia will be discussed. In particular, systems based on subband decomposition, edge based representation, as well as symmetries will be presented.

  5. Data Compression Techniques For CT Image Archival

    NASA Astrophysics Data System (ADS)

    Quinn, John F.; Rhodes, Michael L.; Rosner, Bruce

    1983-05-01

    Large digital files are inherent to CT image data. CT installations that routinely archive patient data are penalized in computer time, technologist time, tape purchases, and file space. This paper introduces compression techniques that reduce the amount of tape needed to store image data and the amount of computer time to do so. The benefits delivered by this technique have also been applied to online disk systems. Typical reductions of 40% to 50% of the original file space are reported.

  6. Multi-spectral compressive snapshot imaging using RGB image sensors.

    PubMed

    Rueda, Hoover; Lau, Daniel; Arce, Gonzalo R

    2015-05-01

    Compressive sensing is a powerful sensing and reconstruction framework for recovering high dimensional signals with only a handful of observations, and for spectral imaging it offers a novel method of multispectral imaging. Specifically, the coded aperture snapshot spectral imager (CASSI) system has been demonstrated to produce multi-spectral data cubes from a single snapshot taken by a monochrome image sensor. In this paper, we expand the theoretical framework of CASSI to include the spectral sensitivity of the image sensor pixels to account for color, and then investigate the impact on image quality of using either a traditional color image sensor that spatially multiplexes red, green, and blue light filters or a novel Foveon image sensor which stacks red, green, and blue pixels on top of one another. PMID:25969307

  7. Unsupervised orthogonalization neural network for image compression

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo; Ligomenides, Panos A.

    1992-11-01

    In this paper, we present an unsupervised orthogonalization neural network which, based on Principal Component (PC) analysis, acts as an orthonormal feature detector and decorrelation network. As in PC analysis, this network extracts the most heavily information-loaded features contained in the set of input training patterns. The network self-organizes its weight vectors so that they converge to a set of orthonormal weight vectors that span the eigenspace of the correlation matrix of the input patterns. Therefore, the network is applicable to practical image transmission problems for exploiting the natural redundancy that exists in most images and for preserving the quality of the compressed-decompressed image. We have applied the proposed neural model to the problem of image compression for visual communications. Simulation results have shown that the proposed neural model provides a high compression ratio, yields excellent perceptual visual quality of the reconstructed images, and gives a small mean square error. Generalization performance and convergence speed are also investigated.
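
    A network of this kind, whose weight vectors converge to orthonormal principal directions, can be sketched with Sanger's generalized Hebbian rule, used below purely as an illustrative stand-in for the authors' model.

        import numpy as np

        def train_gha(patterns, n_components=8, lr=1e-3, epochs=20):
            """patterns: (n_samples, d) array of zero-mean training vectors."""
            rng = np.random.default_rng(0)
            W = 0.01 * rng.standard_normal((n_components, patterns.shape[1]))
            for _ in range(epochs):
                for x in patterns:
                    y = W @ x
                    # Hebbian term minus lower-triangular decorrelation (Sanger's rule)
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W   # rows approximate the leading eigenvectors of the correlation matrix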

  8. Feasibility studies of optical processing of image bandwidth compression schemes

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.; Strickland, R. N.; Schowengerdt, R. A.

    1983-05-01

    This research focuses on these three areas: (1) formulation of alternative architectural concepts for image bandwidth compression, i.e., the formulation of components and schematic diagrams which differ from conventional digital bandwidth compression schemes by being implemented by various optical computation methods; (2) simulation of optical processing concepts for image bandwidth compression, so as to gain insight into typical performance parameters and elements of system performance sensitivity; and (3) maturation of optical processing for image bandwidth compression until the overall state of optical methods in image compression becomes equal to that of digital image compression.

  9. Multi-wavelength compressive computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Welsh, Stephen S.; Edgar, Matthew P.; Jonathan, Phillip; Sun, Baoqing; Padgett, Miles J.

    2013-03-01

    The field of ghost imaging encompasses systems which can retrieve the spatial information of an object through correlated measurements of a projected light field, having spatial resolution, and the associated reflected or transmitted light intensity measured by a photodetector. By employing a digital light projector in a computational ghost imaging system with multiple spectrally filtered photodetectors we obtain high-quality multi-wavelength reconstructions of real macroscopic objects. We compare different reconstruction algorithms and reveal the use of compressive sensing techniques for achieving sub-Nyquist performance. Furthermore, we demonstrate the use of this technology in non-visible and fluorescence imaging applications.
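
    The correlation reconstruction at the heart of computational ghost imaging can be illustrated with a toy simulation. This is a hedged sketch: the random patterns and bucket detector below stand in for the digital light projector and spectrally filtered photodetectors of the experiment.

        import numpy as np

        rng = np.random.default_rng(1)
        h = w = 32
        obj = np.zeros((h, w))
        obj[8:24, 12:20] = 1.0                                # simple test object

        n_patterns = 4000
        patterns = rng.random((n_patterns, h, w))             # projected light fields
        bucket = (patterns * obj).sum(axis=(1, 2))            # single-pixel detector signal

        # second-order correlation <b*P> - <b><P> recovers the object
        ghost = (bucket[:, None, None] * patterns).mean(axis=0) \
                - bucket.mean() * patterns.mean(axis=0)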

  10. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  11. Performance assessment of compressive sensing imaging

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Haefner, David P.; Preece, Bradley L.

    2014-05-01

    Compressive sensing (CS) can potentially form an image of equivalent quality to a large format, megapixel array, using a smaller number of individual measurements. This has the potential to provide smaller, cheaper, and lower bandwidth imaging systems. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts, sensitivity to noise, and CS limitations. Full resolution imagery of an eight tracked vehicle target set at range was used as an input for simulated single-pixel CS camera measurements. The CS algorithm then reconstructs images from the simulated single-pixel CS camera for various levels of compression and noise. For comparison, a traditional camera was also simulated setting the number of pixels equal to the number of CS measurements in each case. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NVIPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of compressive sensing modeling will be discussed.

  12. Chest tuberculosis: Radiological review and imaging recommendations

    PubMed Central

    Bhalla, Ashu Seith; Goyal, Ankur; Guleria, Randeep; Gupta, Arun Kumar

    2015-01-01

    Chest tuberculosis (CTB) is a widespread problem, especially in our country where it is one of the leading causes of mortality. The article reviews the imaging findings in CTB on various modalities. We also attempt to categorize the findings into those definitive for active TB, indeterminate for disease activity, and those indicating healed TB. Though various radiological modalities are widely used in evaluation of such patients, no imaging guidelines exist for the use of these modalities in diagnosis and follow-up. Consequently, imaging is not optimally utilized and patients are often unnecessarily subjected to repeated CT examinations, which is undesirable. Based on the available literature and our experience, we propose certain recommendations delineating the role of imaging in the diagnosis and follow-up of such patients. The authors recognize that this is an evolving field and there may be future revisions depending on emergence of new evidence. PMID:26288514

  13. A recommender system for medical imaging diagnostic.

    PubMed

    Monteiro, Eriksson; Valente, Frederico; Costa, Carlos; Oliveira, José Luís

    2015-01-01

    The large volume of data captured daily in healthcare institutions is opening new and great perspectives about the best ways to use it towards improving clinical practice. In this paper we present a context-based recommender system to support medical imaging diagnostic. The system relies on data mining and context-based retrieval techniques to automatically lookup for relevant information that may help physicians in the diagnostic decision.

  14. Compressive sensing based ptychography image encryption

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography combined with optical image encryption is proposed. The diffraction pattern is recorded with the ptychography technique and further compressed by non-uniform sampling within the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, as well as the small number of measurements of the encrypted samples, serves as a secret key, making intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system can be robust even with partial encryption and under brute-force attacks.

  15. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  16. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
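
    The noise-sets-the-limit argument can be made concrete with a short estimate. The sketch below (an assumed formulation, not necessarily the paper's exact procedure) measures the per-pixel Gaussian noise from differences of adjacent pixels, converts it to an equivalent number of incompressible noise bits via the entropy of unit-step-quantized Gaussian noise, and divides the pixel bit depth by that figure to predict an upper bound on the lossless compression ratio.

        import numpy as np

        def predicted_compression_ratio(image, bits_per_pixel=16):
            """Rough upper bound on the lossless compression ratio implied by pixel noise."""
            diffs = np.diff(image.astype(np.float64), axis=1).ravel()
            # robust sigma of the differences, scaled back to per-pixel noise
            sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs))) / np.sqrt(2.0)
            noise_bits = 0.5 * np.log2(2.0 * np.pi * np.e * sigma ** 2)   # entropy of the noise
            return bits_per_pixel / max(noise_bits, 1.0)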

  17. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  18. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of value-of-interest transformation and found significant masking of artifacts when window is widened. PMID:25804842

  19. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  1. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
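
    Two of the steps above, gamma correction and conversion to a two-color image, are easy to sketch. The fragment below is a hedged illustration with placeholder gamma and threshold values; the edge-filling, combination, Huffman coding, and decimation stages of the patent are omitted.

        import numpy as np

        def to_two_color(gray, gamma=0.7, threshold=128):
            """gray: 2-D uint8 scan; returns a boolean mask (True = black foreground)."""
            corrected = 255.0 * (gray.astype(np.float64) / 255.0) ** gamma   # contrast boost
            return corrected < threshold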

  2. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  3. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run-lengths, and takes the difference between this average
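
    For reference, the two code families named above are simple to write down in their power-of-two (Rice) form. The hedged sketch below emits code words as bit strings for nonnegative integers; the adaptive parameter selection and the surrounding run-length parser are omitted.

        def golomb_rice(n, k):
            """Golomb code with divisor 2**k: quotient in unary, then k remainder bits."""
            q = n >> k
            remainder = format(n & ((1 << k) - 1), "b").zfill(k) if k else ""
            return "1" * q + "0" + remainder

        def exp_golomb(n, k=0):
            """Order-k exponential-Golomb code for a nonnegative integer."""
            prefix = (n >> k) + 1
            info = format(prefix, "b")
            suffix = format(n & ((1 << k) - 1), "b").zfill(k) if k else ""
            return "0" * (len(info) - 1) + info + suffix   # e.g. exp_golomb(3) == "00100"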

  4. Image Data Compression In A Personal Computer Environment

    NASA Astrophysics Data System (ADS)

    Farrelle, Paul M.; Harrington, Daniel G.; Jain, Anil K.

    1988-12-01

    This paper describes an image compression engine that is valuable for compressing virtually all types of images that occur in a personal computer environment. This allows efficient handling of still frame video images (monochrome or color) as well as documents and graphics (black-and-white or color) for archival and transmission applications. Through software control different image sizes, bit depths, and choices between lossless compression, high speed compression and controlled error compression are allowed. Having integrated a diverse set of compression algorithms on a single board, the device is suitable for a multitude of picture archival and communication (PAC) applications including medical imaging, electronic publishing, prepress imaging, document processing, law enforcement and forensic imaging.

  5. Research on compressive fusion for remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A compressive fusion of remote sensing images is presented based on block compressed sensing (BCS) and the non-subsampled contourlet transform (NSCT). Since BCS requires little memory and enables fast computation, images with large amounts of data can first be compressively sampled into block images with a structured random matrix. The compressive measurements are then decomposed with the NSCT and their coefficients are fused by a rule of linear weighting. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, taking blocking artifacts into consideration. A field test on remote sensing image fusion shows the validity of the proposed method.

  6. Texture-based medical image compression.

    PubMed

    Bairagi, Vinayak K; Sapkal, Ashok M; Tapaswi, Ankita

    2013-02-01

    Image processing is one of the most researched areas these days due to the flooding of the Internet with an overload of images. The medical industry is not left untouched: it too suffers from an excess of patient record storage and maintenance. With the increasing automation of industries worldwide, the medical industry has sought to become more portable, leading to fields such as telemedicine. Our algorithm comes in handy in such scenarios, where large amounts of data need to be transmitted over the network for perusal by another consultant. We aim for a visual quality approach in our algorithm rather than pixel-wise fidelity, and we utilize edge and texture parameters as the basic parameters in our compression algorithm.

  7. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery. PMID:26429361
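
    A textbook LZW encoder of the kind used here for the watermark (a hedged minimal sketch, not the authors' implementation) can be written in a few lines: the dictionary starts with all single bytes and grows as longer phrases are encountered.

        def lzw_compress(data: bytes):
            """Return the list of LZW dictionary codes for `data`."""
            dictionary = {bytes([i]): i for i in range(256)}
            phrase, codes = b"", []
            for value in data:
                candidate = phrase + bytes([value])
                if candidate in dictionary:
                    phrase = candidate                       # keep extending the phrase
                else:
                    codes.append(dictionary[phrase])
                    dictionary[candidate] = len(dictionary)  # learn the new phrase
                    phrase = bytes([value])
            if phrase:
                codes.append(dictionary[phrase])
            return codes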

  9. Compressed sensing in imaging mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-12-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach to perform both peak-picking spectra and denoising m/z-images simultaneously, whereas the state of the art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with the numerical reconstruction results for an IMS dataset of a rat brain coronal section.

  10. Chronic edema of the lower extremities: international consensus recommendations for compression therapy clinical research trials.

    PubMed

    Stout, N; Partsch, H; Szolnoky, G; Forner-Cordero, I; Mosti, G; Mortimer, P; Flour, M; Damstra, R; Piller, N; Geyer, M J; Benigni, J-P; Moffat, C; Cornu-Thenard, A; Schingale, F; Clark, M; Chauveau, M

    2012-08-01

    Chronic edema is a multifactorial condition affecting patients with various diseases. Although the pathophysiology of edema varies, compression therapy is a basic tenet of treatment, vital to reducing swelling. Clinical trials are disparate or lacking regarding specific protocols and application recommendations for compression materials and methodology to enable optimal efficacy. Compression therapy is a basic treatment modality for chronic leg edema; however, the evidence base for the optimal application, duration and intensity of compression therapy is lacking. The aim of this document is to present the proceedings of a day-long international expert consensus group meeting that examined the current state of the science for the use of compression therapy in chronic edema. An expert consensus group met in Brighton, UK, in March 2010 to examine the current state of the science for compression therapy in chronic edema of the lower extremities. Panel discussions and open space discussions examined the current literature, clinical practice patterns, common materials and emerging technologies for the management of chronic edema. This document outlines a proposed clinical research agenda focusing on compression therapy in chronic edema. Future trials comparing different compression devices, materials, pressures and parameters for application are needed to enhance the evidence base for optimal chronic edema management. Important outcome measures and methods of pressure and edema quantification are outlined. Future trials are encouraged to optimize compression therapy in chronic edema of the lower extremities.

  11. Image compression using wavelet transform and multiresolution decomposition.

    PubMed

    Averbuch, A; Lazar, D; Israeli, M

    1996-01-01

    Schemes for image compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform is shown to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages called shapes with different resolutions corresponding to different frequency bands. Hence, different bit allocations are tested, assuming that details at high resolution and diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images in the training sequence achieve a compression ratio of 60-65 and a PSNR of 30-33. To investigate compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables; the compression ratio is 40 and PSNRs are 30-36. Images from the training set have similar compression values and quality. Finally, another compression method based on end-vector bit allocation is examined.

  12. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
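
    The quantization step that such a preprocessor optimizes can be sketched as follows (assuming NumPy and SciPy). The flat table below is a placeholder: the invention derives a rate-distortion-optimal table from the gathered DCT statistics by dynamic programming, which is not reproduced here.

        import numpy as np
        from scipy.fft import dctn

        def quantize_blocks(image, qtable):
            """8x8 block DCT followed by division by a quantization table."""
            h, w = (s - s % 8 for s in image.shape)          # crop to whole 8x8 blocks
            out = np.zeros((h // 8, w // 8, 8, 8), dtype=np.int32)
            for i in range(0, h, 8):
                for j in range(0, w, 8):
                    block = dctn(image[i:i + 8, j:j + 8].astype(np.float64), norm="ortho")
                    out[i // 8, j // 8] = np.round(block / qtable)
            return out

        qtable = np.full((8, 8), 16.0)                       # placeholder quantization table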

  13. Fast fractal image compression with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Soundararajan, Ezekiel

    1998-10-01

    We address the problem of improving the performance of wavelet based fractal image compression by applying efficient triangulation methods. We construct iterative function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low resolution mismatches to gain speed. We obtain further improvements by the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and `second generation' interpolation wavelets as suggested by Sweldens' recent work.

  14. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for the extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  15. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve the compression efficiency of multispectral images and further facilitate their storage and transmission for applications such as color reproduction, in which high color accuracy is desired, WF serial methods are proposed and the APWS_RA algorithm is designed. Then the WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability and support for consistent color reproduction across devices, is presented. The conventional MSE-based wavelet embedded coding principle is first studied. Then a color perception distortion criterion and a visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above techniques, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image. Then a two-dimensional wavelet transform is used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that at the same bit rate, compared with classical coding algorithms, the WF serial algorithms have better performance on color retention. APWS_RA preserves the least spectral error and the WF_APWS_RA algorithm has an obvious superiority in color accuracy.

  16. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  17. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2011-10-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least squares data fitting, total variation (TV) and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542
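
    A hedged sketch of the composite-splitting idea described above follows: starting from a zero-filled reconstruction, take a gradient step on the least-squares data-fitting term, then solve the L1 subproblem (wavelet soft-thresholding) and the TV subproblem separately and average the two results. The sampling mask, step size, thresholds, and the specific TV solver (scikit-image's Chambolle denoiser) are assumptions, not the authors' implementation.

```python
# Sketch: data-consistency gradient step + averaged L1 and TV subproblem solutions.
import numpy as np
import pywt
from skimage.restoration import denoise_tv_chambolle

def soft_threshold_wavelet(x, thresh, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    out = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(out, wavelet)[:x.shape[0], :x.shape[1]]

def cs_mri_reconstruct(kspace, mask, n_iter=30, step=1.0, l1_thresh=0.01, tv_weight=0.01):
    x = np.abs(np.fft.ifft2(kspace * mask))            # zero-filled starting point
    for _ in range(n_iter):
        residual = np.fft.fft2(x) * mask - kspace * mask
        x = x - step * np.real(np.fft.ifft2(residual))  # least-squares gradient step
        x_l1 = soft_threshold_wavelet(x, l1_thresh)     # L1 (wavelet) subproblem
        x_tv = denoise_tv_chambolle(x, weight=tv_weight)  # TV subproblem
        x = 0.5 * (x_l1 + x_tv)                         # weighted average of the two
    return x
```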

  18. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2010-01-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least squares data fitting, total variation (TV) and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224

  19. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.

  20. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
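
    A minimal sketch of the pipeline described above follows, assuming Pillow: decimate the image, compress it with ordinary JPEG as the predefined algorithm, decompress, interpolate back to the original size, and apply an unsharp mask to restore edge crispness. The decimation factor, JPEG quality, and filter parameters are illustrative assumptions, not values from the patent.

```python
# Sketch: decimate -> JPEG -> transmit -> decode -> interpolate -> sharpen.
import io
from PIL import Image, ImageFilter

def compress_reduced(path, factor=2, quality=75):
    img = Image.open(path).convert("L")
    small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)    # predefined compression step
    return buf.getvalue(), img.size

def decompress_expanded(jpeg_bytes, original_size):
    small = Image.open(io.BytesIO(jpeg_bytes))
    big = small.resize(original_size, Image.BICUBIC)   # interpolate back to full size
    return big.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
```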

  1. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  2. Texture-based medical image retrieval in compressed domain using compressive sensing.

    PubMed

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A

    2014-01-01

    Content-based image retrieval has gained considerable attention in today's scenario as a useful tool in many applications; texture is one of them. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields most affected, as image databases are huge and retrieving the image of interest is a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples, does not require a close match between the sampling pattern and the characteristic image structure, and offers increased acquisition speed and enhanced image quality.

  3. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is more and more demanding in terms of on-board image compression performance. The instrument resolution, agility and swath of Earth observation satellites are continuously increasing, multiplying by 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, a leader in the market of combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high performance and innovative image compression ASIC developed by Astrium in the frame of the ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth Observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.

  4. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  5. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  6. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  7. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  8. On-line structure-lossless digital mammogram image compression

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Huang, H. K.

    1996-04-01

    This paper proposes a novel on-line structure lossless compression method for digital mammograms during the film digitization process. The structure-lossless compression segments the breast and the background, compresses the former with a predictive lossless coding method and discards the latter. This compression scheme is carried out during the film digitization process and no additional time is required for the compression. Digital mammograms are compressed on-the-fly while they are created. During digitization, lines of scanned data are first acquired into a small temporary buffer in the scanner, then they are transferred to a large image buffer in an acquisition computer which is connected to the scanner. The compression process, running concurrently with the digitization process in the acquisition computer, constantly checks the image buffer and compresses any newly arrived data. Since compression is faster than digitization, data compression is completed as soon as digitization is finished. On-line compression during digitization does not increase overall digitizing time. Additionally, it reduces the mammogram image size by a factor of 3 to 9 with no loss of information. This algorithm has been implemented in a film digitizer. Statistics were obtained based on digitizing 46 mammograms at four sampling distances from 50 to 200 microns.

  9. Iliac vein compression syndrome: Clinical, imaging and pathologic findings

    PubMed Central

    Brinegar, Katelyn N; Sheth, Rahul A; Khademhosseini, Ali; Bautista, Jemianne; Oklu, Rahmi

    2015-01-01

    May-Thurner syndrome (MTS) is the pathologic compression of the left common iliac vein by the right common iliac artery, resulting in left lower extremity pain, swelling, and deep venous thrombosis. Though this syndrome was first described in 1851, there are currently no standardized criteria to establish the diagnosis of MTS. Since MTS is treated by a wide array of specialties, including interventional radiology, vascular surgery, cardiology, and vascular medicine, the need for an established diagnostic criterion is imperative in order to reduce misdiagnosis and inappropriate treatment. Although MTS has historically been diagnosed by the presence of pathologic features, the use of dynamic imaging techniques has led to a more radiologic based diagnosis. Thus, imaging plays an integral part in screening patients for MTS, and the utility of a wide array of imaging modalities has been evaluated. Here, we summarize the historical aspects of the clinical features of this syndrome. We then provide a comprehensive assessment of the literature on the efficacy of imaging tools available to diagnose MTS. Lastly, we provide clinical pearls and recommendations to aid physicians in diagnosing the syndrome through the use of provocative measures. PMID:26644823

  10. Data delivery system for MAPPER using image compression

    NASA Astrophysics Data System (ADS)

    Yang, Jeehong; Savari, Serap A.

    2013-03-01

    The data delivery throughput of electron beam lithography systems can be improved by applying lossless image compression to the layout image and using an electron beam writer that can decode the compressed image on-the-fly. In earlier research we introduced the lossless layout image compression algorithm Corner2, which assumes a somewhat idealized writing strategy, namely row-by-row with a raster order. The MAPPER system has electron beam writers positioned in a lattice formation and each electron beam writer writes a designated block in a zig-zag order. We introduce Corner2-MEB, which redesigns Corner2 for MAPPER systems.

  11. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
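
    A hedged sketch of the segmentation idea described above follows: threshold the bright skull bone, clean the mask with morphological operators, and separate the diagnostically important interior from the bone/background so that the sharp bone edges do not consume the interior's bit budget. The threshold value and structuring-element size are assumptions, not the paper's settings.

```python
# Sketch: thresholding + morphology to separate skull bone from the interior region.
import numpy as np
from scipy import ndimage

def split_interior_and_bone(ct_slice, bone_threshold=300):
    """ct_slice: 2-D array in Hounsfield-like units."""
    bone = ct_slice > bone_threshold
    bone = ndimage.binary_closing(bone, structure=np.ones((5, 5)))
    head = ndimage.binary_fill_holes(bone)
    interior = head & ~bone                    # interior region (most diagnostic content)
    interior_img = np.where(interior, ct_slice, 0)
    bone_img = np.where(bone, ct_slice, 0)
    return interior_img, bone_img              # the two regions can then be coded separately
```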

  12. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
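
    The following is a hedged sketch of the front end described above: a 3-D wavelet transform of a hyperspectral cube, subtraction of the mean from each spatial plane of the spatially low-pass subband, and conversion to sign-magnitude form. It is a simplification (pywt's full 3-D transform stands in for ICER-3D's decomposition), and the bit-plane entropy coder and the context modeler itself are not reproduced.

```python
# Sketch: 3-D DWT, low-pass mean removal, and sign-magnitude conversion.
import numpy as np
import pywt

def icer3d_front_end(cube, wavelet="haar", level=2):
    """cube: (bands, rows, cols) hyperspectral image data."""
    coeffs = pywt.wavedecn(cube.astype(float), wavelet, level=level)
    lowpass = coeffs[0]
    lowpass -= lowpass.mean(axis=(1, 2), keepdims=True)   # per-plane mean removal
    coeffs[0] = lowpass
    flat, slices = pywt.coeffs_to_array(coeffs)
    signs = np.signbit(flat).astype(np.uint8)              # sign-magnitude form
    magnitudes = np.abs(flat)
    return signs, magnitudes, slices
```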

  13. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image Forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with Digital Watermarking. Benford's Law was first introduced to analyse the probability distribution of the 1st digit (1-9) of the numbers in natural data, and has since been applied to Accounting Forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which shows the deviation between the probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result
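
    Below is a hedged sketch of the first-digit analysis described above: compute the leading-digit distribution of DWT coefficient magnitudes and compare it with Benford's Law, P(d) = log10(1 + 1/d), via a simple divergence score. The exact divergence factor used in the paper may differ; a chi-square-style measure and the wavelet settings are assumptions.

```python
# Sketch: Benford's Law first-digit analysis of DWT detail coefficients.
import numpy as np
import pywt

BENFORD = np.log10(1 + 1 / np.arange(1, 10))          # P(d) = log10(1 + 1/d), d = 1..9

def first_digit_distribution(image, wavelet="db1", level=2):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    detail = np.concatenate([np.abs(d).ravel() for lvl in coeffs[1:] for d in lvl])
    detail = detail[detail > 1e-6]
    first_digits = (detail / 10 ** np.floor(np.log10(detail))).astype(int)
    counts = np.array([(first_digits == d).sum() for d in range(1, 10)])
    return counts / counts.sum()

def benford_divergence(p):
    return float(np.sum((p - BENFORD) ** 2 / BENFORD))

if __name__ == "__main__":
    img = np.random.rand(256, 256) * 255
    print("divergence from Benford's Law:", benford_divergence(first_digit_distribution(img)))
```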

  14. Compressing subbanded image data with Lempel-Ziv-based coders

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
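
    A hedged sketch of the pipeline described above follows: a separable Walsh-Hadamard transform on 8x8 blocks, coarse quantization, zero run-length coding, and a Lempel-Ziv-style back end (zlib is used here as a stand-in for the LZ coder in the paper). The block size and quantization step are assumptions.

```python
# Sketch: WHT subbands -> quantize -> zero run-length coding -> LZ-based coder.
import zlib
import numpy as np
from scipy.linalg import hadamard

def wht_subband_compress(image, bs=8, step=32):
    H = hadamard(bs) / np.sqrt(bs)                         # orthonormal Hadamard basis
    h, w = image.shape
    img = image[:h - h % bs, :w - w % bs].astype(float)
    blocks = img.reshape(-1, bs, img.shape[1] // bs, bs).swapaxes(1, 2)
    coeffs = H @ blocks @ H.T                              # separable 2-D WHT per block
    q = np.round(coeffs / step).astype(np.int16).ravel()
    rle, zeros = [], 0                                     # (value, run-of-zeros) pairs
    for v in q:
        if v == 0:
            zeros += 1
        else:
            rle.extend((int(v), zeros))
            zeros = 0
    rle.extend((0, zeros))
    payload = np.array(rle, dtype=np.int32).tobytes()
    return zlib.compress(payload)                          # Lempel-Ziv-based final stage

if __name__ == "__main__":
    img = np.random.rand(128, 128) * 255
    print("compressed bytes:", len(wht_subband_compress(img)), "original pixels:", img.size)
```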

  15. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You

    2016-10-01

    Because it carries out encryption and compression in a single simple step, compressed sensing (CS) is utilized to encrypt and compress an image. Differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by this difference in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels. A simple approximate evaluation method is introduced to test the sparsity uniformity. Due to its low computational complexity and storage requirements, in the second stage of encryption, KCS is adopted to encrypt and compress the scrambled and sparsely transformed image, where a measurement matrix of small size is constructed from the piece-wise linear chaotic map. Theoretical analysis and experimental results show that our proposed scrambling method based on ECA has great performance in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance and flexibility.

  16. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at a high rate in a pushbroom mode (also called scan-based mode). This process delivers fixed-length data to the mass memory, and data downlink is performed at a fixed rate too. Because of on-board memory limitations and high data rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment of higher overall energy. In order to locally correct these areas, CNES has studied an "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region of Interest handling, these multiplied coefficients will be processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.
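
    The following is a hedged sketch of the "exceptional processing" idea described above: compute a per-block energy criterion over the wavelet coefficients of a segment and amplify the coefficients of low-energy blocks so the bit-plane coder reaches them earlier; the decoder would divide by the same gain. The criterion, block size and gain below are illustrative assumptions, not CNES's actual settings.

```python
# Sketch: amplify wavelet coefficients of low-energy blocks before bit-plane encoding.
import numpy as np

def exceptional_processing(wavelet_plane, block=32, gain=4.0, low_energy_ratio=0.25):
    out = wavelet_plane.astype(float).copy()
    h, w = out.shape
    energies = {}
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            energies[(i, j)] = np.mean(out[i:i+block, j:j+block] ** 2)
    threshold = low_energy_ratio * np.median(list(energies.values()))
    amplified = []
    for (i, j), e in energies.items():
        if e < threshold:                      # low-energy area within the segment
            out[i:i+block, j:j+block] *= gain
            amplified.append((i, j))
    return out, amplified                      # amplified blocks must be signalled to the decoder
```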

  17. Image compression using a self-organized neural network

    NASA Astrophysics Data System (ADS)

    Ji, Qiang

    1997-04-01

    In the research described by this paper, we implemented and evaluated a linear self-organized feedforward neural network for image compression. Based on the generalized Hebbian learning algorithm (GHA), the neural network extracts the principal components from the auto-correlation matrix of the input images. To do so, an image is first divided into mutually exclusive square blocks of size m by m. Each block represents a feature vector of dimension m2 in the feature space. The input dimension of the neural net is therefore m2 and the output dimension is m. Training based on GHA for each block then yields a weight matrix of dimension m by m2, the rows of which are the eigenvectors of the auto-correlation matrix of the input image block. Projection of each image block onto the extracted eigenvectors yields m coefficients for each block. Image compression is then accomplished by quantizing and coding the coefficients for each block. To evaluate the performance of the neural network, two experiments were conducted using standard IEEE images. First, the neural net was implemented to compress images at different bit rates using different block sizes. Second, to test the neural network's generalization capability, the set of principal components extracted from one image was used for compressing different but statistically similar images. The evaluation, based on both visual inspection and statistical measures (NMSE and SNR) of the reconstructed images, demonstrates that the network can yield satisfactory image compression performance and possesses a good generalization capability.
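
    Below is a hedged sketch of the learning stage described above: Sanger's generalized Hebbian algorithm (GHA) applied to m x m image blocks, whose converged weight rows approximate the leading eigenvectors of the block auto-correlation matrix. Block size, learning rate, and iteration count are assumptions; the quantization and coding of the projected coefficients are not shown.

```python
# Sketch: GHA (Sanger's rule) learns principal components of image blocks.
import numpy as np

def gha_train(image, m=8, n_components=8, lr=1e-4, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    blocks = (image[:h - h % m, :w - w % m]
              .reshape(h // m, m, -1, m).swapaxes(1, 2).reshape(-1, m * m))
    blocks = blocks - blocks.mean(axis=0)
    W = rng.normal(scale=0.01, size=(n_components, m * m))    # weight matrix
    for _ in range(epochs):
        for x in blocks:
            y = W @ x
            # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W                                                   # rows ~ principal components

def compress_block(block, W, keep=4):
    # Project a block onto the first `keep` learned components; these coefficients
    # would then be quantized and entropy coded.
    return W[:keep] @ (block.ravel() - block.mean())
```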

  18. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the keys distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.

  19. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  20. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741

  1. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  2. In-orbit commissioning of SPOT5 image compression function

    NASA Astrophysics Data System (ADS)

    Moury, Gilles A.; Latry, Christophe

    2003-11-01

    CNES has launched in May 2002 a new high resolution (2.5m) and large swath (2 x 60km) optical remote sensing satellite: SPOT5. To achieve a high image acquisition capacity with this system, a large on-board mass memory (100 Gbits) together with a 3:1 real-time compression are being used. The quasi-lossless and fixed output rate requirements put on the on-board image compression resulted in the development of a custom algorithm. This algorithm is based on: a DCT decorrelator, a scalar quantizer, an entropy coder and a rate regulator. It has been extensively tested before launch both in terms of quantitative performances and in terms of visual performances. The objectives of the on-orbit validation of the SPOT5 image compression function were the following: (1) Perform an image quality assessment in worst case conditions for the compression. In particular, the THR mode (2.5 m resolution) is potentially sensitive to compression noise and was therefore thoroughly checked for any compression artefacts. Compression noise characteristics were taken into account in the denoising stage of the ground processing for improved performances; (2) Verify the adequacy of the compression parameters with regard to the in-flight characteristics of the instruments (MTF, radiometric spreading, ...); (3) Technological checkout of the compression unit on board the satellite. This paper will present an overview of SPOT5 mission, the methods used for on-orbit validation of the compression and, finally, all the validation results together with the lessons learned throughout this development. On-board image compression for future CNES remote sensing missions will be addressed as a conclusion.

  3. A high-speed distortionless predictive image-compression scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Smyth, P.; Wang, H.

    1990-01-01

    A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.
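
    A hedged sketch of distortionless predictive coding in the spirit described above follows: a simple previous-pixel DPCM predictor, with the empirical entropy of the prediction residuals used as an estimate of the achievable lossless rate. The predictor choice is an assumption, and the source-code design (the actual entropy coder) is not reproduced.

```python
# Sketch: previous-pixel DPCM prediction and residual entropy as a lossless-rate estimate.
import numpy as np

def dpcm_residuals(image):
    img = image.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]          # predict each pixel from its left neighbour
    pred[1:, 0] = img[:-1, 0]          # first column predicted from the pixel above
    return img - pred

def residual_entropy_bits_per_pixel(residuals):
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    img = np.clip(np.cumsum(np.random.randn(256, 256), axis=1), 0, 255).astype(np.uint8)
    r = dpcm_residuals(img)
    print("estimated lossless rate:", residual_entropy_bits_per_pixel(r), "bits/pixel")
```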

  4. Increasing FTIR spectromicroscopy speed and resolution through compressive imaging

    SciTech Connect

    Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C

    2007-10-15

    At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, Beamline 1.4.4 Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.

  5. Image compression and decompression based on gazing area

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Endo, Chizuko; Haneishi, Hideaki; Miyake, Yoichi

    1996-04-01

    In this paper, we introduce a new data compression and decompression technique for searching for a desired image based on the gazing area of the image. Many methods of data compression have been proposed. In particular, the JPEG compression technique has been widely used as a standard method. However, this method is not always effective for searching for a desired image in an image filing system. In a previous paper, by eye movement analysis, we found that images have a particular gazing area. The gazing area is considered to be the most important region of the image, so we considered using this information to compress and transmit the image. A method named fixation-based progressive image transmission is introduced to transmit the image effectively. In this method, after the gazing area is estimated, this area is transmitted first and then the other regions are transmitted. If we are not interested in the first transmitted image, we can move on to other images. Therefore, the desired image can be found in the filing system effectively. We compare the searching time of the proposed method with that of the conventional method. The result shows that the proposed method is faster than the conventional one at finding the desired image.

  6. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm. PMID:26671800

  7. Compensation of log-compressed images for 3-D ultrasound.

    PubMed

    Sanches, João M; Marques, Jorge S

    2003-02-01

    In this study, a Bayesian approach was used for 3-D reconstruction in the presence of multiplicative noise and nonlinear compression of the ultrasound (US) data. Ultrasound images are often considered as being corrupted by multiplicative noise (speckle). Several statistical models have been developed to represent the US data. However, commercial US equipment performs a nonlinear image compression that reduces the dynamic range of the US signal for visualization purposes. This operation changes the distribution of the image pixels, preventing a straightforward application of the models. In this paper, the nonlinear compression is explicitly modeled and considered in the reconstruction process, where the speckle noise present in the radio frequency (RF) US data is modeled with a Rayleigh distribution. The results obtained by considering the compression of the US data are then compared with those obtained assuming no compression. It is shown that the estimation performed using the nonlinear log-compression model leads to better results than those obtained with the Rayleigh reconstruction method. The proposed algorithm is tested with synthetic and real data and the results are discussed. The results have shown an improvement in the reconstruction results when the compression operation is included in the image formation model, leading to sharper images with enhanced anatomical details.

  8. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  9. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
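
    Below is a hedged sketch of the quantization step described above: each 8x8 block is DCT-transformed and every coefficient is divided by the corresponding entry of a quantization matrix. The matrix below is the standard JPEG luminance table used purely as a placeholder; the invention instead derives an image-adapted, visually weighted matrix from luminance and contrast masking and error pooling, which is not reproduced here.

```python
# Sketch: per-frequency quantization of 8x8 DCT blocks with a quantization matrix.
import numpy as np
from scipy.fft import dctn, idctn

JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61], [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56], [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77], [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101], [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize_block(block, qmatrix=JPEG_LUMA_Q):
    return np.round(dctn(block.astype(float) - 128, norm="ortho") / qmatrix)

def dequantize_block(qblock, qmatrix=JPEG_LUMA_Q):
    return idctn(qblock * qmatrix, norm="ortho") + 128
```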

  10. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientist in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  11. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A highly performing image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.

  12. An image compression technique for use on token ring networks

    NASA Astrophysics Data System (ADS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-12-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  13. Pre-Processor for Compression of Multispectral Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron

    2006-01-01

    A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.

  14. Simultaneous fusion, compression, and encryption of multiple images.

    PubMed

    Alfalou, A; Brosseau, C; Abdallah, N; Jridi, M

    2011-11-21

    We report a new spectral multiple image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method. In order to decrease the size of the multiplexed file, we suggest a compression procedure based on an adapted spectral quantization. Each frequency is encoded with an optimized number of bits according to its importance and its position in the DCT domain. This fusion and compression scheme constitutes a first level of encryption. A supplementary level of encryption is realized by making use of biometric information. We consider several implementations of this analysis by experimenting with sequences of gray scale images. To quantify the performance of our method we calculate the MSE (mean squared error) and the PSNR (peak signal to noise ratio). Our results consistently improve performance compared to the well-known JPEG image compression standard and provide a viable solution for simultaneous compression and encryption of multiple images.

  15. An image compression technique for use on token ring networks

    NASA Technical Reports Server (NTRS)

    Gorjala, B.; Sayood, Khalid; Meempat, G.

    1992-01-01

    A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.

  16. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  17. The Effects of Applying Breast Compression in Dynamic Contrast Material–enhanced MR Imaging

    PubMed Central

    Macura, Katarzyna J.; Kamel, Ihab R.; Bluemke, David A.; Jacobs, Michael A.

    2014-01-01

    resulted in complete loss of enhancement of nine of 210 lesions (4%). Conclusion Breast compression during biopsy affected breast lesion detection, lesion size, and dynamic contrast-enhanced MR imaging interpretation and performance. Limiting the application of breast compression is recommended, except when clinically necessary. © RSNA, 2014 Online supplemental material is available for this article. PMID:24620911

  18. OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis

    PubMed Central

    Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David J.

    2015-01-01

    Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344

  19. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images that will be accessed by users across computer networks. Accessing the image data at its full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application, since decompression of VQ-compressed images is a table-lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ, which involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
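
    The abstract's central point, that VQ decompression is only a table lookup, can be illustrated with a minimal sketch; the codebook (whose selection the planning system automates) is taken as given here, and the block size and codebook size are arbitrary illustrative choices.

    ```python
    # Minimal VQ encode/decode sketch with an assumed, precomputed codebook.
    import numpy as np

    def vq_encode(blocks, codebook):
        """Map each image block (flattened vector) to the index of its nearest codeword."""
        # blocks: (N, d) array, codebook: (K, d) array
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)          # (N,) indices; this is the compressed stream

    def vq_decode(indices, codebook):
        """Decompression is just a table lookup, as noted in the abstract."""
        return codebook[indices]

    # Toy usage: 4x4 blocks from a random "image", 64-entry codebook.
    rng = np.random.default_rng(0)
    blocks = rng.integers(0, 256, size=(1024, 16)).astype(float)
    codebook = rng.integers(0, 256, size=(64, 16)).astype(float)
    recon = vq_decode(vq_encode(blocks, codebook), codebook)
    ```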

  20. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar and Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts that will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.

  1. Lossless grey image compression using a splitting binary tree

    NASA Astrophysics Data System (ADS)

    Li, Tao; Tian, Xin; Xiong, Cheng-Yi; Li, Yan-Sheng; Zhang, Yun; Tian, Jin-Wen

    2013-10-01

    A multi-layer coding algorithm is proposed for grey-image lossless compression. We transform the original image with a set of bases (e.g., wavelets, DCT, and gradient spaces). Then, the transformed image is split into a sub-image set with a binary tree. The set includes two parts, major sub-images and minor sub-images, which are coded separately. Experimental results over a common dataset show that the proposed algorithm performs close to JPEG-LS in terms of bitrate. However, we obtain scalable image quality, similar to JPEG2000: a suboptimal compressed image can still be obtained when the bitstream is truncated by unexpected factors. Our algorithm is therefore quite suitable for image transmission, over the Internet or from satellites.

  2. Assessment of effects of lossy compression of hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Su, Jonathan K.; Hsu, Su May; Orloff, Seth

    2004-08-01

    Hyperspectral imaging (HSI) sensors provide imagery with hundreds of spectral bands, typically covering VNIR and/or SWIR wavelengths. This high spectral resolution aids applications such as terrain classification and material identification, but it can also produce imagery that occupies well over 100 MB, which creates problems for storage and transmission. This paper investigates the effects of lossy compression on a representative HSI cube, with background classification serving as an example application. The compression scheme first performs principal components analysis spectrally, then discards many of the lower-importance principal-component (PC) images, and then applies JPEG2000 spatial compression to each of the individual retained PC images. The assessment of compression effects considers both general-purpose distortion measures, such as root mean square difference, and statistical tests for deciding whether compression causes significant degradations in classification. Experimental results demonstrate the effectiveness of proper PC-image rate allocation, which enabled compression at ratios of 100-340 without producing significant classification differences. Results also indicate that distortion might serve as a predictor of compression-induced changes in application performance.
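
    The spectral stage described above can be sketched as follows; the per-PC JPEG2000 spatial coding and the rate-allocation study from the paper are omitted, and the number of retained components k is left as a free parameter.

    ```python
    # Sketch of spectral PCA compression: keep only the top-k principal-component images.
    import numpy as np

    def spectral_pca_compress(cube, k):
        """cube: (rows, cols, bands) hyperspectral cube; returns PC images, basis, mean."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = np.cov(Xc, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1][:k]        # top-k components by variance
        basis = eigvecs[:, order]                    # (bands, k)
        pcs = (Xc @ basis).reshape(rows, cols, k)    # retained PC images
        return pcs, basis, mean

    def spectral_pca_reconstruct(pcs, basis, mean):
        rows, cols, k = pcs.shape
        X = pcs.reshape(-1, k) @ basis.T + mean
        return X.reshape(rows, cols, -1)
    ```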

  3. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm

  4. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.

  5. Lossy and lossless compression of MERIS hyperspectral images with exogenous quasi-optimal spectral transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2010-07-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loève transform (KLT) can be sub-optimal for non-Gaussian data. However, it is generally recommended as the best calculable coding transform in practice. For a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for onboard satellite image compression, the concept and computation of optimal spectral transforms (OST) at high bit-rates were carried out under low-restrictive hypotheses. These linear transforms are optimal for reducing the spectral redundancies of multi- or hyper-spectral images when the spatial redundancies are reduced with a fixed 2-D discrete wavelet transform. The problem with OSTs is their heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of hyperspectral images from the MERIS spectrometer. Moreover, we compute an integer variant of OrthOST for lossless compression. The performance is compared to that of the KLT in both lossy and lossless compression. We observe good performance of the exogenous OrthOST.

  6. A novel psychovisual threshold on large DCT for image compression.

    PubMed

    Abu, Nur Azman; Ernawan, Ferda

    2015-01-01

    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance, to reduce the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order will be the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the required quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images. Experimentally, the large quantization tables derived from the psychovisual threshold produce output images that are largely free of artifacts. In addition, the experimental results show that the psychovisual threshold produces better image quality at higher compression rates than JPEG image compression.
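
    As an illustration of how a quantization table thresholds transform coefficients, here is a minimal DCT quantize/dequantize sketch; the flat table below is a placeholder, not the psychovisually derived table proposed in the paper.

    ```python
    # Quantization of DCT coefficients with a table: small coefficients collapse to zero.
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

    def quantize_block(block, qtable):
        return np.round(dct2(block.astype(float)) / qtable)

    def dequantize_block(q, qtable):
        return idct2(q * qtable)

    # Toy usage on a 16x16 "large" block with a uniform placeholder table.
    block = np.random.default_rng(1).integers(0, 256, size=(16, 16)).astype(float)
    qtable = np.full((16, 16), 24.0)
    recon = dequantize_block(quantize_block(block, qtable), qtable)
    ```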

  7. Microscopic off-axis holographic image compression with JPEG 2000

    NASA Astrophysics Data System (ADS)

    Bruylants, Tim; Blinder, David; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-05-01

    With the advent of modern computing and imaging technologies, the use of digital holography has become practical in many applications such as microscopy, interferometry, non-destructive testing, data encoding, and certification. In this respect, the need for an efficient representation technology becomes imminent. However, microscopic holographic off-axis recordings have characteristics that differ significantly from those of regular natural imagery, because they represent a recorded interference pattern that mainly manifests itself in the high-frequency bands. Since regular image compression schemes are typically based on a Laplace frequency distribution, they are unable to optimally represent such holographic data. However, unlike most image codecs, the JPEG 2000 standard can be modified to efficiently cope with images containing such alternative frequency distributions by applying the arbitrary wavelet decomposition of Part 2. As such, employing packet decompositions already significantly improves the compression performance for off-axis holographic images over that of regular image compression schemes. Moreover, extending JPEG 2000 with directional wavelet transforms shows even higher compression efficiency improvements. Such an extension to the standard would only require signaling the applied directions, and would not impact any other existing functionality. In this paper, we show that wavelet packet decomposition combined with directional wavelet transforms provides efficient lossy-to-lossless compression of microscopic off-axis holographic imagery.

  8. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    PubMed

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performance of these techniques was compared on the basis of bit reduction and compression ratio. LZW was found to perform better than the others and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes.
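
    A minimal sketch of the embedding idea follows: losslessly compress the ROI-derived watermark and hide it in the least significant bits of RONI pixels. zlib is used here as a stand-in for the LZW coder evaluated in the paper, and the ROI/RONI segmentation and hash computation are assumed to happen elsewhere.

    ```python
    # LSB embedding of a losslessly compressed watermark into RONI pixels.
    import zlib
    import numpy as np

    def embed_lsb(roni_pixels, watermark_bytes):
        """roni_pixels: 1-D uint8 array of RONI pixel values; returns pixels carrying the payload."""
        payload = zlib.compress(watermark_bytes)                     # lossless compression (stand-in for LZW)
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        if bits.size > roni_pixels.size:
            raise ValueError("RONI too small for compressed watermark")
        out = roni_pixels.copy()
        out[:bits.size] = (out[:bits.size] & 0xFE) | bits            # overwrite only the LSB
        return out, bits.size

    def extract_lsb(roni_pixels, nbits):
        bits = roni_pixels[:nbits] & 1
        return zlib.decompress(np.packbits(bits).tobytes())
    ```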

  10. Medical image compression by using three-dimensional wavelet transformation.

    PubMed

    Wang, J; Huang, K

    1996-01-01

    This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) images that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within the two-dimensional (2-D) slices and a second filter bank in the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices. The pixel distances within a slice are normally less than 1 mm, and the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors apply various filter banks in the slice direction and compare the compression results. The results from 12 selected MR and CT image sets of various slice thicknesses show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set with a 1-mm slice distance. Compared with 2-D wavelet compression, the compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal-to-noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance. PMID:18215935
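
    A rough sketch of the separable 3-D transform described above: a 2-D wavelet filter bank within each slice followed by a Haar step along the slice direction. The Haar step is written out explicitly; the in-slice wavelet ('bior4.4') is only an assumed stand-in for the filter banks tested in the paper.

    ```python
    # One level of a separable 3-D DWT: 2-D DWT per slice, then Haar between slices.
    import numpy as np
    import pywt

    def separable_3d_dwt(volume):
        """volume: (slices, rows, cols) CT/MR stack with an even number of slices."""
        # 2-D DWT within each slice
        per_slice = [pywt.dwt2(s.astype(float), 'bior4.4') for s in volume]
        # Haar transform along the slice direction, applied to the approximation bands
        ll = np.stack([c[0] for c in per_slice])              # (slices, r', c')
        avg = (ll[0::2] + ll[1::2]) / np.sqrt(2.0)            # low-pass between slices
        diff = (ll[0::2] - ll[1::2]) / np.sqrt(2.0)           # high-pass between slices
        return avg, diff, [c[1] for c in per_slice]           # plus the in-slice detail bands
    ```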

  11. Projection-based spatially adaptive reconstruction of block-transform compressed images.

    PubMed

    Yang, Y; Galatsanos, N P; Katsaggelos, A K

    1995-01-01

    At the present time, block-transform coding is probably the most popular approach for image compression. For this approach, the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem. According to this approach, the decoded image is reconstructed using not only the transmitted data but, in addition, the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses another new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and the human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, and the analysis of its computational complexity is presented. Numerical experiments are shown that demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach.

  12. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    PubMed

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi

    2015-02-12

    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown superior capability in high-resolution image formation. However, most of this work focuses on scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack the adaptivity to characterize varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further improve its capability, and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme that alternates between SAR imaging and updating of the weighted autoregressive model. Finally, experimental results demonstrate the validity and generality of the proposed approach.

  13. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  14. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  15. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
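
    For illustration, the wavelet/scalar quantization idea can be sketched as a wavelet subband decomposition followed by uniform scalar quantization of each subband; the WSQ standard's specific filter bank, subband structure, bin widths and entropy coder are not reproduced here.

    ```python
    # Uniform scalar quantization of wavelet subbands (WSQ-like, but not the FBI spec).
    import numpy as np
    import pywt

    def wsq_like_quantize(image, levels=4, step=8.0):
        coeffs = pywt.wavedec2(image.astype(float), 'bior4.4', level=levels)
        quantized = [np.round(coeffs[0] / step)]                  # approximation band
        for detail in coeffs[1:]:
            quantized.append(tuple(np.round(band / step) for band in detail))
        return quantized

    def wsq_like_dequantize(quantized, step=8.0):
        coeffs = [quantized[0] * step]
        for detail in quantized[1:]:
            coeffs.append(tuple(band * step for band in detail))
        return pywt.waverec2(coeffs, 'bior4.4')
    ```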

  16. Compressive sampling for time critical microwave imaging applications

    PubMed Central

    O'Halloran, Martin; McGinley, Brian; Conceicao, Raquel C.; Kilmartin, Liam; Jones, Edward; Glavin, Martin

    2014-01-01

    Across all biomedical imaging applications, there is a growing emphasis on reducing data acquisition and imaging times. This research explores the use of a technique known as compressive sampling or compressed sensing (CS) as an efficient means of minimising the data acquisition time for time-critical microwave imaging (MWI) applications. Where a signal exhibits sparsity in the time domain, the proposed CS implementation allows for sub-sampled acquisition in the frequency domain and consequently shorter imaging times, albeit at the expense of a slight degradation in the reconstruction quality of the signals as the compression increases. This Letter focuses on ultra-wideband (UWB) radar MWI applications, where reducing acquisition time is of critical importance and a slight degradation in reconstruction quality may therefore be acceptable. The analysis demonstrates the effectiveness and suitability of CS for UWB applications. PMID:26609368

  17. Compressive spectral integral imaging using a microlens array

    NASA Astrophysics Data System (ADS)

    Feng, Weiyi; Rueda, Hoover; Fu, Chen; Qian, Chen; Arce, Gonzalo R.

    2016-05-01

    In this paper, a compressive spectral integral imaging system using a microlens array (MLA) is proposed. This system can sense 4D spectro-volumetric information into a compressive 2D measurement image on the detector plane. In the reconstruction process, the 3D spatial information at different depths and the spectral responses of each spatial volume pixel can be obtained simultaneously. In the simulation, sensing of the 3D objects is carried out by optically recording elemental images (EIs) using a scanned pinhole camera. With the elemental images, a spectral data cube with different perspectives and depth information can be reconstructed using the TwIST algorithm in the multi-shot compressive spectral imaging framework. Then, the 3D spatial images with one-dimensional spectral information at arbitrary depths are computed with the computational integral imaging method by inversely mapping the elemental images according to geometrical optics. The simulation results verify the feasibility of the proposed system. The 3D volume images and the spectral information of the volume pixels can be successfully reconstructed at the location of the 3D objects. The proposed system can capture both 3D volumetric images and spectral information at video rate, which is valuable in biomedical imaging and chemical analysis.

  18. Context and task-aware knowledge-enhanced compressive imaging

    NASA Astrophysics Data System (ADS)

    Rao, Shankar; Ni, Kang-Yu; Owechko, Yuri

    2013-09-01

    We describe a foveated compressive sensing approach for image analysis applications that utilizes knowledge of the task to be performed to reduce the number of required measurements compared to conventional Nyquist sampling and compressive sensing based approaches. Our Compressive Optical Foveated Architecture (COFA) adapts the dictionary and compressive measurements to structure and sparsity in the signal, task, and scene by reducing measurement and dictionary mutual coherence and increasing sparsity using principles of actionable information and foveated compressive sensing. Actionable information is used to extract task-relevant regions of interest (ROIs) from a low-resolution scene analysis by eliminating the effects of nuisances for occlusion and anomalous motion detection. From the extracted ROIs, preferential measurements are taken using foveation as part of the compressive sensing adaptation process. The task-specific measurement matrix is optimized by using a novel saliency-weighted coherence minimization with respect to the learned signal dictionary, which incorporates the relative usage of the atoms in the dictionary. Therefore, the measurement matrix is not random, as in conventional compressive sensing, but is based on the dictionary structure and atom distributions. We utilize a patch-based method to learn the signal priors: a tree-structured dictionary of image patches is learned using K-SVD, which can sparsely represent any given image patch through the tree structure. We have implemented COFA in an end-to-end simulation of a vehicle fingerprinting task for aerial surveillance, using foveated compressive measurements adapted to hierarchical ROIs consisting of background, roads, and vehicles. Our results show a 113x reduction in measurements over conventional sensing and a 28x reduction over compressive sensing using random measurements.

  19. Onboard low-complexity compression of solar stereo images.

    PubMed

    Wang, Shuang; Cui, Lijuan; Cheng, Samuel; Stanković, Lina; Stanković, Vladimir

    2012-06-01

    We propose an adaptive distributed compression solution using particle filtering that tracks correlation, as well as performing disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's Solar TErrestrial RElations Observatory (STEREO) project. Our experimental results show improved compression performance with respect to a benchmark compression scheme, accurate correlation estimation by our proposed particle-based belief propagation algorithm, and significant peak signal-to-noise ratio improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.

  20. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  1. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computational power is limited.
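
    Golomb-Rice coding of prediction residuals with shift-and-mask operations only, in the spirit of the low-complexity design above, can be sketched as follows; the actual FAIC predictor and parameter rules are not reproduced.

    ```python
    # Golomb-Rice coding of signed residuals using only shifts, masks and additions.
    def zigzag_map(residual):
        """Map signed residuals to non-negative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
        return (residual << 1) if residual >= 0 else ((-residual) << 1) - 1

    def rice_encode(value, k):
        """Encode a non-negative integer with Rice parameter k as a bit string."""
        q, r = value >> k, value & ((1 << k) - 1)
        unary = '1' * q + '0'
        return unary + format(r, '0{}b'.format(k)) if k > 0 else unary

    def rice_decode(bits, k):
        q = 0
        while bits[q] == '1':       # unary-coded quotient
            q += 1
        r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
        return (q << k) | r

    # Example: residual -3 maps to 5; with k=2 it encodes as '10' + '01'.
    assert rice_decode(rice_encode(zigzag_map(-3), 2), 2) == zigzag_map(-3)
    ```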

  2. Imaging Recommendations for Acute Stroke and Transient Ischemic Attack Patients

    PubMed Central

    Wintermark, Max; Sanelli, Pina C.; Albers, Gregory W.; Bello, Jacqueline A.; Derdeyn, Colin P.; Hetts, Steven W.; Johnson, Michele H.; Kidwell, Chelsea S.; Lev, Michael H.; Liebeskind, David S.; Rowley, Howard A.; Schaefer, Pamela W.; Sunshine, Jeffrey L.; Zaharchuk, Greg; Meltzer, Carolyn C.

    2014-01-01

    In the article entitled “Imaging Recommendations for Acute Stroke and Transient Ischemic Attack Patients: A Joint Statement by the American Society of Neuroradiology, the American College of Radiology and the Society of NeuroInterventional Surgery”, we are proposing a simple, pragmatic approach that will allow the reader to develop an optimal imaging algorithm for stroke patients at their institution. PMID:23948676

  3. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-size hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new, efficient biometric-based person verification/authentication systems. In the future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  4. Compression through decomposition into browse and residual images

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.; Manohar, M.

    1993-01-01

    Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost-effective browsing of the image data (possibly from a remote site) and retrieval of the original image data from the archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data is then obtained by taking the pixel-by-pixel differences between the original data and the browse image data. We then code the residual data with a form of variable-length coding called diagonal coding. In our experiments, JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic Mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
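
    The browse/residual decomposition can be sketched with a JPEG round trip; the diagonal variable-length coding of the residual used in the paper is omitted, and Pillow's JPEG codec stands in for the original JPEG/DCT implementation.

    ```python
    # Decompose a band into a lossy JPEG browse image and a lossless residual.
    import io
    import numpy as np
    from PIL import Image

    def browse_and_residual(band, quality=50):
        """band: 2-D uint8 array (e.g., one TM band). Returns (browse, residual)."""
        buf = io.BytesIO()
        Image.fromarray(band).save(buf, format='JPEG', quality=quality)
        buf.seek(0)
        browse = np.asarray(Image.open(buf))                 # lossy browse product
        residual = band.astype(np.int16) - browse.astype(np.int16)
        return browse, residual                              # original = browse + residual
    ```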

  5. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to the various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by characteristics of the human visual system (HVS) such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends highly on structural randomness, the prediction error from the neighborhood under a statistical model is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal content can be removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170

  6. Compressive microscopic imaging with "positive-negative" light modulation

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing

    2016-07-01

    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation with a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by one or more orders of magnitude compared with that obtained by a conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against instability of the light source and may be applied under very weak light conditions. Its nature and the sources of noise are analyzed in depth. The realization of this technique represents a significant step toward the practical application of compressive microscopic imaging in the fields of biology and materials science.

  7. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  8. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate the widespread adoption of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to the state of the art in HDR image compression.

  9. Differentiation applied to lossless compression of medical images.

    PubMed

    Nijim, Y W; Stearns, S D; Mikhael, W B

    1996-01-01

    Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach for the lossless compression of magnetic resonance (MR) images and ultrasonic images is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, with 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images. PMID:18215936

  10. Ultrasonic elastography using sector scan imaging and a radial compression.

    PubMed

    Souchon, Rémi; Soualmi, Lahbib; Bertrand, Michel; Chapelon, Jean-Yves; Kallel, Faouzi; Ophir, Jonathan

    2002-05-01

    Elastography is an imaging technique based on strain estimation in soft tissues under quasi-static compression. The stress is usually created by a compression plate, and the target is imaged by an ultrasonic linear array. This configuration is used for breast elastography and has been investigated both theoretically and experimentally. Phenomena such as strain decay with tissue depth and strain concentrations have been reported. However, in some in vivo situations, such as prostate or blood vessel imaging, this setup cannot be used. We propose a device to acquire in vivo elastograms of the prostate. The compression is applied by inflating a balloon that covers a transrectal sector probe. The 1D algorithm used to calculate the radial strain fails if the center of the imaging probe does not coincide with the center of the compressor. Therefore, experimental elastograms are calculated with a 2D algorithm that accounts for tangential displacements of the tissue. In this article, in order to gain a better understanding of the image formation process, the use of ultrasonic sector scans to image the radial compression of a target is investigated. Elastograms of homogeneous phantoms are presented and compared with simulated images. Both show a strain decay with tissue depth. Then, experimental and simulated elastograms of a phantom that contains a hard inclusion are presented, showing that strain concentrations occur as well. A method to compensate for strain decay, and therefore to increase the contrast of the strain elastograms, is proposed. It is expected that such information will help to interpret and possibly improve the elastograms obtained via radial compression.

  11. Hardware implementation of LOTRRP compression for real-time image compression

    NASA Astrophysics Data System (ADS)

    Crooks, Marc W.; Capps, Charles; Hawkins, Eric; Wesley, Michael

    1996-03-01

    Lapped Orthogonal Transforms (LOT) are becoming more widely used in image coding applications for image transmission and archival schemes. Previously sponsored U.S. Army Missile Command research developed a LOT Recursive Residual Projection (RRP) that uses multiple basis functions: the Discrete Cosine Transform (DCT), the Discrete Walsh Transform (DWT), and the Discrete Slant Transform (DST). For high compression ratios, the LOTRRP was shown to outperform the single-basis transforms at the cost of increased computation. The work presented in this paper describes a VHSIC Hardware Description Language (VHDL) design of the LOTDCT, LOTDWT, and LOTDST targeted for implementation on Application Specific Integrated Circuits (ASICs). This hardware solution was chosen to compress RS-170 standard video for real-time image transmission over a very low bandwidth packetized data link.

  12. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are first measured with a key-controlled pseudo-random measurement matrix constructed using the logistic map, which reduces the amount of data to be processed and realizes the initial encryption. The obtained measurements are then fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including an improved random pixel exchanging technique and the fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies key management, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
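
    A minimal sketch of the key-controlled measurement step: a logistic map seeded by the secret key generates the pseudo-random measurement matrix that simultaneously compresses and initially encrypts a sparsely represented image. The sizes, key value and matrix scaling below are illustrative assumptions, not the paper's construction.

    ```python
    # Key-controlled measurement matrix built from a logistic map.
    import numpy as np

    def logistic_sequence(x0, r, n, burn_in=1000):
        """Iterate x_{k+1} = r * x_k * (1 - x_k); discard a transient, return n samples."""
        x = x0
        out = np.empty(n)
        for _ in range(burn_in):
            x = r * x * (1.0 - x)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = x
        return out

    def keyed_measurement_matrix(key_x0, m, n, r=3.99):
        seq = logistic_sequence(key_x0, r, m * n)
        return (seq.reshape(m, n) - 0.5) / np.sqrt(m)     # zero-mean, scaled rows

    x = np.random.default_rng(2).standard_normal(256)     # stand-in for a sparse coefficient vector
    Phi = keyed_measurement_matrix(key_x0=0.3141592, m=96, n=256)
    y = Phi @ x                                            # compressed, key-dependent measurements
    ```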

  13. Hyperspectral scanning white light interferometry based on compressive imaging

    NASA Astrophysics Data System (ADS)

    Azari, Mohammad; Habibi, Nasim; Abolbashari, Mehrdad; Farahi, Faramarz

    2016-02-01

    We have developed a compressive hyperspectral imaging system based on the single-pixel camera architecture. We have incorporated the developed system into a scanning white-light interferometer (SWLI) and showed that, by replacing the SWLI's CCD-based camera with the compressive hyperspectral imaging system, we gain access to high-resolution multispectral images of the interferometer's fringes. Using these multispectral images, the system is capable of simultaneous spectroscopy of the surface, which can be used, for example, to eliminate the effect of surface contamination and to provide new spectral information for fringe-signal analysis. This information could be used to reduce the need for a vertical scan, therefore making height measurement more tolerant of the object's position.

  14. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. This fern, for example, is encoded in less than 50 bytes and yet can be displayed at resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
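
    The "less than 50 bytes" claim can be made concrete with the standard Barnsley fern: four affine maps (24 coefficients) plus selection probabilities, rendered at any resolution by the chaos game. This is a classic iterated-function-system example, not the Fractal Transform encoder itself.

    ```python
    # Chaos-game rendering of Barnsley's fern from its four published affine maps.
    import numpy as np

    MAPS = [  # (a, b, c, d, e, f), applied as (x, y) -> (a*x + b*y + e, c*x + d*y + f)
        (0.00,  0.00,  0.00, 0.16, 0.0, 0.00),
        (0.85,  0.04, -0.04, 0.85, 0.0, 1.60),
        (0.20, -0.26,  0.23, 0.22, 0.0, 1.60),
        (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44),
    ]
    PROBS = [0.01, 0.85, 0.07, 0.07]

    def render_fern(n_points=100_000, seed=0):
        rng = np.random.default_rng(seed)
        pts = np.empty((n_points, 2))
        x, y = 0.0, 0.0
        for i in range(n_points):
            a, b, c, d, e, f = MAPS[rng.choice(4, p=PROBS)]
            x, y = a * x + b * y + e, c * x + d * y + f
            pts[i] = (x, y)
        return pts   # scatter-plot these points to display the fern at any resolution
    ```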

  15. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  16. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  17. Analysis-Driven Lossy Compression of DNA Microarray Images.

    PubMed

    Hernández-Cabronero, Miguel; Blanes, Ian; Pinho, Armando J; Marcellin, Michael W; Serra-Sagristà, Joan

    2016-02-01

    DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on DNA microarray analysis. This quantizer constrains the maximum relative error introduced into the quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
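
    As an illustration of the relative-error idea (not the paper's actual RQ interval design), a simple logarithmic binning makes the quantization step grow with pixel magnitude so that the relative error stays roughly bounded:

    ```python
    # Relative quantization: geometric bins bound the relative error, not the absolute error.
    import numpy as np

    def relative_quantize(pixels, max_rel_err=0.05):
        """pixels: positive intensities; returns integer bin indices."""
        base = 1.0 + 2.0 * max_rel_err                       # ratio between consecutive bin edges
        return np.floor(np.log(np.maximum(pixels, 1.0)) / np.log(base)).astype(int)

    def relative_dequantize(bins, max_rel_err=0.05):
        base = 1.0 + 2.0 * max_rel_err
        return base ** (bins + 0.5)                          # reconstruct at the bin centre

    x = np.array([10.0, 100.0, 1000.0, 50000.0])
    x_hat = relative_dequantize(relative_quantize(x))
    # relative errors stay of the order of max_rel_err regardless of pixel magnitude
    ```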

  18. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also of static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas, which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. First, we add a specific encryption level related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is carried out in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  19. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  20. Lossless compression of multispectral images using spectral information

    NASA Astrophysics Data System (ADS)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmission and storage of these volumes of data require large amounts of time and memory. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band from the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band using a number of previous bands along the image spectrum: each pixel is predicted using information provided by the pixels in the same spatial position in the previous bands. As in JPEG-LS, the proposed coder represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images demonstrate that the proposed compression scheme is suitable for multispectral images.
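
    A sketch of the spectral prediction step: each band is predicted, pixel by pixel, from the co-located pixels of the previous bands via least-squares weights, leaving a small residual for entropy coding. The context-adaptive Golomb-Rice stage is omitted, and the two-band predictor order is an illustrative choice.

    ```python
    # Inter-band linear prediction of one band from the previous bands.
    import numpy as np

    def predict_band(cube, band_index, n_prev=2):
        """cube: (bands, rows, cols); returns prediction and residual for one band."""
        prev = cube[band_index - n_prev:band_index].reshape(n_prev, -1).T    # (pixels, n_prev)
        target = cube[band_index].reshape(-1)
        coeffs, *_ = np.linalg.lstsq(prev, target, rcond=None)               # per-band weights
        prediction = prev @ coeffs
        residual = target - prediction                                        # small values to entropy code
        return prediction.reshape(cube.shape[1:]), residual.reshape(cube.shape[1:])
    ```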

  1. A Motion-Compensating Image-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Wong, Carol

    1994-01-01

    Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

  2. Haar wavelet processor for adaptive on-line image compression

    NASA Astrophysics Data System (ADS)

    Diaz, F. Javier; Buron, Angel M.; Solana, Jose M.

    2005-06-01

    An image coding processing scheme based on a variant of the Haar Wavelet Transform that uses only addition and subtraction is presented. After computing the transform, the selection and coding of the coefficients is performed using a methodology optimized to attain the lowest hardware implementation complexity. Coefficients are sorted into groups according to the number of pixels used in their computation. The idea behind it is to use a different threshold for each group of coefficients; these thresholds are obtained recurrently from an initial one. Parameter values used to achieve the desired compression level are established "on-line", adapting their values to each image, which leads to an improvement in the quality obtained for a preset compression level. Despite its adaptive characteristic, the coding scheme presented leads to a hardware implementation of markedly low circuit complexity. The compression reached for images of 512x512 pixels (256 grey levels) is over 22:1 (~0.4 bits/pixel) with an rmse of 8-10%. An image processor (excluding memory) prototype designed to compute the proposed transform has been implemented using FPGA chips. The processor for images of 256x256 pixels has been implemented using only one general-purpose low-cost FPGA chip, thus proving the design reliability and its relative simplicity.
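
    The addition/subtraction-only decomposition can be illustrated with the short Python sketch below: one level of an unnormalised Haar-like transform built purely from integer sums and differences of adjacent pixel pairs. This is a software illustration of the transform idea only, not the authors' hardware processor or its coefficient thresholding.

      import numpy as np

      def haar_level(img):
          # One 2-D Haar decomposition level using only additions and
          # subtractions (unnormalised integer sums and differences).
          a = img.astype(np.int64)
          ls, hs = a[:, 0::2] + a[:, 1::2], a[:, 0::2] - a[:, 1::2]   # row pairs
          ll, lh = ls[0::2, :] + ls[1::2, :], ls[0::2, :] - ls[1::2, :]  # column pairs
          hl, hh = hs[0::2, :] + hs[1::2, :], hs[0::2, :] - hs[1::2, :]
          return ll, lh, hl, hh

      img = np.random.randint(0, 256, (512, 512))
      ll, lh, hl, hh = haar_level(img)      # LL can be fed to the next decomposition level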

  3. Lossy compression of MERIS superspectral images with exogenous quasi optimal coding transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2009-08-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can be carried on-board a satellite. It is well known that the Karhunen-Loève Transform (KLT) can be sub-optimal in transform coding for non-Gaussian data. However, it is generally recommended as the best computable linear coding transform in practice. Recently, the concept and computation of optimal coding transforms (OCT), under weakly restrictive hypotheses at high bit rates, were carried out and adapted to a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for on-board satellite image compression, leading to the concept and computation of Optimal Spectral Transforms (OST). These linear transforms are optimal for reducing spectral redundancies of multi- or hyper-spectral images when the spatial redundancies are reduced with a fixed 2-D Discrete Wavelet Transform (DWT). The drawback of OSTs is their heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of superspectral images from the spectrometer MERIS. The performance is presented in terms of bit-rate versus distortion for four different distortion measures and compared to that of the KLT. We observe good performance of the exogenous OrthOST, as was the case for Hyperion hyper-spectral images in previous work.

  4. Influence of Lossy Compressed DEM on Radiometric Correction for Land Cover Classification of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Moré, G.; Pesquer, L.; Blanes, I.; Serra-Sagristà, J.; Pons, X.

    2012-12-01

    World coverage Digital Elevation Models (DEM) have progressively increased their spatial resolution (e.g., ETOPO, SRTM, or Aster GDEM) and, consequently, their storage requirements. On the other hand, lossy data compression facilitates accessing, sharing and transmitting large spatial datasets in environments with limited storage. However, since lossy compression modifies the original information, rigorous studies are needed to understand its effects and consequences. The present work analyzes the influence of DEM quality, as modified by lossy compression, on the radiometric correction of remote sensing imagery, and the eventual propagation of the uncertainty into the resulting land cover classification. Radiometric correction is usually composed of two parts: atmospheric correction and topographical correction. For topographical correction, the DEM provides the altimetry information that allows modeling the incident radiation on the terrain surface (cast shadows, self shadows, etc.). To quantify the effects of DEM lossy compression on the radiometric correction, we use the radiometrically corrected images for classification purposes and compare the accuracy of two standard coding techniques for a wide range of compression ratios. The DEM has been obtained by resampling the DEM v.2 of Catalonia (ICC), originally having 15 m resolution, to the Landsat TM resolution. The Aster DEM has been used to fill the gaps beyond the administrative limits of Catalonia. The DEM has been lossy compressed with two coding standards at compression ratios 5:1, 10:1, 20:1, 100:1 and 200:1. The employed coding standards have been JPEG2000 and CCSDS-IDC; the former is an international ISO/ITU-T standard for almost any type of image, while the latter is a recommendation of the CCSDS consortium for mono-component remote sensing images. Both techniques are wavelet-based followed by an entropy-coding stage. Also, for large compression ratios, both techniques need a post processing for correctly

  5. A novel image fusion approach based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia

    2015-11-01

    Image fusion can integrate complementary and relevant information of source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. Firstly, directly fusing sensing measurements may produce more uncertain results with high reconstruction error. Secondly, using a single fusion rule may result in blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve those problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. A dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate the low-pass subbands, while an edge-retention-based fusion rule is proposed to fuse the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed contents and preserves the saliency structure. They also indicate that our proposed method achieves better visual quality than current state-of-the-art methods.

  6. Regional contrast enhancement and data compression for digital mammographic images

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Rebner, Murray

    1993-07-01

    The wide dynamic range of mammograms poses problems for displaying images on an electronic monitor and printing images through a laser printer. In addition, digital mammograms require a large amount of storage and network transmission bandwidth. We applied contrast enhancement and data compression to the segmented images to solve these problems. Using both image intensity and Gaussian filtered images, we separated the original image into three regions: the interior region, the skinline transition region, and the exterior region. In the transition region, an unsharp masking process was applied and an adaptive density shift was used to simulate the process of highlighting with a spotlight. The exterior region was set to a high density to reduce glare. The interior and skinline regions are the diagnostically informative areas that need to be preserved. Visually lossless coding was done for the interior by the wavelet or subband transform coding method. This was used because there are no block artifacts and a lowpass filtered image is generated by the transform. The exterior region can be represented by a bit-plane image containing only the labeling information or represented by the lower resolution transform coefficients. Therefore, by applying filters of different scales, we can accomplish region segmentation and data compression.

  7. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  8. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  9. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits the interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of the ACV is an address of an entry in the LBG-codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  10. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
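
    The mean-subtraction step described above can be summarized with a short Python sketch: each spatial plane of a spatially-low-pass subband has its mean removed before encoding, with the plane means kept as side information. The subband shape and values are toy assumptions; the wavelet decomposition and the subsequent encoder are not shown.

      import numpy as np

      def mean_subtract_planes(subband):
          # Subtract each spatial plane's mean (planes stacked along axis 0);
          # the means are kept as side information for the decoder.
          means = subband.mean(axis=(1, 2), keepdims=True)
          return subband - means, means.ravel()

      low_pass = np.random.rand(32, 16, 16) + 5.0      # toy subband with strong per-plane offsets
      zero_mean, plane_means = mean_subtract_planes(low_pass)   # zero-mean planes compress better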

  11. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt; then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
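
    A minimal Python sketch of the key-controlled measurement matrix idea follows: the logistic map, seeded by a small key (initial value and map parameter), generates the vector from which a circulant matrix is built, and the truncated matrix is used for compressive measurements. The map parameters, matrix size, and test signal are illustrative, and the block splitting and pixel exchange of the full algorithm are not shown.

      import numpy as np

      def logistic_sequence(x0, mu, n, burn_in=100):
          # Iterate the logistic map x <- mu*x*(1-x); (x0, mu) act as the secret key.
          x = x0
          for _ in range(burn_in):
              x = mu * x * (1.0 - x)
          out = np.empty(n)
          for i in range(n):
              x = mu * x * (1.0 - x)
              out[i] = x
          return out

      def key_controlled_matrix(n_rows, n_cols, x0=0.37, mu=3.99):
          # Circulant matrix whose rows are cyclic shifts of a logistic-map vector,
          # truncated to n_rows rows for use as a CS measurement matrix.
          row = 2.0 * logistic_sequence(x0, mu, n_cols) - 1.0    # map to [-1, 1]
          return np.stack([np.roll(row, k) for k in range(n_rows)])

      phi = key_controlled_matrix(n_rows=64, n_cols=256)
      x = np.zeros(256)
      x[[10, 80, 200]] = [1.0, -2.0, 0.5]                        # sparse test signal
      y = phi @ x                                                # compressed, key-dependent measurements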

  12. Colorimetric-spectral clustering: a tool for multispectral image compression

    NASA Astrophysics Data System (ADS)

    Ciprian, R.; Carbucicchio, M.

    2011-11-01

    In this work a new compression method for multispectral images has been proposed: the 'colorimetric-spectral clustering'. The basic idea arises from the well-known cluster analysis, a multivariate analysis which finds the natural links between objects grouping them into clusters. In the colorimetric-spectral clustering compression method, the objects are the spectral reflectance factors of the multispectral images that are grouped into clusters on the basis of their colour difference. In particular two spectra can belong to the same cluster only if their colour difference is lower than a threshold fixed before starting the compression procedure. The performance of the colorimetric-spectral clustering has been compared to the k-means cluster analysis, in which the Euclidean distance between spectra is considered, to the principal component analysis and to the LabPQR method. The colorimetric-spectral clustering is able to preserve both the spectral and the colorimetric information of a multispectral image, allowing this information to be reproduced for all pixels of the image.

  13. Hybrid tenso-vectorial compressive sensing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Li, Qun; Bernal, Edgar A.

    2016-05-01

    Hyperspectral imaging has a wide range of applications relying on remote material identification, including astronomy, mineralogy, and agriculture; however, due to the large volume of data involved, the complexity and cost of hyperspectral imagers can be prohibitive. The exploitation of redundancies along the spatial and spectral dimensions of a hyperspectral image of a scene has created new paradigms that overcome the limitations of traditional imaging systems. While compressive sensing (CS) approaches have been proposed and simulated with success on already acquired hyperspectral imagery, most of the existing work relies on the capability to simultaneously measure the spatial and spectral dimensions of the hyperspectral cube. Most real-life devices, however, are limited to sampling one or two dimensions at a time, which renders a significant portion of the existing work unfeasible. We propose a new variant of the recently proposed serial hybrid vectorial and tensorial compressive sensing (HCS-S) algorithm that, like its predecessor, is compatible with real-life devices both in terms of the acquisition and reconstruction requirements. The newly introduced approach is parallelizable, and we abbreviate it as HCS-P. Together, HCS-S and HCS-P comprise a generalized framework for hybrid tenso-vectorial compressive sensing, or HCS for short. We perform a detailed analysis that demonstrates the uniqueness of the signal reconstructed by both the original HCS-S and the proposed HCS-P algorithms. Last, we analyze the behavior of the HCS reconstruction algorithms in the presence of measurement noise, both theoretically and experimentally.

  14. Lossless compression of stromatolite images: a biogenicity index?

    PubMed

    Corsetti, Frank A; Storrie-Lombardi, Michael C

    2003-01-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attreibuted to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (delta(c), the log ratio of the ratio of the compressibility of the matrix versus the target) of a putative abiotic test stromatolite is significantly less than the delta(c) of a putative biotic test stromatolite. There is a clear separation in delta(c) between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose tht this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers.
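
    The spirit of this lossless-compression complexity metric can be illustrated with a few lines of Python using zlib on raw RGB bytes. The specific compressor, image pre-processing, and imaging conditions of the study are not reproduced; the two synthetic test images below (a noisy stand-in for sediment matrix and a laminated stand-in for a stromatolite) are purely illustrative.

      import zlib
      import numpy as np

      def compressibility(img_u8):
          # Lossless (zlib) compressed size divided by raw size for an 8-bit RGB array;
          # lower values indicate a more ordered, more compressible image.
          raw = img_u8.tobytes()
          return len(zlib.compress(raw, 9)) / len(raw)

      def delta_c(matrix_img, target_img):
          # Log of the ratio of matrix compressibility to target compressibility.
          return float(np.log(compressibility(matrix_img) / compressibility(target_img)))

      matrix = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)   # noisy sediment stand-in
      target = np.zeros((128, 128, 3), dtype=np.uint8)
      target[::8, :, :] = 255                                         # laminated stand-in
      print(delta_c(matrix, target))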

  15. Phase Preserving Dynamic Range Compression of Aeromagnetic Images

    NASA Astrophysics Data System (ADS)

    Kovesi, Peter

    2014-05-01

    Geoscientific images with a high dynamic range, such as aeromagnetic images, are difficult to present in a manner that facilitates interpretation. The data values may range over 20000 nanoteslas or more, but a computer monitor is typically designed to present input data constrained to 8-bit values. Standard photographic high dynamic range tonemapping algorithms may be unsuitable or inapplicable to such data because they have been developed on the basis of statistics of natural images, feature types found in natural images, and models of the human visual system. These algorithms may also require image segmentation and/or decomposition of the image into base and detail layers, but these operations may have no meaning for geoscientific images. For geological and geophysical data, high dynamic range images are often dealt with via histogram equalization. The problem with this approach is that the contrast stretch or compression applied to data values depends on how frequently the data values occur in the image and not on the magnitude of any data features themselves. This can lead to inappropriate distortions in the output. Other approaches include use of the Automatic Gain Control algorithm developed by Rajagopalan, or the tilt derivative. A difficulty with these approaches is that the signal can be over-normalized and perception of the overall variations in the signal can be lost. To overcome these problems a method is presented that compresses the dynamic range of an image while preserving local features. It makes no assumptions about the formation of the image, the feature types it contains, or its range of values. Thus, unlike algorithms designed for photographic images, this algorithm can be applied to a wide range of scientific images. The method is based on extracting local phase and amplitude values across the image using monogenic filters. The dynamic range of the image can then be reduced by applying a range reducing function to the amplitude values, for

  16. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with the LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal to noise ratio) and classification capability.

  17. Compression and storage of multiple images with modulating blazed gratings

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Tao, Shaohua

    2013-07-01

    A method for compressing, storing and reconstructing high-volume data is presented in this paper. Blazed gratings with different orientations and blaze angles are used to superpose many grayscaled images, and customized spatial filters are used to selectively recover the corresponding images from the diffraction spots of the superposed images. The simulation shows that as many as 198 images with a size of 512 pixels × 512 pixels can be stored in a diffractive optical element (DOE) with complex amplitudes of the same size, and the recovered images from the DOE are discernible with high visual quality. Optical encryption/decryption can also be added to the digitized DOE to enhance the security of the stored data.

  18. Real-Time Digital Compression Of Television Image Data

    NASA Technical Reports Server (NTRS)

    Barnes, Scott P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1990-01-01

    Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements predictive coding process, in which each picture element (pixel) predicted from values of prior neighboring pixels, and coded transmission expresses difference between actual and predicted current values. Combines differential pulse-code modulation process with non-linear, nonadaptive predictor, nonuniform quantizer, and multilevel Huffman encoder.
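
    The predictive-coding core of such a scheme can be sketched in a few lines of Python: each pixel is predicted from the previously reconstructed pixel and only the quantised difference is transmitted. This is a simplified one-dimensional illustration with an assumed step size; the actual system's neighbourhood predictor, nonuniform quantiser, and Huffman tables are not reproduced here.

      import numpy as np

      def dpcm_encode_line(row, step=4):
          # Predict each pixel from the previously reconstructed pixel and keep
          # the quantised difference (the symbols would be Huffman-coded downstream).
          pred = 0
          symbols, recon = [], []
          for pixel in row.astype(int):
              q = int(round((pixel - pred) / step))          # uniform quantiser
              symbols.append(q)
              pred = int(np.clip(pred + q * step, 0, 255))   # decoder-matched reconstruction
              recon.append(pred)
          return symbols, np.array(recon)

      scan_line = np.random.randint(0, 256, 640)
      symbols, recon = dpcm_encode_line(scan_line)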

  19. Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.

    2001-01-01

    GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large area detector arrays and readouts. Timely high spatial resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in terms of providing many scientific returns that traditionally can only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly high vertical density wind over the full earth disk is revolutionary. However, these new technologies bring forth many challenges for data transmission, archiving, and geophysical data processing. In this paper, we will focus on the aspect of data volume and downlink issues by conducting a GIFTS data compression experiment. We will discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and compression of uncalibrated and un-normalized interferograms. The effects of compression on the degradation of the signal and noise reduction in interferogram and spectral domains will be highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.

  20. Passive millimeter-wave imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W.; Koehl, Eugene R.; Heifetz, Alexander; Raptis, Apostolos C.; Spinoulas, Leonidas; Katsaggelos, Aggelos K.

    2012-09-01

    Passive millimeter-wave (PMMW) imagers using a single radiometer, called single pixel imagers, employ raster scanning to produce images. A serious drawback of such a single pixel imaging system is the long acquisition time needed to produce a high-fidelity image, arising from two factors: (a) the time to scan the whole scene pixel by pixel and (b) the integration time for each pixel to achieve adequate signal to noise ratio. Recently, compressive sensing (CS) has been developed for single-pixel optical cameras to significantly reduce the imaging time and at the same time produce high-fidelity images by exploiting the sparsity of the data in some transform domain. While the efficacy of CS has been established for single-pixel optical systems, its application to PMMW imaging is not straightforward due to its (a) longer wavelength by three to four orders of magnitude that suffers high diffraction losses at finite size spatial waveform modulators and (b) weaker radiation intensity, for example, by eight orders of magnitude less than that of infrared. We present the development and implementation of a CS technique for PMMW imagers and show a factor-of-ten increase in imaging speed.

  1. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
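
    The "remap, then pick the lowest-entropy representation" idea can be illustrated with the Python sketch below, which compares two simple remappings (previous-pixel DPCM and a mean-of-neighbours predictor) by first-order entropy and selects the better one. The remappings, thresholds, and test image are illustrative; the paper's rule base and arithmetic coder are not reproduced.

      import numpy as np

      def entropy_bits(values):
          # First-order entropy (bits/sample) of an integer-valued array.
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def remap_dpcm(img):
          # Horizontal previous-pixel differences; first column kept raw.
          a = img.astype(int)
          return np.concatenate([a[:, :1], np.diff(a, axis=1)], axis=1)

      def remap_mean_predictor(img):
          # Residual against the mean of the left and upper neighbours; borders kept raw.
          a = img.astype(int)
          pred = (np.roll(a, 1, axis=1) + np.roll(a, 1, axis=0)) // 2
          res = a - pred
          res[0, :], res[:, 0] = a[0, :], a[:, 0]
          return res

      img = np.random.randint(0, 256, (128, 128))
      candidates = {'dpcm': remap_dpcm(img), 'mean': remap_mean_predictor(img)}
      best = min(candidates, key=lambda name: entropy_bits(candidates[name]))
      print(best, {name: round(entropy_bits(r), 2) for name, r in candidates.items()})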

  2. Compressive sensing for direct millimeter-wave holographic imaging.

    PubMed

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang

    2015-04-10

    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from the problem of high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following the scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue the applicability for near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variance, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear array and two-dimensional array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate.

  3. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts of TIFF format images were compared with the other three groups. Overall, differences in the count of the images increased with the percentage of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  4. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
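
    The decoding-side idea, filling in the unknown pixels with the steady state of homogeneous diffusion while the stored edge-side grey values stay fixed, can be sketched in Python as below. This uses plain Jacobi iterations of the Laplace equation with periodic borders for brevity; the edge extraction, JBIG/PAQ coding, and quantisation of the actual codec are not shown, and the toy image and mask are illustrative.

      import numpy as np

      def diffusion_inpaint(values, known_mask, iters=2000):
          # Steady state of homogeneous diffusion: repeatedly replace each unknown
          # pixel by the mean of its four neighbours, holding known pixels fixed.
          u = np.where(known_mask, values, float(values[known_mask].mean()))
          for _ in range(iters):
              nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                     np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
              u = np.where(known_mask, values, nbr)
          return u

      cartoon = np.zeros((64, 64))
      cartoon[:, 32:] = 200.0                              # toy cartoon-like image: one edge
      keep = np.zeros_like(cartoon, dtype=bool)
      keep[:, 31:33] = True                                # keep grey values on both sides of the edge
      reconstruction = diffusion_inpaint(cartoon, keep)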

  5. Fast Second Degree Total Variation Method for Image Compressive Sensing

    PubMed Central

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferable, equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed. PMID:26361008

  6. An image feature data compressing method based on product RSOM

    NASA Astrophysics Data System (ADS)

    Wang, Jianming; Liu, Lihua; Xia, Shengping

    2015-12-01

    Data explosion and information redundancy are the main characteristics of the era of big data. Extracting valuable information from massive data is a prerequisite for efficient information processing and a key technology for object recognition with massive feature databases. In the area of large scale image processing, both the massive image data and the high-dimensional image features pose great challenges to object recognition and information retrieval. As with big data, the large scale image feature database, which contains a large amount of redundant information, can also be quantitatively represented by finite clustering models without degrading recognition performance. Inspired by the ideas of product quantization and high dimensional feature division, a data compression method based on the recursive self-organizing mapping (RSOM) algorithm is proposed in this paper.

  7. Fast Second Degree Total Variation Method for Image Compressive Sensing.

    PubMed

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferable, equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.

  8. Development of a compressive sampling hyperspectral imager prototype

    NASA Astrophysics Data System (ADS)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the chance to sample signals at a lower rate than traditional sampling theory allows. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main CS disadvantage is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2-D detector arrays with high frame rates. This paper describes the development of a sensor prototype at the laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  9. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.

  10. Information-theoretic assessment of imaging systems via data compression

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano

    2001-12-01

    This work focuses on estimating the information conveyed to a user by either multispectral or hyperspectral image data. The goal is to establish the extent to which an increase in spectral resolution can increase the amount of usable information. As a matter of fact, a tradeoff exists between spatial and spectral resolution, due to physical constraints of sensors imaging with a prefixed SNR. Lossless data compression is exploited to measure the useful information content. In fact, the bit rate achieved by the reversible compression process takes into account both the contribution of the observation noise (i.e., information regarded as statistical uncertainty, whose relevance to a user is null) and the intrinsic information of hypothetically noise-free data. An entropic model of the image source is defined and, once the standard deviation of the noise, assumed to be Gaussian and possibly nonwhite, has been preliminarily estimated, such a model is inverted to yield an estimate of the information content of the noise-free source from the code rate. Results both of noise and of information assessment are reported and discussed on synthetic noisy images, on Landsat TM data, and on AVIRIS data.

  11. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  12. Entropy coders for image compression based on binary forward classification

    NASA Astrophysics Data System (ADS)

    Yoo, Hoon; Jeong, Jechang

    2000-12-01

    Entropy coders as a noiseless compression method are widely used as the final compression step for images, and there have been many contributions toward increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose some entropy coders based on the binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders that are the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders that have similar complexity to the proposed coders.

  13. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of a single image, overcame the disadvantages of "ghost artifacts" and heavy computational costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  14. Image compression using a novel edge-based coding algorithm

    NASA Astrophysics Data System (ADS)

    Keissarian, Farhad; Daemi, Mohammad F.

    2001-08-01

    In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across the neighboring blocks. Satisfactorily coded images have been obtained at bit rates competitive with other block-based coding techniques.

  15. Objective index of image fidelity for JPEG2000 compressed body CT images

    SciTech Connect

    Kim, Kil Joong; Lee, Kyoung Ho; Kang, Heung-Sik; Kim, So Yeon; Kim, Young Hoon; Kim, Bohyoung; Seo, Jinwook; Mantiuk, Rafal

    2009-07-15

    Compression ratio (CR) has been the de facto standard index of compression level for medical images. The aim of the study is to evaluate the CR, peak signal-to-noise ratio (PSNR), and a perceptual quality metric (high-dynamic range visual difference predictor HDR-VDP) as objective indices of image fidelity for Joint Photographic Experts Group (JPEG) 2000 compressed body computed tomography (CT) images, from the viewpoint of visually lossless compression approach. A total of 250 body CT images obtained with five different scan protocols (5-mm-thick abdomen, 0.67-mm-thick abdomen, 5-mm-thick lung, 0.67-mm-thick lung, and 5-mm-thick low-dose lung) were compressed to one of five CRs (reversible, 6:1, 8:1, 10:1, and 15:1). The PSNR and HDR-VDP values were calculated for the 250 pairs of the original and compressed images. By alternately displaying an original and its compressed image on the same monitor, five radiologists independently determined if the pair was distinguishable or indistinguishable. The kappa statistic for the interobserver agreement among the five radiologists' responses was 0.70. According to the radiologists' responses, the number of distinguishable image pairs tended to significantly differ among the five scan protocols at 6:1-10:1 compressions (Fisher-Freeman-Halton exact tests). Spearman's correlation coefficients between each of the CR, PSNR, and HDR-VDP and the number of radiologists who responded as distinguishable were 0.72, -0.77, and 0.85, respectively. Using the radiologists' pooled responses as the reference standards, the areas under the receiver-operating-characteristic curves for the CR, PSNR, and HDR-VDP were 0.87, 0.93, and 0.97, respectively, showing significant differences between the CR and PSNR (p=0.04), or HDR-VDP (p<0.001), and between the PSNR and HDR-VDP (p<0.001). In conclusion, the CR is less suitable than the PSNR or HDR-VDP as an objective index of image fidelity for JPEG2000 compressed body CT images. The HDR-VDP is more

  16. Compressive fluorescence microscopy for biological and hyperspectral imaging.

    PubMed

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed Shams; Candes, Emmanuel; Dahan, Maxime

    2012-06-26

    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices--especially in optics--have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy. PMID:22689950

  17. Generation of separate density and compressibility images in tissue.

    PubMed

    Norton, S J

    1983-07-01

    A method is suggested for reconstructing separate images of the variations in density and compressibility in the same tissue sample. The method employs two long, rectangular transducer elements. As in diffraction tomography, 180 degrees of access around the region of interest is required. This approach differs from conventional diffraction tomography, however, in that no transducer arrays are required and broadband illumination is used. A flat transducer, assumed long relative to the extent of the object, is used as a source of broadband, plane-wave illumination and as a receiver of the backscattered sound. A second transducer, oriented at a different angle with respect to the first, is used as a receiver only. The two transducers are rotated together 180 degrees around the object, and the recorded scattered sound yields the plane-wave spectrum of the object directly, providing sufficient information to reconstruct independent images of the variations in both the density and compressibility of the scattering medium. Image resolution is limited by the bandwidth of the illuminating sound.

  18. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    PubMed

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea

    2014-11-01

    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies. PMID:25401353

  19. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    PubMed

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea

    2014-11-01

    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.

  20. An investigation of image compression on NIIRS rating degradation through automated image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract the line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. Steps involved include straight edge detection, edge stripes determination, and edge intensity determination, among others. Next, we show how to employ GIQEs to estimate NIIRS degradation without knowing the ground truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
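
    For orientation, the GIQE relates NIIRS to the logarithms of the ground sampling distance (GSD) and the relative edge response (RER), together with edge overshoot and a noise-gain-to-SNR term. The Python sketch below only shows this general functional form with the coefficients passed in as parameters; the coefficient values are placeholders, not a published GIQE coefficient set, and the inputs are illustrative. The degradation due to compression can then be estimated as the difference between two evaluations.

      import math

      def giqe_general(gsd, rer, overshoot, gain_over_snr, coeffs):
          # General GIQE functional form:
          #   NIIRS = c0 + c1*log10(GSD) + c2*log10(RER) + c3*H + c4*(G/SNR)
          # The coefficient set must come from the published GIQE version in use.
          c0, c1, c2, c3, c4 = coeffs
          return (c0 + c1 * math.log10(gsd) + c2 * math.log10(rer)
                  + c3 * overshoot + c4 * gain_over_snr)

      coeffs = (10.0, -3.0, 1.5, -0.5, -0.3)         # placeholder values, not a published set
      niirs_original = giqe_general(12.0, 0.90, 1.0, 0.1, coeffs)
      niirs_compressed = giqe_general(12.0, 0.70, 1.3, 0.1, coeffs)   # compression degrades RER, raises overshoot
      print('estimated NIIRS degradation:', niirs_original - niirs_compressed)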

  1. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.

  2. ZPEG: a hybrid DPCM-DCT based approach for compression of Z-stack images.

    PubMed

    Khire, Sourabh; Cooper, Lee; Park, Yuna; Carter, Alexis; Jayant, Nikil; Saltz, Joel

    2012-01-01

    Modern imaging technology permits obtaining images at varying depths along the thickness, or the Z-axis of the sample being imaged. A stack of multiple such images is called a Z-stack image. The focus capability offered by Z-stack images is critical for many digital pathology applications. A single Z-stack image may result in several hundred gigabytes of data, and needs to be compressed for archival and distribution purposes. Currently, the existing methods for compression of Z-stack images such as JPEG and JPEG 2000 compress each focal plane independently, and do not take advantage of the Z-signal redundancy. It is possible to achieve additional compression efficiency over the existing methods, by exploiting the high Z-signal correlation during image compression. In this paper, we propose a novel algorithm for compression of Z-stack images, which we term as ZPEG. ZPEG extends the popular discrete-cosine transform (DCT) based image encoder to compress Z-stack images. This is achieved by decorrelating the neighboring layers of the Z-stack image using differential pulse-code modulation (DPCM). PSNR measurements, as well as subjective evaluations by experts indicate that ZPEG can encode Z-stack images at a higher quality as compared to JPEG, JPEG 2000 and JP3D at compression ratios below 50:1.
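
    The DPCM-along-Z plus blockwise DCT structure can be sketched in Python as follows. This is only a simplified illustration of the decorrelation and transform stages under assumed plane and block sizes; the quantisation and entropy-coding stages of the actual encoder, and its exact predictor, are not reproduced.

      import numpy as np
      from scipy.fft import dctn

      def zpeg_transform(z_stack, block=8):
          # Decorrelate adjacent focal planes by differencing along Z (DPCM),
          # then apply an 8x8 blockwise 2-D DCT to each residual plane.
          z = z_stack.astype(float)
          residual = np.concatenate([z[:1], np.diff(z, axis=0)], axis=0)
          planes, h, w = residual.shape
          blocks = residual.reshape(planes, h // block, block, w // block, block)
          return dctn(blocks, axes=(2, 4), norm='ortho')

      z_stack = np.random.rand(5, 64, 64)          # toy Z-stack: five focal planes
      coefficients = zpeg_transform(z_stack)        # residual planes carry little energy when planes are similar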

  3. Recommendations

    ERIC Educational Resources Information Center

    Brazelton, G. Blue; Renn, Kristen A.; Stewart, Dafina-Lazarus

    2015-01-01

    In this chapter, the editors provide a summary of the information shared in this sourcebook about the success of students who have minoritized identities of sexuality or gender and offer recommendations for policy, practice, and further research.

  4. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
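
    The sketch below conveys the idea of abundance-based compression in simplified form: each pixel spectrum is unmixed against a small set of target signatures and only the abundance images are retained. For brevity it imposes only the nonnegativity constraint via SciPy's NNLS solver and omits the sum-to-one constraint of the fully constrained method described in the paper; the endmember matrix and image sizes are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)

    bands, n_targets, n_pixels = 50, 3, 1000
    E = rng.random((bands, n_targets))              # spectral signatures of potential targets
    true_abund = rng.dirichlet(np.ones(n_targets), size=n_pixels)
    cube = true_abund @ E.T + 0.01 * rng.normal(size=(n_pixels, bands))

    # "Compression": keep only one abundance value per target per pixel
    # instead of the full spectrum (bands -> n_targets values per pixel).
    abund = np.array([nnls(E, pix)[0] for pix in cube])

    print("raw values per pixel:", bands)
    print("stored values per pixel:", n_targets)
    print("mean abundance error:", np.abs(abund - true_abund).mean())
    ```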

  5. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  6. Compressed hyperspectral image sensing with joint sparsity reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Li, Yunsong; Zhang, Jing; Song, Juan; Lv, Pei

    2011-10-01

    Recent compressed sensing (CS) results show that it is possible to accurately reconstruct images from a small number of linear measurements via convex optimization techniques. In this paper, based on a correlation analysis of the linear measurements for hyperspectral images, a joint sparsity reconstruction algorithm using interband prediction and joint optimization is proposed. In the method, linear prediction is first applied to remove the correlations among successive spectral band measurement vectors. The obtained residual measurement vectors are then recovered using the proposed joint-optimization-based POCS (projections onto convex sets) algorithm with the steepest descent method. In addition, a pixel-guided stopping criterion is introduced to stop the iteration. Experimental results show that the proposed algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement rates, while converging faster.

  7. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  8. Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression

    NASA Astrophysics Data System (ADS)

    Horng, Ming-Huwi

    Vector quantization is a powerful technique in digital image compression applications. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm often converge to a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the vector quantization codebook. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with two other methods, the LBG and PSO-LBG algorithms. Experimental results show that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
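
    For reference, the baseline LBG iteration that these swarm methods try to improve upon alternates nearest-codeword assignment with centroid updates. The sketch below is a generic LBG implementation on random training vectors; the block size, codebook size, and initialization are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def lbg_codebook(train, n_codewords=16, n_iter=20, seed=0):
        """Train a vector-quantization codebook with the basic LBG iteration."""
        rng = np.random.default_rng(seed)
        codebook = train[rng.choice(len(train), n_codewords, replace=False)].copy()
        for _ in range(n_iter):
            # Assignment step: nearest codeword for every training vector.
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Update step: move each codeword to the centroid of its cell.
            for k in range(n_codewords):
                members = train[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook

    # Hypothetical training set of 4x4 image blocks flattened to 16-D vectors.
    rng = np.random.default_rng(3)
    blocks = rng.random((2000, 16))
    cb = lbg_codebook(blocks)
    print("codebook shape:", cb.shape)
    ```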

  9. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit plane encoding (BPE), but only on a mono-spectral basis, and do not exploit the multispectral redundancy of the observed scenes. Recent CNES studies have shown a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel negates all the benefits of multispectral compression. In this work, we first study the possibility of implementing onboard multi-band subpixel registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques that are too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, both onboard and on the ground, and its impact on the design of the instrument.

  10. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  11. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper reviews the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which greatly reduces the amount of sampled data. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into a 1D one via the Kronecker product, which sharply increases the dictionary size and computational cost, as illustrated below. In this paper, we instead introduce the 2D-SL0 algorithm for the imaging reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods while significantly reducing computational complexity and memory usage. Moreover, simulation results demonstrate the effectiveness and feasibility of our method.
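
    The identity behind the dictionary blow-up is vec(A X B^T) = (B kron A) vec(X): solving the 2D problem in vectorized form requires forming the Kronecker product of the two per-dimension dictionaries. The sketch below verifies the identity numerically and prints the resulting dictionary sizes; the dimensions are arbitrary illustrative values, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Per-dimension dictionaries (range and azimuth), sizes chosen for illustration.
    A = rng.normal(size=(32, 64))     # range dictionary
    B = rng.normal(size=(32, 64))     # azimuth dictionary
    X = rng.normal(size=(64, 64))     # 2D coefficient matrix (would be sparse in practice)

    # 2D model: Y = A X B^T, handled directly by 2D algorithms such as 2D-SL0.
    Y = A @ X @ B.T

    # Equivalent 1D model: vec(Y) = (B kron A) vec(X) -- the dictionary explodes.
    D = np.kron(B, A)
    assert np.allclose(D @ X.flatten(order="F"), Y.flatten(order="F"))

    print("2D dictionaries:", A.shape, "and", B.shape)
    print("Kronecker dictionary:", D.shape)   # (1024, 4096) for these sizes
    ```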

  12. Learning-based compressed sensing for infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches are obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and these sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.

  13. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders and thus avoids the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves image compression performance comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.

  14. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

    In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We point towards developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not present a method that achieves the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on medical image data sets of various bit depths and modalities.

  15. High dynamic range coherent imaging using compressed sensing.

    PubMed

    He, Kuan; Sharma, Manoj Kumar; Cossairt, Oliver

    2015-11-30

    In both lensless Fourier transform holography (FTH) and coherent diffraction imaging (CDI), a beamstop is used to block strong intensities which exceed the limited dynamic range of the sensor, causing a loss in low-frequency information, making high quality reconstructions difficult or even impossible. In this paper, we show that an image can be recovered from high-frequencies alone, thereby overcoming the beamstop problem in both FTH and CDI. The only requirement is that the object is sparse in a known basis, a common property of most natural and manmade signals. The reconstruction method relies on compressed sensing (CS) techniques, which ensure signal recovery from incomplete measurements. Specifically, in FTH, we perform compressed sensing (CS) reconstruction of captured holograms and show that this method is applicable not only to standard FTH, but also multiple or extended reference FTH. For CDI, we propose a new phase retrieval procedure, which combines Fienup's hybrid input-output (HIO) method and CS. Both numerical simulations and proof-of-principle experiments are shown to demonstrate the effectiveness and robustness of the proposed CS-based reconstructions in dealing with missing data in both FTH and CDI. PMID:26698723

  16. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
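
    The pairing of a predictor with a Golomb-Rice entropy coder can be sketched in a few lines: residuals are mapped to non-negative integers and each value is split into a unary quotient and a k-bit remainder. This is a generic software illustration of Golomb-Rice coding, not the column-level circuit described in the paper; the parameter k and the residual values are assumptions.

    ```python
    def rice_encode(residual, k):
        """Golomb-Rice code for one prediction residual with parameter k."""
        # Map the signed residual to a non-negative integer (zig-zag mapping).
        v = 2 * residual if residual >= 0 else -2 * residual - 1
        quotient, remainder = v >> k, v & ((1 << k) - 1)
        # Unary-coded quotient, terminating zero, then k bits of remainder.
        return "1" * quotient + "0" + format(remainder, f"0{k}b")

    # Hypothetical residuals from a previous-pixel predictor.
    residuals = [0, -1, 2, 5, -3, 0, 1]
    bitstream = "".join(rice_encode(r, k=2) for r in residuals)
    print(bitstream)
    ```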

  17. High dynamic range coherent imaging using compressed sensing.

    PubMed

    He, Kuan; Sharma, Manoj Kumar; Cossairt, Oliver

    2015-11-30

    In both lensless Fourier transform holography (FTH) and coherent diffraction imaging (CDI), a beamstop is used to block strong intensities which exceed the limited dynamic range of the sensor, causing a loss in low-frequency information, making high quality reconstructions difficult or even impossible. In this paper, we show that an image can be recovered from high-frequencies alone, thereby overcoming the beamstop problem in both FTH and CDI. The only requirement is that the object is sparse in a known basis, a common property of most natural and manmade signals. The reconstruction method relies on compressed sensing (CS) techniques, which ensure signal recovery from incomplete measurements. Specifically, in FTH, we perform compressed sensing (CS) reconstruction of captured holograms and show that this method is applicable not only to standard FTH, but also multiple or extended reference FTH. For CDI, we propose a new phase retrieval procedure, which combines Fienup's hybrid input-output (HIO) method and CS. Both numerical simulations and proof-of-principle experiments are shown to demonstrate the effectiveness and robustness of the proposed CS-based reconstructions in dealing with missing data in both FTH and CDI.

  18. Error-resilient pyramid vector quantization for image compression.

    PubMed

    Hung, A C; Tsern, E K; Meng, T H

    1998-01-01

    Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.

  19. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is then encrypted by the CGI system with a secret key. The receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; that the amount of data transmitted is greatly reduced thanks to the combination of compressive sensing and the FFT; and that the security level of ghost imaging is improved, as assessed against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  20. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    PubMed

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places an enormous burden on subsequent computation devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving a large fraction of the CS recovery runtime. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm performs more efficiently than an existing algorithm.

  1. Vertebral Compression Fracture with Intravertebral Vacuum Cleft Sign: Pathogenesis, Image, and Surgical Intervention

    PubMed Central

    Wu, Ai-Min; Ni, Wen-Fei

    2013-01-01

    The intravertebral vacuum cleft (IVC) sign in vertebral compression fracture patients has received much attention. The pathogenesis, imaging characteristics, and efficacy of surgical intervention have been disputed. Many theories of pathogenesis have been proposed, and its imaging characteristics are distinct from those of malignancy and infection. Percutaneous vertebroplasty (PVP) and percutaneous kyphoplasty (PKP) have been the main therapeutic methods for these patients in recent years. The avascular necrosis theory is the most widely supported; PVP can relieve back pain, restore vertebral body height, and correct the kyphotic angulation (KA), and is recommended for these patients. PKP appears to be more effective for correcting the KA and produces less cement leakage. The Kümmell's disease with IVC sign reported by modern authors is not completely consistent with the syndrome reported by Dr. Hermann Kümmell. PMID:23741556

  2. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  3. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) computing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  4. Effective palette indexing for image compression using self-organization of Kohonen feature map.

    PubMed

    Pei, Soo-Chang; Chuang, Yu-Ting; Chuang, Wei-Hong

    2006-09-01

    The process of limited-color image compression usually involves color quantization followed by palette re-indexing. Palette re-indexing can improve the compression of color-indexed images, but it is complicated and consumes extra time. Making use of the topology-preserving property of the self-organizing Kohonen feature map, we can generate a fairly good color index table that achieves both high image quality and high compression, without re-indexing. Promising experimental results are presented.

  5. Prior image constrained compressed sensing: Implementation and performance evaluation

    PubMed Central

    Lauzier, Pascal Thériault; Tang, Jie; Chen, Guang-Hong

    2012-01-01

    Purpose: Prior image constrained compressed sensing (PICCS) is an image reconstruction framework which incorporates an often available prior image into the compressed sensing objective function. The images are reconstructed using an optimization procedure. In this paper, several alternative unconstrained minimization methods are used to implement PICCS. The purpose is to study and compare the performance of each implementation, as well as to evaluate the performance of the PICCS objective function with respect to image quality. Methods: Six different minimization methods are investigated with respect to convergence speed and reconstruction accuracy. These minimization methods include the steepest descent (SD) method and the conjugate gradient (CG) method. These algorithms require a line search to be performed. Thus, for each minimization algorithm, two line searching algorithms are evaluated: a backtracking (BT) line search and a fast Newton-Raphson (NR) line search. The relative root mean square error is used to evaluate the reconstruction accuracy. The algorithm that offers the best convergence speed is used to study the performance of PICCS with respect to the prior image parameter α and the data consistency parameter λ. PICCS is studied in terms of reconstruction accuracy, low-contrast spatial resolution, and noise characteristics. A numerical phantom was simulated and an animal model was scanned using a multirow detector computed tomography (CT) scanner to yield the projection datasets used in this study. Results: For λ within a broad range, the CG method with Fletcher-Reeves formula and NR line search offers the fastest convergence for an equal level of reconstruction accuracy. Using this minimization method, the reconstruction accuracy of PICCS was studied with respect to variations in α and λ. When the number of view angles is varied between 107, 80, 64, 40, 20, and 16, the relative root mean square error reaches a minimum value for α ≈ 0.5. For
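
    For reference, the unconstrained PICCS objective commonly cited in the literature, and consistent with the prior-image parameter α and data-consistency parameter λ discussed above, can be written as below, where x_P is the prior image, Ψ₁ and Ψ₂ are sparsifying transforms, A is the system matrix, and y the measured projections. The notation is an assumption for illustration, not a quotation from the paper.

    ```latex
    \min_{x} \; \alpha \,\lVert \Psi_1 (x - x_P) \rVert_1
            + (1-\alpha)\,\lVert \Psi_2\, x \rVert_1
            + \lambda \,\lVert A x - y \rVert_2^2
    ```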

  6. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    PubMed

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul

    2015-02-01

    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution of this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and a combination of run-length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared to traditional methods.

  7. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.

    PubMed

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R

    2015-11-01

    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.

  8. Interlabial masses in little girls: review and imaging recommendations

    SciTech Connect

    Nussbaum, A.R.; Lebowitz, R.L.

    1983-07-01

    When an interlabial mass is seen on physical examination in a little girl, there is often confusion about its etiology, its implications, and what should be done next. Five common interlabial masses, which superficially are strikingly similar, include a prolapsed ectopic ureterocele, a prolapsed urethra, a paraurethral cyst, hydro(metro)colpos, and rhabdomyosarcoma of the vagina (botryoid sarcoma). A prolapsed ectopic ureterocele occurs in white girls as a smooth mass which protrudes from the urethral meatus so that urine exits circumferentially. A prolapsed urethra occurs in black girls and resembles a donut with the urethral meatus in the center. A paraurethral cyst is smaller and displaces the meatus, so that the urinary stream is eccentric. Hydro(metro)colpos from hymenal imperforation presents as a smooth mass that fills the vaginal introitus, as opposed to the introital grapelike cluster of masses of botryoid sarcoma. Recommendations for efficient imaging are presented.

  9. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a compression method employing differential pulse code modulation (DPCM) is presented.
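
    As a simple illustration of the DPCM principle the abstract invokes, the sketch below predicts each pixel from its left neighbor and stores only the residual, which is exactly invertible and therefore lossless. The predictor and the 8-bit test image are illustrative assumptions; the paper's scheme for exploiting correlation between images of the same kind is not reproduced here.

    ```python
    import numpy as np

    def dpcm_encode(img):
        """Previous-pixel DPCM along each row: residual[i, j] = img[i, j] - img[i, j-1]."""
        res = img.astype(np.int16).copy()
        res[:, 1:] -= img[:, :-1].astype(np.int16)
        return res

    def dpcm_decode(res):
        """Invert the row-wise differencing by cumulative summation."""
        return np.cumsum(res, axis=1).astype(np.uint8)

    rng = np.random.default_rng(5)
    image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    residuals = dpcm_encode(image)
    assert np.array_equal(dpcm_decode(residuals), image)    # lossless round trip
    # For smooth images the residuals cluster near zero, which entropy coding exploits.
    ```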

  10. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  11. Compressive spectral polarization imaging with coded micropolarizer array

    NASA Astrophysics Data System (ADS)

    Fu, Chen; Arguello, Henry; Sadler, Brian M.; Arce, Gonzalo R.

    2015-05-01

    We present a compressive spectral polarization imager based on a prism, which is rotated to different angles as the measurement shots are taken, and a colored detector with a micropolarizer array. The prism shears the scene along one spatial axis according to its wavelength components. The scene is then projected to different locations on the detector as the measurement shots are taken. The micropolarizer array is composed of 0°, 45°, 90°, and 135° linear micropolarizers whose pixels are matched to those of the colored detector, so the first three Stokes parameters of the scene are compressively sensed. The four-dimensional (4D) data cube is thus projected onto the two-dimensional (2D) FPA. Designed patterns for the micropolarizer and the colored detector are applied to improve reconstruction. The 4D spectral-polarization data cube is reconstructed from the 2D measurements via nonlinear optimization with sparsity constraints. Computer simulations are performed and the performance of the designed patterns is compared with random patterns.
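
    For context, the first three Stokes parameters follow from the intensities measured behind the four micropolarizer orientations by the standard relations below; this is textbook polarimetry rather than a statement of the paper's specific coded reconstruction.

    ```latex
    S_0 = I_{0^\circ} + I_{90^\circ}, \qquad
    S_1 = I_{0^\circ} - I_{90^\circ}, \qquad
    S_2 = I_{45^\circ} - I_{135^\circ}
    ```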

  12. Efficient algorithms for robust recovery of images from compressed data.

    PubMed

    Pham, Duc-Son; Venkatesh, Svetha

    2013-12-01

    Compressed sensing (CS) is an important theory for sub-Nyquist sampling and recovery of compressible data. Recently, it has been extended to cope with the case where corruption to the CS data is modeled as impulsive noise. The new formulation, termed robust CS, combines robust statistics and CS into a single framework to suppress outliers in the CS recovery. To solve the newly formulated robust CS problem, a scheme that iteratively solves a number of CS problems--the solutions of which provably converge to the true robust CS solution--has been suggested. This scheme is, however, rather inefficient as it has to use existing CS solvers as a proxy. To overcome the limitations of the original robust CS algorithm, we propose in this paper more computationally efficient algorithms by following the latest advances in large-scale convex optimization for nonsmooth regularization. Furthermore, we also extend the robust CS formulation to various settings, including additional affine constraints, l1-norm loss function, mixed-norm regularization, and multitasking, so as to further improve robust CS and derive simple but effective algorithms to solve these extensions. We demonstrate that the new algorithms provide a much better computational advantage over the original robust CS method on the original formulation, and effectively solve more sophisticated extensions that the original methods simply cannot. We demonstrate the usefulness of the extensions on several imaging tasks.

  13. Television image compression and small animal remote monitoring

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Jackson, Robert W.

    1990-01-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  14. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations and ran 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

  15. Television image compression and small animal remote monitoring

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Jackson, Robert W.

    1990-04-01

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.

  16. Adaptive downsampling to improve image compression at low bit rates.

    PubMed

    Lin, Weisi; Dong, Li

    2006-09-01

    At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive selection of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while taking local visual significance into consideration. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels, so the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes, and therefore outperforms the standard JPEG method over a much wider bit-rate range. The experiments demonstrate better PSNR improvement over the existing techniques below the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-dependent, but also automates the switching of coders in variable bit-rate applications, since the algorithm reverts to the standard JPEG method whenever necessary at higher bit rates.

  17. Integer cosine transform chip design for image compression

    NASA Astrophysics Data System (ADS)

    Ruiz, Gustavo A.; Michell, Juan A.; Buron, Angel M.; Solana, Jose M.; Manzano, Miguel A.; Diaz, J.

    2003-04-01

    The Discrete Cosine Transform (DCT) is the most widely used transform for image compression. The Integer Cosine Transform, denoted ICT (10, 9, 6, 2, 3, 1), has been shown to be a promising alternative to the DCT due to its implementation simplicity, similar performance, and compatibility with the DCT. This paper describes the design and implementation of an 8×8 2-D ICT processor for image compression that meets the numerical characteristics of IEEE Std 1180-1990. This processor uses a low-latency data flow that minimizes the internal memory and a parallel pipelined architecture, based on a numerical strength reduction Integer Cosine Transform (10, 9, 6, 2, 3, 1) algorithm, in order to attain high throughput and continuous data flow. A prototype of the 8×8 ICT processor has been implemented using a standard cell design methodology and a 0.35-μm CMOS CSD 3M/2P 3.3 V process on a 10 mm² die. Pipeline circuit techniques have been used to attain the maximum frequency of operation allowed by the technology, yielding a critical path of 1.8 ns, which should be increased by 20% to allow for line delays, placing the estimated operational frequency at 500 MHz. The circuit includes 12446 cells, 6757 of which are flip-flops. Two clock signals have been distributed, an external one (fs) and an internal one (fs/2). The high number of flip-flops has forced the use of a strategy to minimize clock skew, combining large buffers on the periphery with wide metal lines (clock trunks) to distribute the signals.

  18. Radon transform imaging: low-cost video compressive imaging at extreme resolutions

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Wang, Jian; Gupta, Mohit

    2016-05-01

    Most compressive imaging architectures rely on programmable light modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single-pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery, and we can leverage the vast literature in compressive magnetic resonance imaging (MRI) to good effect. The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light, since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line detectors, including electronic shutters and pixels with large aspect ratios, to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.

  19. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
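
    A minimal sketch of the preprocessing step described above: floating-point pixels are quantized into scaled integers, with a subtractive dither applied before rounding to reduce systematic quantization bias; the integers would then be passed to a lossless coder such as Rice. The scale factor, dither source, and test array below are illustrative assumptions, not the fpack implementation.

    ```python
    import numpy as np

    def quantize_with_dither(pixels, scale, seed=0):
        """Quantize floating-point pixels to integers with subtractive dithering."""
        rng = np.random.default_rng(seed)
        dither = rng.random(pixels.shape) - 0.5          # uniform dither in [-0.5, 0.5)
        q = np.round(pixels / scale + dither).astype(np.int32)
        return q, dither

    def dequantize(q, dither, scale):
        """Approximate reconstruction; errors are bounded by about scale/2."""
        return (q - dither) * scale

    rng = np.random.default_rng(6)
    img = rng.normal(loc=1000.0, scale=50.0, size=(128, 128)).astype(np.float32)
    scale = 5.0                                          # coarser scale -> more compression
    q, dither = quantize_with_dither(img, scale)
    recon = dequantize(q, dither, scale)
    print("max abs error:", np.abs(recon - img).max())   # bounded by ~scale/2
    ```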

  20. Three novel lossless image compression schemes for medical image archiving and telemedicine.

    PubMed

    Wang, J; Naghdy, G

    2000-01-01

    In this article, three novel lossless image compression schemes, hybrid predictive/vector quantization lossless image coding (HPVQ), shape-adaptive differential pulse code modulation (DPCM) (SADPCM), and shape-VQ-based hybrid ADPCM/DCT (ADPCMDCT), are introduced. All are based on the lossy coder VQ. However, VQ is used in these new schemes as a tool to improve the decorrelation efficiency of traditional lossless predictive coders such as DPCM, adaptive DPCM (ADPCM), and multiplicative autoregressive coding (MAR). A new kind of VQ, shape-VQ, is also introduced in this article. It provides predictive coders with useful information regarding the shape characteristics of image blocks, which enhances the performance of predictive coders in the context of lossless coding. Simulation results of the proposed coders applied to lossless medical image compression are presented. Some leading lossless techniques such as DPCM, hierarchical interpolation (HINT), CALIC, and the standard lossless JPEG are included in the tests. Promising results show that all three methods are good candidates for lossless medical image compression. PMID:10957738

  1. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography cannot distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image

  2. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  3. Rapid MR spectroscopic imaging of lactate using compressed sensing

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.

    2015-03-01

    Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. A MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of the k-space 'on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to the 1X (fully-sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in Matlab™, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully-sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
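
    The idea of a variable-density pseudo-random sampling mask can be sketched as follows: phase-encode lines are kept with a probability that decays away from the k-space center, so low frequencies are densely sampled and high frequencies sparsely. The density law, acceleration factor, and grid size are illustrative assumptions rather than the sequence's actual mask design.

    ```python
    import numpy as np

    def variable_density_mask(n_lines=128, acceleration=4, power=3.0, seed=0):
        """Pseudo-random 1D phase-encode mask with density decaying from the k-space center."""
        rng = np.random.default_rng(seed)
        k = np.abs(np.arange(n_lines) - n_lines // 2) / (n_lines // 2)   # normalized |k|
        prob = (1.0 - k) ** power                                        # denser near center
        prob *= (n_lines / acceleration) / prob.sum()                    # target sampling rate
        mask = rng.random(n_lines) < np.clip(prob, 0.0, 1.0)
        mask[n_lines // 2] = True                                        # always keep the DC line
        return mask

    mask = variable_density_mask()
    print("sampled lines:", mask.sum(), "of", mask.size)
    ```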

  4. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation

  5. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  6. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
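
    The factored representation can be illustrated with a plain PCA truncation: the multivariate image is reshaped to pixels × bands, factored, and only the most significant components are kept for subsequent analysis. This is a generic sketch of the idea, not the patented block algorithm; the cube dimensions and the number of retained factors are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    rows, cols, bands, n_factors = 100, 100, 128, 8
    cube = rng.random((rows, cols, bands)).astype(np.float32)

    X = cube.reshape(-1, bands)                      # pixels x bands
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

    # Factored (compressed) representation: scores, loadings, and the band mean.
    scores = U[:, :n_factors] * s[:n_factors]        # pixels x n_factors
    loadings = Vt[:n_factors]                        # n_factors x bands

    X_hat = scores @ loadings + mean                 # approximate reconstruction
    stored = scores.size + loadings.size + mean.size
    print("stored values:", stored, "vs original:", X.size)
    print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
    ```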

  7. MTF as a quality measure for compressed images transmitted over computer networks

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Stern, Adrian; Huber, Merav; Huber, Revital

    1999-12-01

    One result of the recent advances in different components of imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for still images compressed with the JPEG (Joint Photographic Expert Group) algorithm and for MPEG (Motion Picture Expert Group) compressed video streams transmitted through a lossy packet network. Although we realize that the MTF is not an ideal parameter with which to measure image quality after compression and transmission, because these are nonlinear, shift-variant processes, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can easily be combined with the overall MTF of the imaging system.

  8. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
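
    For readers unfamiliar with integer-arithmetic LP PR filters, the sketch below shows the standard LeGall 5/3 lifting steps in one dimension; this is a well-known example of the filter class being discussed, not one of the optimized filters derived in the article.

      # Illustration only: the integer LeGall 5/3 lifting steps, a standard example of
      # a linear-phase perfect-reconstruction wavelet realized with integer arithmetic;
      # it is not the optimized filter family derived in the record.
      import numpy as np

      def fwd53(x):
          x = x.astype(np.int64)
          even, odd = x[0::2].copy(), x[1::2].copy()
          # Predict: detail = odd - floor((left + right) / 2), symmetric extension at the edge.
          left, right = even, np.append(even[1:], even[-1])
          d = odd - ((left + right) >> 1)
          # Update: approx = even + floor((d_left + d + 2) / 4).
          dl = np.insert(d[:-1], 0, d[0])
          s = even + ((dl + d + 2) >> 2)
          return s, d

      def inv53(s, d):
          dl = np.insert(d[:-1], 0, d[0])
          even = s - ((dl + d + 2) >> 2)
          left, right = even, np.append(even[1:], even[-1])
          odd = d + ((left + right) >> 1)
          out = np.empty(even.size + odd.size, dtype=np.int64)
          out[0::2], out[1::2] = even, odd
          return out

      x = np.random.default_rng(3).integers(0, 256, size=64)
      s, d = fwd53(x)
      assert np.array_equal(inv53(s, d), x)          # perfect reconstruction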

  9. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-01

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.

  10. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  11. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  12. Optical image encryption via photon-counting imaging and compressive sensing based ptychography

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin; Hwang, In-Chul; Shi, Yishi; Lee, Byung-Geun

    2015-06-01

    In this study, we investigate the integration of compressive sensing (CS) and photon-counting imaging (PCI) techniques with a ptychography-based optical image encryption system. Primarily, the plaintext real-valued image is optically encrypted and recorded via a classical ptychography technique. Further, sparse-based representations of the original encrypted complex data can be produced by combining the CS and PCI techniques with the primary encrypted image. Such a combination takes advantage of a reduced set of encrypted samples (i.e., linearly projected random compressive complex samples and photon-counted complex samples) that can be exploited to realize optical decryption and that inherently serves as a secret key (i.e., independent of the encryption phase keys), making an intruder attack futile. In addition, recording fewer encrypted samples provides a substantial bandwidth reduction in online transmission. We demonstrate that the fewer sparse-based complex samples have adequate information to realize decryption. To the best of our knowledge, this is the first report on integrating CS and PCI with conventional ptychography-based optical image encryption.

  13. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the Internet, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images online, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate and requires less recognition time in the compressed domain.

  14. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
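
    A rough sketch of the general CSF-driven quantization idea follows; the CSF model, its constants, and the mapping from DCT index to frequency are illustrative assumptions and not the quantization matrices constructed in the record.

      # Rough sketch of CSF-driven quantization: larger quantization steps are assigned
      # to spatial frequencies the eye resolves poorly. The CSF model and constants here
      # are assumptions for illustration only.
      import numpy as np
      from scipy.fft import dctn, idctn

      def csf(f):
          # Simple band-pass CSF model (Mannos-Sakrison form), frequency in arbitrary units.
          return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

      u = np.arange(8)
      fu, fv = np.meshgrid(u, u, indexing="ij")
      freq = np.hypot(fu, fv)                              # radial DCT frequency index
      Q = np.clip(16.0 / np.maximum(csf(freq), 1e-3), 8, 255)  # step size ~ 1 / sensitivity
      Q[0, 0] = 8.0                                        # keep the DC term finely quantized

      block = np.random.default_rng(4).integers(0, 256, size=(8, 8)).astype(float) - 128
      coeffs = dctn(block, norm="ortho")
      quantized = np.round(coeffs / Q)                     # what would be entropy-coded
      recon = idctn(quantized * Q, norm="ortho") + 128
      print(np.abs(recon - (block + 128)).mean())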

  15. Reconstruction-Free Action Inference from Compressive Imagers.

    PubMed

    Kulkarni, Kuldeep; Turaga, Pavan

    2016-04-01

    Persistent surveillance from camera networks, such as those at parking lots or on UAVs, often produces large amounts of video data, resulting in significant challenges for inference in terms of storage, communication, and computation. Compressive cameras have emerged as a potential solution to deal with the data deluge in such applications. However, inference tasks such as action recognition require high quality features, which implies reconstructing the original video data. Much work in compressive sensing (CS) theory is geared towards solving the reconstruction problem, where state-of-the-art methods are computationally intensive and provide low-quality results at high compression rates. Thus, reconstruction-free methods for inference are much desired. In this paper, we propose reconstruction-free methods for action recognition from compressive cameras at high compression ratios of 100 and above. Recognizing actions directly from CS measurements requires features that are mostly nonlinear and thus not directly applicable to the measurements. This leads us to search for properties that are preserved in compressive measurements. To this end, we propose the use of spatio-temporal smashed filters, which are compressive-domain versions of pixel-domain matched filters. We conduct experiments on publicly available databases and show that one can obtain recognition rates comparable to the oracle method in the uncompressed setup, even at high compression ratios.
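
    The smashed-filter idea rests on the fact that random projections approximately preserve inner products, so template correlation can be evaluated on the measurements themselves. The sketch below illustrates that property with synthetic vectors and a Gaussian measurement matrix; the sizes are arbitrary assumptions.

      # Sketch of the smashed-filter principle: inner products are approximately preserved
      # under random projection, so matched filtering can be done on the compressive
      # measurements without reconstruction. Sizes and signals are synthetic.
      import numpy as np

      rng = np.random.default_rng(5)
      n, m = 4096, 256                               # ambient dimension, number of measurements
      x = rng.standard_normal(n)                     # vectorized frame
      f = x + 0.1 * rng.standard_normal(n)           # matched template (noisy copy)
      g = rng.standard_normal(n)                     # unrelated template

      Phi = rng.standard_normal((m, n)) / np.sqrt(m) # random measurement matrix
      y, yf, yg = Phi @ x, Phi @ f, Phi @ g

      print(np.dot(x, f), np.dot(y, yf))             # close on average
      print(np.dot(x, g), np.dot(y, yg))             # both small relative to the matched response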

  16. Revisiting the Recommended Geometry for the Diametrally Compressed Ceramic C-Ring Specimen

    SciTech Connect

    Jadaan, Osama M.; Wereszczak, Andrew A

    2009-04-01

    A study conducted several years ago found that a stated allowable width/thickness (b/t) ratio in ASTM C1323 (Standard Test Method for Ultimate Strength of Advanced Ceramics with Diametrally Compressed C-Ring Specimens at Ambient Temperature) could ultimately cause the prediction of a non-conservative probability of survival when the measured C-ring strength was scaled to a different size. Because of that problem, this study sought to reevaluate the stress state and geometry of the C-ring specimen and suggest changes to ASTM C1323 that would resolve that issue. Elasticity, mechanics of materials, and finite element solutions were revisited with the C-ring geometry. To avoid introducing more than 2% error, it was determined that the C-ring width/thickness (b/t) ratio should range between 1 and 3 and that its inner radius/outer radius (ri/ro) ratio should range between 0.50 and 0.95. ASTM C1323 presently allows b/t to be as large as 4, so that ratio should be reduced to 3.

  17. Passive forgery detection using discrete cosine transform coefficient analysis in JPEG compressed images

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Shian; Tsay, Jyh-Jong

    2016-05-01

    Passive forgery detection aims to detect traces of image tampering without the need for prior information. With the increasing demand for image content protection, passive detection methods able to identify image tampering areas are increasingly needed. However, most current passive approaches either work only for image-level JPEG compression detection and cannot localize region-level forgery, or suffer from high false-detection rates when localizing altered regions. This paper proposes an effective approach based on discrete cosine transform coefficient analysis for the detection and localization of altered regions of JPEG compressed images. This approach can also work with altered JPEG images resaved in JPEG compressed format with different quality factors. Experiments with various tampering methods, such as copy-and-paste, image completion, and composite tampering, show that the proposed approach is able to effectively detect and localize altered areas and is not sensitive to image contents such as edges and textures.

  18. Venous Thoracic Outlet Compression and the Paget-Schroetter Syndrome: A Review and Recommendations for Management

    SciTech Connect

    Thompson, J. F. Winterborn, R. J.; Bays, S.; White, H.; Kinsella, D. C.; Watkinson, A. F.

    2011-10-15

    Paget Schroetter syndrome, or effort thrombosis of the axillosubclavian venous system, is distinct from other forms of upper limb deep vein thrombosis. It occurs in younger patients and often is secondary to competitive sport, music, or strenuous occupation. If untreated, there is a higher incidence of disabling venous hypertension than was previously appreciated. Anticoagulation alone or in combination with thrombolysis leads to a high rate of rethrombosis. We have established a multidisciplinary protocol over 15 years, based on careful patient selection and a combination of lysis, decompressive surgery, and postoperative percutaneous venoplasty. During the past 10 years, a total of 232 decompression procedures have been performed. This article reviews the literature and presents the Exeter Protocol along with practical recommendations for management.

  19. Depth-dependent swimbladder compression in herring Clupea harengus observed using magnetic resonance imaging.

    PubMed

    Fässler, S M M; Fernandes, P G; Semple, S I K; Brierley, A S

    2009-01-01

    Changes in swimbladder morphology with pressure in an Atlantic herring Clupea harengus were examined by magnetic resonance imaging of a dead fish in a purpose-built pressure chamber. Swimbladder volume changed with pressure according to Boyle's Law, but compression in the lateral aspect was greater than in the dorsal aspect. This uneven compression affects acoustic backscattering less than symmetrical compression would and should therefore produce less pronounced effects of depth on acoustic biomass estimates of C. harengus. PMID:20735542

  20. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  1. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
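
    A minimal sketch of the hybrid lossless principle follows, with a crude low-pass DCT layer standing in for the wavelet-fractal coder and zlib standing in for the Huffman stage: coding the exact residual on top of any lossy layer yields perfect (infinite-PSNR) reconstruction, provided the lossy layer is also transmitted.

      # Minimal sketch of the hybrid lossless idea (the quantized low-pass DCT layer and
      # zlib are stand-ins for the wavelet-fractal coder and the Huffman stage): a lossy
      # layer plus an exactly coded residual gives lossless recovery.
      import numpy as np, zlib
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(6)
      img = rng.integers(0, 256, size=(64, 64)).astype(np.int16)

      # Stand-in lossy layer: keep only a coarse low-frequency reconstruction.
      coeffs = dctn(img.astype(float), norm="ortho")
      coeffs[8:, :] = 0
      coeffs[:, 8:] = 0
      lossy = np.rint(idctn(coeffs, norm="ortho")).astype(np.int16)   # decoder can regenerate this

      residual = img - lossy                                   # residual to be coded exactly
      code = zlib.compress(residual.tobytes(), 9)              # entropy-code the residual

      decoded = lossy + np.frombuffer(zlib.decompress(code), dtype=np.int16).reshape(img.shape)
      assert np.array_equal(decoded, img)                      # exact (lossless) recovery
      print(len(code))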

  2. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900

  3. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods. PMID:27661381

  4. Lossless compression of hyperspectral images based on the prediction error block

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao

    2016-05-01

    A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which is used to compress spaceborne hyperspectral data effectively. In order to make full use of the intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme can offer both high compression performance and low encoder and decoder complexity, making it suitable for on-board compression of hyperspectral images.

  5. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  6. Effect of block size on image quality for compressed chest radiographs

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.

    1992-05-01

    Data compression can improve imaging system efficiency by reducing the required storage space and the image transmission time. Transform compression methods have been applied to digital radiographs with good results. Block transform compression is usually based on 8 X 8 or 16 X 16 transform blocks for the sake of simplicity and speed. Compression with these small block sizes tends to require accurate coefficient representations to prevent blocking artifacts. Weighted quantization of block transform coefficients can reduce the blocking effects and improve compression performance. Full-frame compression has the advantage of eliminating blocking effects but the disadvantage of a heavy demand for computing resources. Small-block compression can retain local variation better and has a simpler and faster implementation. We have evaluated the performance tradeoffs for different block sizes and their effects on the image quality of chest radiographs. The results showed that there is no significant difference in either root-mean-square error or power spectra between different block sizes for visually lossless compression (at about a 10:1 compression ratio).

  7. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods. PMID:20875970
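
    The rate-control idea can be sketched with stand-ins: Pillow's JPEG codec in place of JPEG 2000 and a plain PSNR target in place of the VDM/JND criterion. The quality threshold and the synthetic tile below are arbitrary assumptions.

      # Sketch of threshold-guided rate control (Pillow's JPEG codec and a plain PSNR
      # target stand in for JPEG 2000 and the visual discrimination model): lower the
      # quality setting until the distortion metric first crosses the chosen threshold.
      import io
      import numpy as np
      from PIL import Image
      from scipy.ndimage import gaussian_filter

      def psnr(a, b):
          mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

      rng = np.random.default_rng(7)
      tile = gaussian_filter(rng.integers(0, 256, size=(256, 256)).astype(float), 3).astype(np.uint8)
      threshold_db = 42.0                            # assumed "visually lossless" criterion

      best = (100, int(tile.nbytes))                 # fallback: effectively uncompressed
      for quality in range(95, 10, -5):
          buf = io.BytesIO()
          Image.fromarray(tile).save(buf, format="JPEG", quality=quality)
          buf.seek(0)
          recon = np.asarray(Image.open(buf))
          if psnr(tile, recon) < threshold_db:
              break                                  # distortion would become noticeable here
          best = (quality, buf.getbuffer().nbytes)
      print("lowest acceptable quality, bytes:", best)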

  8. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    PubMed

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626
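
    The following small simulation is in the spirit of the analysis (it does not reproduce the derivation): AC DCT coefficients are drawn from Laplacian distributions, quantized with a uniform step, and the average block variance is compared before and after; all constants are illustrative assumptions.

      # Small simulation in the spirit of the analysis: 8x8 blocks whose AC DCT
      # coefficients follow Laplacian distributions are quantized with a uniform step,
      # and the average block variance is compared before and after.
      import numpy as np
      from scipy.fft import idctn

      rng = np.random.default_rng(8)
      blocks, step = 2000, 12.0
      scale = 8.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8)))  # decaying AC energy
      scale[0, 0] = 0.0                                               # fix DC, vary AC only

      var_before, var_after = [], []
      for _ in range(blocks):
          coeffs = rng.laplace(0.0, 1.0, size=(8, 8)) * scale
          coeffs[0, 0] = 1024.0                                       # constant DC level
          q = np.round(coeffs / step) * step                          # JPEG-style quantization
          var_before.append(idctn(coeffs, norm="ortho").var())
          var_after.append(idctn(q, norm="ortho").var())

      print(np.mean(var_before), np.mean(var_after))                  # variance typically drops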

  9. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    PubMed

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.

  10. Visual sensitivity correlated tone reproduction for low dynamic range images in the compression field

    NASA Astrophysics Data System (ADS)

    Lee, Geun-Young; Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-11-01

    An image toning method for low dynamic range image compression is presented. The proposed method inserts tone mapping into the JPEG baseline pipeline rather than applying it as postprocessing. First, an image is decomposed into detail, base, and surrounding components in terms of the discrete cosine transform coefficients. Subsequently, a luminance-adaptive tone mapping based on human visual sensitivity properties is applied. In addition, compensation modules are added to enhance visually sensitive factors such as saturation, sharpness, and gamma. A comparative study confirms that the transmitted compressed images have good image quality.

  11. A new near-lossless scheme for multiview image compression

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Vautrot, Philippe; Lucas, Laurent

    2010-02-01

    In the last few years, autostereoscopy has become an emerging technology. This technique uses n acquisitions of the same scene and therefore introduces a new dimension of data redundancy. The process generates a large amount of data (typically n times more than a single image) that needs to be compressed for further network applications. The scheme must be nearly lossless, since autostereoscopy is very sensitive to artifacts, so common JPEG compression is not suitable for this application. A simple way to compress an image sequence is to take each view and compress it separately with well-known near-lossless algorithms such as JPEG at high quality, JPEG2000, or JPEG-LS. This approach is very easy to implement but does not reduce the inter-view redundancy, and it can be improved by considering the whole image set. In this paper, we present an alternative to traditional methods used for image compression: MICA (Multiview Image Compression Algorithm). MICA is a near-lossless scheme that exploits the positive-sided geometric distribution (PSGD) of pixels in the difference of two consecutive views with a modified arithmetic coding. However, we choose to keep a lossless compression scheme (JPEG-LS) for two specific views in order to avoid error propagation during the decoding process. The algorithm has low complexity and can be easily parallelized either on CPU or on GPU for real-time applications or autostereoscopic videos.
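
    The core inter-view idea can be sketched as follows, with zlib standing in for the modified arithmetic coder and a shifted synthetic view standing in for a real multiview pair: the difference between consecutive views is sharply peaked around zero and therefore codes more compactly than an independent view.

      # Sketch of inter-view difference coding: code one reference view plus the
      # difference to the next view (zlib is a stand-in entropy coder; the shifted
      # synthetic view is a stand-in for a real stereo pair).
      import numpy as np, zlib

      rng = np.random.default_rng(12)
      view0 = np.cumsum(rng.integers(-3, 4, size=(256, 256)), axis=1).astype(np.int16)
      view1 = np.roll(view0, 2, axis=1) + rng.integers(-1, 2, size=view0.shape).astype(np.int16)

      independent = len(zlib.compress(view0.tobytes(), 9)) + len(zlib.compress(view1.tobytes(), 9))
      difference = len(zlib.compress(view0.tobytes(), 9)) + \
                   len(zlib.compress((view1 - view0).astype(np.int16).tobytes(), 9))
      print(independent, difference)                 # the difference-based stream is typically smaller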

  12. Datapath system for multiple electron beam lithography systems using image compression

    NASA Astrophysics Data System (ADS)

    Yang, Jeehong; Savari, Serap A.; Harris, H. Rusty

    2013-07-01

    The datapath throughput of electron beam lithography systems can be improved by applying lossless image compression to the layout images and using an electron beam writer that contains a decoding circuit, packed on a single silicon die, to decode the compressed image on the fly. In our past research, we introduced Corner2, a lossless layout image compression algorithm that achieved significantly better performance in compression ratio, encoding/decoding speed, and decoder memory requirement than Block C4. However, it assumed a somewhat different writing strategy from those currently suggested by multiple electron beam (MEB) system designers. Here, the Corner2 algorithm is modified so that it can support the writing strategy of an MEB system.

  13. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented. It integrates novel functions and algorithms within a novel hardware architecture, enabling efficient on-chip implementation.

  14. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  15. Improving signal-to-noise ratio performance of compressive imaging based on spatial correlation

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Zou, Yunhao; Dai, Huidong; Gu, Guohua

    2016-08-01

    In this paper, compressive imaging based on spatial correlation (CISC), which uses second-order correlation with the measurement matrix, is introduced to improve the signal-to-noise ratio performance of compressive imaging (CI). Numerical simulations and experiments are performed as well. The results show that CISC performs much better than CI in three common noise environments, which paves the way for real applications.

  16. Fixed-quality/variable bit-rate on-board image compression for future CNES missions

    NASA Astrophysics Data System (ADS)

    Camarero, Roberto; Delaunay, Xavier; Thiebaut, Carole

    2012-10-01

    The huge improvements in resolution and dynamic range of current [1][2] and future CNES remote sensing missions (from 5m/2.5m in Spot5 to 70cm in Pleiades) illustrate the increasing need for efficient on-board image compressors. Many techniques have been considered by CNES during the last years in order to go beyond usual compression ratios: new image transforms or post-transforms [3][4], exceptional processing [5], selective compression [6]. However, even if significant improvements have been obtained, none of those techniques has ever challenged an essential drawback in current on-board compression schemes: the fixed rate (or compression ratio). This classical assumption provides highly predictable data volumes that simplify storage and transmission. On the other hand, it requires every image segment (strip) of the scene to be compressed into the same amount of data. Therefore, this fixed bit-rate is dimensioned on worst-case assessments to guarantee the quality requirements in all areas of the image. This is obviously not the most economical way of achieving the required image quality for every single segment. Thus, CNES has started a study to re-use existing compressors [7] in a fixed-quality/variable bit-rate mode. The main idea is to compute a local complexity metric in order to assign the optimum bit-rate needed to comply with the quality requirements. Consequently, complex areas are compressed less than simple ones, offering a better image quality for an equivalent global bit-rate. The "near-lossless bit-rate" of image segments has proved to be an efficient estimator of image complexity. It links quality criteria and bit-rates through a single theoretical relationship. Compression parameters can thus be computed automatically in accordance with the quality requirements. In addition, this complexity estimator could be implemented in a one-pass compression and truncation scheme.
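
    A toy sketch of the fixed-quality/variable bit-rate allocation follows; the complexity metric used here (zeroth-order entropy of a horizontal prediction residual) is only a stand-in for the "near-lossless bit-rate" estimator described in the record.

      # Sketch of fixed-quality / variable bit-rate allocation: estimate each strip's
      # complexity and give complex strips more bits under a global budget (the metric
      # and the synthetic strips are illustrative assumptions).
      import numpy as np

      def complexity_bits_per_pixel(strip):
          residual = np.diff(strip.astype(np.int32), axis=1)        # simple horizontal prediction
          values, counts = np.unique(residual, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())                     # zeroth-order entropy

      rng = np.random.default_rng(9)
      strips = [rng.integers(0, 256, size=(64, 512)) // (1 + i) for i in range(6)]  # varied complexity

      c = np.array([complexity_bits_per_pixel(s) for s in strips])
      budget_bpp = 2.0                                               # mean rate over the whole scene
      rates = budget_bpp * c / c.mean()                              # complex strips get more bits
      print(np.round(c, 2), np.round(rates, 2))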

  17. Student Images of Agriculture: Survey Highlights and Recommendations.

    ERIC Educational Resources Information Center

    Mallory, Mary E.; Sommer, Robert

    1986-01-01

    The high school students studied were unaware of the range of opportunities in agricultural careers. It was recommended that the University of California, Davis initiate a public relations campaign, with television advertising, movies, and/or public service announcements focusing on exciting, high-tech agricultural research and enterprise. (CT)

  18. Compressed Sensing for Millimeter-wave Ground Based SAR/ISAR Imaging

    NASA Astrophysics Data System (ADS)

    Yiğit, Enes

    2014-11-01

    Millimeter-wave (MMW) ground-based (GB) synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging are powerful tools for the detection of foreign object debris (FOD) and concealed objects, but they require wide bandwidths and dense sampling in both the slow-time and fast-time domains according to the Shannon/Nyquist sampling theorem. However, thanks to compressive sensing (CS) theory, GB-SAR/ISAR data can be reconstructed from far fewer random samples than the Nyquist rate requires. In this paper, the impact of both random frequency sampling and random spatial-domain data collection by a SAR/ISAR sensor on the reconstruction quality of a scene of interest was studied. To investigate the feasibility of the proposed CS framework, different experiments with various FOD-like and concealed-object-like targets were carried out at the Ka and W bands of the MMW range. The robustness and effectiveness of the recommended CS-based reconstruction configurations were verified by comparing the integrated side-lobe ratios (ISLR) of the images.

  19. Estimate of DTM Degradation due to Image Compression for the Stereo Camera of the Bepicolombo Mission

    NASA Astrophysics Data System (ADS)

    Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G.

    2016-06-01

    The great amount of data that will be produced during the imaging of Mercury by the stereo camera (STC) of the BepiColombo mission must be reconciled with the restrictions imposed by the downlink bandwidth, which could otherwise drastically reduce the duration and frequency of the observations. The implementation of an on-board real-time data compression strategy preserving as much information as possible is therefore mandatory. The degradation that image compression might cause to the DTM accuracy is worth investigating. During the stereo-validation procedure of the innovative STC imaging system, several image pairs of an anorthosite sample and a modelled piece of concrete were acquired under different illumination angles. This set of images has been used to test the effects of the compression algorithm (Langevin and Forni, 2000) on the accuracy of the DTM produced by dense image matching. Different configurations, taking into account both the illumination of the surface and the compression ratio, have been considered. The accuracy of the DTMs is evaluated by comparison with a high-resolution laser-scan acquisition of the same targets. The error assessment also includes an analysis in the image plane indicating the influence of the compression procedure on the image measurements.

  20. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    NASA Astrophysics Data System (ADS)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    High resolution in satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps applied once the image is acquired, file sizes increase even further, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing the raw data as well as the various levels of processed data is a necessity for archiving stations that need to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open source programs supporting the relevant compression algorithms, namely Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA and LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate and Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), have been applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 and 7 satellites with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS), in order to observe the compression performance of these algorithms over sample datasets, i.e., how much the image data can be reduced while ensuring lossless compression.
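
    Python's standard library happens to expose stand-ins for three of the algorithm families named above (zlib for Deflate, lzma for LZMA, bz2 for a BWT-based coder), so the comparison can be sketched on a synthetic band; real GeoTIFF tiles would replace the synthetic data in the actual study.

      # Toy comparison in the spirit of the study: compare lossless ratios of three
      # standard-library codecs on a synthetic, spatially correlated 16-bit band.
      import bz2, lzma, zlib
      import numpy as np

      rng = np.random.default_rng(11)
      walk = np.cumsum(rng.integers(-2, 3, size=(1024, 1024)), axis=1)   # correlated rows
      band = np.clip(walk + 1000, 0, 4095).astype(np.uint16).tobytes()

      for name, fn in [("deflate", lambda d: zlib.compress(d, 9)),
                       ("lzma", lzma.compress),
                       ("bzip2", bz2.compress)]:
          print(f"{name:8s} ratio {len(band) / len(fn(band)):.2f}")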

  1. Time-of-flight compressed-sensing ultrafast photography for encrypted three-dimensional dynamic imaging

    NASA Astrophysics Data System (ADS)

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2016-02-01

    We applied compressed ultrafast photography (CUP), a computational imaging technique, to acquire three-dimensional (3D) images. The approach unites image encryption, compression, and acquisition in a single measurement, thereby allowing efficient and secure data transmission. By leveraging the time-of-flight (ToF) information of pulsed light reflected by the object, we can reconstruct a volumetric image (150 mm×150 mm×1050 mm, x × y × z) from a single camera snapshot. Furthermore, we demonstrated high-speed 3D videography of a moving object at 75 frames per second using the ToF-CUP camera.

  2. High capacity image steganography method based on framelet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Moyan; He, Zhibiao

    2015-12-01

    To improve the capacity and imperceptibility of image steganography, a novel high-capacity, high-imperceptibility image steganography method based on a combination of the framelet transform and compressive sensing (CS) is put forward. First, SVD (Singular Value Decomposition) is applied to the measurement values obtained by applying the compressive sensing technique to the secret data. The singular values are then embedded into the low-frequency coarse subbands of the framelet transform of the blocks of the cover image, which is divided into non-overlapping blocks. Finally, the inverse framelet transform is applied and the blocks are combined to obtain the stego image. The experimental results show that the proposed steganography method has good performance in hiding capacity, security, and imperceptibility.

  3. Efficient compression scheme by use of the region division of elemental images on MALT in three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Kang, Ho-Hyun; Lee, Jung-Woo; Shin, Dong-Hak; Kim, Eun-Soo

    2010-02-01

    This paper addresses an efficient compression scheme, based on MPEG-4, for the elemental image arrays (EIAs) generated by the moving array lenslet technique (MALT). The EIAs are picked up by MALT, which controls the spatial sampling of the rays and produces a few EIAs by rapidly vibrating the positions of the lenslet array in the lateral directions within the retention time of the afterimage of the human eye. To enhance the similarity within each EIA picked up by MALT, the several EIAs obtained from MALT are regenerated by collecting the elemental images occupying the same position in each EIA. Each newly generated EIA has high similarity among adjacent elemental images. To illustrate the feasibility of the proposed scheme, some experiments are carried out showing the increased compression efficiency; we obtained an improvement in compression ratio of 12% compared to the straightforward compression scheme.

  4. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands

  5. Characterization of Diesel and Gasoline Compression Ignition Combustion in a Rapid Compression-Expansion Machine using OH* Chemiluminescence Imaging

    NASA Astrophysics Data System (ADS)

    Krishnan, Sundar Rajan; Srinivasan, Kalyan Kumar; Stegmeir, Matthew

    2015-11-01

    Direct-injection compression ignition combustion of diesel and gasoline were studied in a rapid compression-expansion machine (RCEM) using high-speed OH* chemiluminescence imaging. The RCEM (bore = 84 mm, stroke = 110-250 mm) was used to simulate engine-like operating conditions at the start of fuel injection. The fuels were supplied by a high-pressure fuel cart with an air-over-fuel pressure amplification system capable of providing fuel injection pressures up to 2000 bar. A production diesel fuel injector was modified to provide a single fuel spray for both diesel and gasoline operation. Time-resolved combustion pressure in the RCEM was measured using a Kistler piezoelectric pressure transducer mounted on the cylinder head and the instantaneous piston displacement was measured using an inductive linear displacement sensor (0.05 mm resolution). Time-resolved, line-of-sight OH* chemiluminescence images were obtained using a Phantom V611 CMOS camera (20.9 kHz @ 512 x 512 pixel resolution, ~ 48 μs time resolution) coupled with a short wave pass filter (cut-off ~ 348 nm). The instantaneous OH* distributions, which indicate high temperature flame regions within the combustion chamber, were used to discern the characteristic differences between diesel and gasoline compression ignition combustion. The authors gratefully acknowledge facilities support for the present work from the Energy Institute at Mississippi State University.

  6. Efficient compression of rearranged time-multiplexed elemental image arrays in MALT-based three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Kang, Ho-Hyun; Lee, Byung-Gook; Kim, Eun-Soo

    2011-06-01

    In this paper, an approach to efficiently compress the time-multiplexed EIAs picked up from a MALT-based integral imaging system is proposed. In this method, the time-multiplexed EIAs are rearranged by collecting the elemental images occupying the same position in each EIA, to enhance the similarity among the elemental images. Then, MPEG-4 is applied to these rearranged elemental images for compression. The experimental results show that the average correlation quality (ACQ) value, representing the degree of similarity between the elemental images, and the resultant compression efficiency were enhanced by 11.50% and 9.97%, respectively, on average for three kinds of test scenarios, compared to those of the conventional method. These experimental results confirm the feasibility of the proposed scheme.

  7. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
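
    The first step mentioned above is a reversible color transform. The JPEG 2000 reversible color transform (RCT) shown below is a standard example of such an integer-to-integer transform; it illustrates the decorrelation step but is not necessarily the exact transform used by the authors.

      # The JPEG 2000 RCT: an exactly invertible integer color transform, shown here
      # only to illustrate the reversible decorrelation step of such lossless coders.
      import numpy as np

      def rct_forward(r, g, b):
          y = (r + 2 * g + b) >> 2          # floor((R + 2G + B) / 4)
          cu = b - g
          cv = r - g
          return y, cu, cv

      def rct_inverse(y, cu, cv):
          g = y - ((cu + cv) >> 2)
          b = cu + g
          r = cv + g
          return r, g, b

      rgb = np.random.default_rng(10).integers(0, 256, size=(3, 32, 32)).astype(np.int32)
      y, cu, cv = rct_forward(*rgb)
      assert all(np.array_equal(a, b) for a, b in zip(rct_inverse(y, cu, cv), rgb))  # exactly invertible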

  8. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging.

    PubMed

    Ke, Jun; Lam, Edmund Y

    2016-05-01

    Compressive measurements benefit low-light-level imaging (L3-imaging) due to the significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate the data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that can minimize the image reconstruction error. First, we make use of the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-value sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. From several experimental results, we show that the latter delivers a similar reconstruction performance as the former, while having a smaller dynamic range requirement to system sensors.

  9. An introduction to video image compression and authentication technology for safeguards applications

    SciTech Connect

    Johnson, C.S.

    1995-07-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.

  10. A Complete Image Compression Scheme Based on Overlapped Block Transform with Post-Processing

    NASA Astrophysics Data System (ADS)

    Kwan, C.; Li, B.; Xu, R.; Li, X.; Tran, T.; Nguyen, T.

    2006-12-01

    A complete system was built for high-performance image compression based on overlapped block transform. Extensive simulations and comparative studies were carried out for still image compression including benchmark images (Lena and Barbara), synthetic aperture radar (SAR) images, and color images. We have achieved consistently better results than three commercial products in the market (a Summus wavelet codec, a baseline JPEG codec, and a JPEG-2000 codec) for most images that we used in this study. Included in the system are two post-processing techniques based on morphological and median filters for enhancing the perceptual quality of the reconstructed images. The proposed system also supports the enhancement of a small region of interest within an image, which is of interest in various applications such as target recognition and medical diagnosis.

  11. Informational Analysis for Compressive Sampling in Radar Imaging

    PubMed Central

    Zhang, Jingxiong; Yang, Ke

    2015-01-01

    Compressive sampling or compressed sensing (CS) works on the assumption that the underlying signal is sparse or compressible, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation. PMID:25811226

  12. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption that the underlying signal is sparse or compressible, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.

  13. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in terms of providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient multi-resolution framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best among all. MAE values achieved by these filters were 5 x 10^-14 to 12 x 10^-14 for both CT brain and abdomen images at different decomposition levels. This indicates that with these filters a very small error (approximately 7 x 10^-14) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded to achieve a higher compression ratio. PMID:12956184
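
    As an illustration of this kind of experiment, the following sketch reproduces the basic pipeline with PyWavelets rather than the MATLAB toolboxes used in the paper: decompose, hard-threshold the detail coefficients, reconstruct, and report MSE, maximum absolute error and PSNR. The wavelet name, decomposition level and threshold are illustrative choices, not the paper's settings.

        import numpy as np
        import pywt

        def threshold_compress(image, wavelet="bior4.4", level=3, thresh=10.0):
            """Decompose, hard-threshold the detail coefficients, reconstruct,
            and report MSE / MAE / PSNR against the original image."""
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            new_coeffs = [coeffs[0]]  # keep the approximation band untouched
            for details in coeffs[1:]:
                new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="hard")
                                        for d in details))
            rec = pywt.waverec2(new_coeffs, wavelet)[:image.shape[0], :image.shape[1]]

            err = image.astype(float) - rec
            mse = float(np.mean(err ** 2))
            mae = float(np.max(np.abs(err)))          # maximum absolute error
            psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
            return rec, mse, mae, psnr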

  14. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Cai, Hong-Kun; Zheng, Hong-Ying

    2015-06-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is embedded into the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. Project supported by the Open Research Fund of Chongqing Key Laboratory of Emergency Communications, China (Grant No. CQKLEC, 20140504), the National Natural Science Foundation of China (Grant Nos. 61173178, 61302161, and 61472464), and the Fundamental Research Funds for the Central Universities, China (Grant Nos. 106112013CDJZR180005 and 106112014CDJZR185501).
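
    A minimal sketch of the Arnold-map scrambling step is given below for a square N x N coefficient block; the exact map variant, the iteration count and the subsequent watermarking and compressive-sensing stages follow the paper and are not reproduced here.

        import numpy as np

        def arnold_scramble(block, iterations=1):
            """Scramble a square N x N coefficient block with the Arnold cat map:
            (x, y) -> ((x + y) mod N, (x + 2y) mod N).  The map is a bijection,
            so applying the inverse map (or iterating to the map's period)
            restores the original block."""
            n = block.shape[0]
            assert block.shape[0] == block.shape[1], "Arnold map needs a square block"
            out = block.copy()
            for _ in range(iterations):
                scr = np.empty_like(out)
                xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
                scr[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
                out = scr
            return out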

  15. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) implementations and can be integrated as an intellectual property (IP) core as part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
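
    The essence of the predictor can be illustrated with a toy one-dimensional sign-algorithm (sign-LMS) filter, sketched below; the actual FL compressor predicts across spectral bands and includes the pushbroom modification, which this sketch omits.

        import numpy as np

        def sign_lms_residuals(samples, order=3, mu=0.01):
            """Toy adaptive linear predictor using the sign algorithm: predict each
            sample from the previous `order` samples and nudge the weights by the
            sign of the prediction error.  The residuals are what an entropy coder
            would then compress losslessly."""
            x = samples.astype(float)
            w = np.zeros(order)
            residuals = np.zeros_like(x)
            for n in range(order, len(x)):
                context = x[n - order:n][::-1]          # most recent sample first
                pred = float(np.dot(w, context))
                err = x[n] - pred
                residuals[n] = err
                w += mu * np.sign(err) * context        # sign-algorithm update
            residuals[:order] = x[:order]               # no prediction for the warm-up
            return residuals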

  16. The Cyborg Astrobiologist: matching of prior textures by image compression for geological mapping and novelty detection

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.

    2014-07-01

    We describe an image-comparison technique of Heidemann and Ritter (2008a, b), which uses image compression and is capable of (i) detecting novel textures in a series of images and (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. The current system is not directly intended for mapping and novelty detection of a second field site based on image-compression analysis of an image database from a first field site, although our current system could be further developed towards this end. Furthermore, the image-comparison technique is an unsupervised technique that is not capable of directly classifying an image as containing a particular geological feature; labelling of such geological features is done post facto by human geologists associated with this study, for the purpose of analysing the system's performance. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy to robotic planetary rovers, and in assisting human astronauts in their geological exploration.
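
    Compression-based similarity measures of this kind are often expressed through the normalized compression distance (NCD); the sketch below uses zlib on serialized image patches and is meant only to convey the principle, not the exact Heidemann and Ritter formulation used by the system.

        import zlib

        def ncd(a: bytes, b: bytes) -> float:
            """Normalized compression distance: small values mean the two byte
            strings (e.g. serialized image patches) compress well together and are
            therefore similar; values near 1 indicate novelty with respect to b."""
            ca = len(zlib.compress(a))
            cb = len(zlib.compress(b))
            cab = len(zlib.compress(a + b))
            return (cab - min(ca, cb)) / max(ca, cb)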

  17. Performance evaluation of integer to integer wavelet transform for synthetic aperture radar image compression

    NASA Astrophysics Data System (ADS)

    Xue, Wentong; Song, Jianshe; Yuan, Lihai; Shen, Tao

    2005-11-01

    An efficient and novel image compression system for Synthetic Aperture Radar (SAR) imagery, which uses an integer-to-integer wavelet transform and a Modified Set Partitioning Embedded Block Coder (M-SPECK), is presented in this paper. Speckle noise, detailed texture, high dynamic range and vast data volume set SAR images clearly apart from natural images. The integer-to-integer wavelet transform is invertible in finite-precision arithmetic; it maps integers to integers and approximates the linear wavelet transform from which it is derived. Several filter banks are compared in terms of computational load, compression ratio and subjective visual quality, and the factors affecting the compression performance of the integer-to-integer wavelet transform are discussed in detail. The filter banks most appropriate for SAR image compression are then identified. High-frequency information accounts for a relatively larger proportion of SAR images than of natural images, so the quantization thresholds of traditional SPECK are modified to suit the content of SAR imagery. Both the integer-to-integer wavelet transform and the modified SPECK have the desirable feature of low computational complexity. Experimental results show the superiority of the proposed system over traditional approaches when trading off compression efficiency against computational complexity.
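
    One widely used reversible filter bank of the kind compared in the paper is the LeGall 5/3 lifting scheme (the lossless filter of JPEG 2000); the sketch below shows one 1-D level with periodic boundary handling on an even-length signal, which is enough to see why the transform maps integers to integers and is exactly invertible.

        import numpy as np

        def lift_53_forward(x):
            """One level of the reversible LeGall 5/3 lifting transform on a 1-D
            integer signal of even length (periodic extension at the boundaries)."""
            s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
            d = d - (s + np.roll(s, -1)) // 2          # predict odd samples
            s = s + (d + np.roll(d, 1) + 2) // 4       # update even samples
            return s, d

        def lift_53_inverse(s, d):
            """Undo the lifting steps in reverse order to recover the signal exactly."""
            s = s - (d + np.roll(d, 1) + 2) // 4
            d = d + (s + np.roll(s, -1)) // 2
            x = np.empty(s.size + d.size, dtype=np.int64)
            x[0::2], x[1::2] = s, d
            return x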

  18. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content are, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art in HDR image compression.

  19. The Cyborg Astrobiologist: Image Compression for Geological Mapping and Novelty Detection

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.

    2013-09-01

    We describe an image-comparison technique of Heidemann and Ritter [4,5] that uses image compression, and is capable of: (i) detecting novel textures in a series of images, as well as of: (ii) alerting the user to the similarity of a new image to a previously-observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coalbeds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving a 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (a 64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy to robotic planetary rovers, and in assisting human astronauts in their geological exploration.

  20. [Recommendations of the ESC guidelines regarding cardiovascular imaging].

    PubMed

    Sechtem, U; Greulich, S; Ong, P

    2016-08-01

    Cardiac imaging plays a key role in the diagnosis and risk stratification in the ESC guidelines for the management of patients with stable coronary artery disease. Demonstration of myocardial ischaemia guides the decision on which further diagnostic and therapeutic strategy should be followed in these patients. One should, however, not forget that there are no randomised studies supporting this type of management. In patients with a low pretest probability, coronary CT angiography is the optimal tool to exclude coronary artery stenoses rapidly and effectively. In the near future, however, better data are needed showing how much cardiac imaging is really necessary and how cost-effective it is in patients with stable coronary artery disease. PMID:27388914

  1. A context-sensitive image annotation recommendation engine for radiology.

    PubMed

    Mabotuwana, Thusitha; Qian, Yuechen; Sevenster, Merlijn

    2014-01-01

    In the typical radiology reading workflow, a radiologist would go through an imaging study and annotate specific regions of interest. The radiologist has the option to select a suitable description (e.g., "calcification") from a list of predefined descriptions, or input the description directly as free-text. However, this process is time-consuming and the descriptions are not standardized over time, even for the same patient or the same general finding. In this paper, we describe an approach that presents finding descriptions based on textual information extracted from a patient's prior reports. Using 133 finding descriptions obtained in routine oncology workflow, we demonstrate how the system can be used to reduce keystrokes by up to 86% in about 38% of the instances. We have integrated our solution into a PACS and discuss how the system can be used in a clinical setting to improve the image annotation workflow efficiency and promote standardization of finding descriptions. PMID:25160368

  2. A hyperspectral images compression algorithm based on 3D bit plane transform

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    Based on an analysis of hyperspectral images, a new compression algorithm using a 3-D bit-plane transform is proposed. In these images the spectral correlation is higher than the spatial correlation. The algorithm is proposed to overcome the shortcoming of the 1-D bit-plane transform, which can only reduce correlation when neighboring pixels have similar values. The algorithm applies the horizontal, vertical and spectral bit-plane transforms sequentially. Like the spectral bit-plane transform, the algorithm can easily be realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit plane are independent, the algorithm can be realized with a parallel computing model, which improves calculation efficiency and greatly reduces processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyperspectral image compression system by efficiently reducing the cost of computation and memory usage.
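
    As a minimal illustration of the starting point of such an algorithm, the snippet below splits a single band into its binary bit planes; the paper's 3-D transform then operates on these planes along the horizontal, vertical and spectral directions, which is not reproduced here.

        import numpy as np

        def bit_planes(band, nbits=8):
            """Split one image band into its binary bit planes (MSB first).
            The band is recovered exactly as sum(plane << b) over the planes."""
            band = band.astype(np.uint16)
            return [(band >> b) & 1 for b in range(nbits - 1, -1, -1)]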

  3. [Recommendations for training in cross-sectional cardiac imaging].

    PubMed

    Joffre, F; Boyer, L; Dacher, J-N; Gilard, M; Douek, P; Gueret, P

    2009-09-01

    Recent and expected future advances in cardiac imaging call for optimal training of the operators. This training concerns medical specialists coming from either radiology or cardiology. The training of medical specialists in cardiac imaging entails three essential steps. The first is basic training, taking place within each specialty, allowing the fellow to become acquainted with the clinical and technical basics. The second is specialized training, delivered principally post-residency; it must include an upgrade for each specialty in the domain it does not cover (a technical base for the cardiologist, a physio-pathological and clinical base for the radiologist), a specific theoretical training covering all aspects of cardiac imaging, and practical training in a certified training centre. The third is continuous medical training and maintenance of skills, which requires sustained activity in the field and regular participation in specific validated training actions. The different aspects of these rules are presented in this chapter.

  4. Lossless compression of hyperspectral images using C-DPCM-APL with reference bands selection

    NASA Astrophysics Data System (ADS)

    Wang, Keyan; Liao, Huilin; Li, Yunsong; Zhang, Shanshan; Wu, Xianyun

    2014-05-01

    The availability of hyperspectral images has increased in recent years; they are used in military and civilian applications such as target recognition, surveillance, geological mapping and environmental monitoring. Because of their large data volume and particular importance, lossless compression methods for hyperspectral images exist that mainly exploit the strong spatial or spectral correlation. C-DPCM-APL is a method that achieves the highest lossless compression ratio on the CCSDS hyperspectral images acquired in 2006, but it consumes the longest processing time among existing lossless compression methods because it determines the optimal prediction length for each band. C-DPCM-APL obtains its best compression performance mainly by using the optimal prediction length, while ignoring the correlation between the reference bands and the current band, which is a crucial factor influencing prediction accuracy. Considering this, we propose a method that selects reference bands according to the atmospheric absorption characteristics of hyperspectral images. Experiments on the CCSDS 2006 image data set show that the proposed method greatly reduces computational complexity without degrading lossless compression performance compared with C-DPCM-APL.

  5. ABDOMINAL LYMPHOMA: IMAGING WORK UP CHALLENGES AND RECOMMENDATIONS IN RESOURCE LIMITED SETUP.

    PubMed

    Kebede, Asfaw Atnafu; Bekele, Frehiwot; Assefa, Getachew

    2014-10-01

    Lymphoma management begins with an accurate diagnosis and staging. Major advances in imaging techniques make cross-sectional imaging and nuclear medicine techniques excellent tools for patient work-up. However, limited access to modern imaging modalities in resource-limited settings and the lack of a standardized imaging work-up challenge patient management. The aim was to assess the local lymphoma imaging work-up and management challenges in patients with lymphoma and to develop a local imaging and reporting guideline. Semi-structured qualitative interviews with six conveniently selected physicians (hematologists, oncologists and pathologists) who primarily care for lymphoma patients, together with a literature review on the role of various imaging modalities and the recommendations and experience of other countries, were used as the methodology. Conventional and basic imaging modalities are used in the work-up of patients in our setting. The imaging recommendation for these patients requires at least CT of the chest, abdomen and pelvis for initial diagnosis and FDG-PET and/or PET-CT for follow-up and recurrence. The comparable diagnostic potential of US and its widespread availability make US still the primary imaging modality. A lack of required information and inconsistency in radiologists' reports were found to challenge physicians in their patient management. The study concluded that US should remain the most important imaging modality in the initial treatment, staging and follow-up of patients in resource-limited settings. It also recommended a general imaging work-up and reporting framework. PMID:26410993

  6. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    PubMed

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
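
    The mode decision described above can be sketched as follows: interframe (difference) coding is activated only when the correlation with the previous frame exceeds a threshold, otherwise the frame is passed unchanged to the intraframe JPEG-LS coder. The threshold value below is illustrative and motion compensation is omitted.

        import numpy as np

        def choose_mode(prev_frame, frame, corr_threshold=0.9):
            """Decide per frame whether interframe coding is worthwhile: activate
            difference (inter) coding only when the correlation with the previous
            frame is high enough, otherwise fall back to plain intraframe JPEG-LS."""
            a = prev_frame.astype(float).ravel()
            b = frame.astype(float).ravel()
            corr = float(np.corrcoef(a, b)[0, 1])
            if corr >= corr_threshold:
                residual = frame.astype(np.int32) - prev_frame.astype(np.int32)
                return "inter", residual       # residual goes to the lossless coder
            return "intra", frame              # frame itself goes to JPEG-LS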

  7. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  8. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of up to 10 dB for the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  9. Method and apparatus for optical encoding with compressible imaging

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2006-01-01

    The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.

  10. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Liu, Zhengjun; Liu, Shutian

    2015-12-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are simultaneously completed without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal.
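
    For readers unfamiliar with error-reduction phase retrieval, the sketch below shows the generic single-plane loop (alternating Fourier-magnitude and object-domain constraints); the paper applies the same idea to a cascaded diffractive system with multiplexed inputs, which this simplified version does not model.

        import numpy as np

        def error_reduction(magnitude, support, iterations=200, seed=0):
            """Generic error-reduction phase retrieval: alternate between enforcing
            the measured Fourier magnitude and the object-domain support constraint
            (non-negativity inside the support, zero outside)."""
            rng = np.random.default_rng(seed)
            phase = np.exp(1j * 2 * np.pi * rng.random(magnitude.shape))
            field = magnitude * phase
            for _ in range(iterations):
                obj = np.fft.ifft2(field)
                obj = np.where(support, obj.real.clip(min=0), 0)    # object constraint
                spec = np.fft.fft2(obj)
                field = magnitude * np.exp(1j * np.angle(spec))     # magnitude constraint
            return obj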

  11. Dual domain watermarking for authentication and compression of cultural heritage images.

    PubMed

    Zhao, Yang; Campisi, Patrizio; Kundur, Deepa

    2004-03-01

    This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme. PMID:15376933

  12. A new fast matching method for adaptive compression of stereoscopic images

    NASA Astrophysics Data System (ADS)

    Ortis, A.; Battiato, S.

    2015-03-01

    In the last few years, due to the growing use of stereoscopic images, much effort has been spent by the scientific community to develop algorithms for stereoscopic image compression. Stereo images represent the same scene from two different views, and therefore they typically contain a high degree of redundancy. It is then possible to implement compression strategies devoted to exploiting the intrinsic characteristics of the two involved images, which are typically embedded in an MPO (Multi Picture Object) data format. An MPO file represents a stereoscopic image as a list of JPEG images. Our previous work introduced a simple block-matching approach to compute local residuals that are used during the decoding phase to reconstruct stereoscopic images with high perceptual quality; this allows the encoder to apply a high level of compression to at least one of the two involved images. On the other hand, the matching approach, based only on the similarity of the blocks, is rather inefficient. Starting from this point, the main contribution of this paper focuses on improving both the effectiveness of the matching step and its computational cost. The alternative approach greatly enhances the matching step by exploiting the geometric properties of a pair of stereoscopic images. In this way we significantly reduce the complexity of the method without affecting the quality of the results.
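
    The kind of geometric constraint referred to above can be illustrated with a rectified stereo pair, where the search for a matching block is restricted to the same image row (the epipolar line); the SAD-based sketch below conveys the idea but is not the authors' exact matching criterion, and it assumes the reference block fits inside both images.

        import numpy as np

        def match_block(left, right, y, x, block=8, max_disp=32):
            """SAD block matching for a rectified stereo pair: the block at (y, x)
            in the left image is searched only along the same row of the right
            image, which drastically shrinks the search space."""
            ref = left[y:y + block, x:x + block].astype(np.int32)
            best_d, best_sad = 0, np.inf
            for d in range(0, min(max_disp, x) + 1):      # shift towards the left
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            return best_d, best_sad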

  13. Geostatistical analysis of Landsat-TM lossy compression images in a high-performance computing environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Ana; Serral, Ivette; Pons, Xavier

    2011-11-01

    The main goal of this study is to characterize the effects of lossy image compression procedures on the spatial patterns of remotely sensed images, as well as to test the performance of job distribution tools specifically designed for obtaining geostatistical parameters (variogram) in a High Performance Computing (HPC) environment. To this purpose, radiometrically and geometrically corrected Landsat-5 TM images from April, July, August and September 2006 were compressed using two different methods: Band-Independent Fixed-Rate (BIFR) and three-dimensional Discrete Wavelet Transform (3d-DWT) applied to the JPEG 2000 standard. For both methods, a wide range of compression ratios (2.5:1, 5:1, 10:1, 50:1, 100:1, 200:1 and 400:1, from soft to hard compression) were compared. Variogram analyses conclude that all compression ratios maintain the variogram shapes and that the higher ratios (more than 100:1) reduce the variance of the sill parameter by about 5%. Moreover, the parallel solution in a distributed environment demonstrates that HPC offers a suitable scientific test bed for time-demanding execution processes, as in geostatistical analyses of remote sensing images.

  14. Lossless compression of RNAi fluorescence images using regional fluctuations of pixels.

    PubMed

    Karimi, Nader; Samavi, Shadrokh; Shirani, Shahram

    2013-03-01

    RNA interference (RNAi) is considered one of the most powerful genomic tools, allowing the study of drug discovery and the understanding of complex cellular processes through high-content screens. This field of study, which was the subject of the 2006 Nobel Prize in Medicine, has drastically changed the conventional methods of gene analysis. A large number of images have been produced by RNAi experiments. Even though a number of capable special-purpose methods have been proposed recently for the processing of RNAi images, there is no customized compression scheme for these images. Hence, highly proficient tools are required to compress these images. In this paper, we propose a new efficient lossless compression scheme for RNAi images. A new predictor specifically designed for these images is proposed. It is shown that pixels can be classified into three categories based on their intensity distributions. Using a classification of pixels based on the intensity fluctuations among a pixel's neighbors, a context-based method is designed. Comparisons of the proposed method with existing state-of-the-art lossless compression standards and well-known general-purpose methods are performed to show the efficiency of the proposed method.

  15. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  16. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  17. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    PubMed Central

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-01-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems. PMID:27004447

  18. Image and spectral image compression for four experiments on the ROSETTA and Mars Express missions of ESA

    NASA Astrophysics Data System (ADS)

    Langevin, Yves; Forni, O.

    2000-12-01

    The output rates of imaging scientific experiments on planetary missions far exceed the few tens of kbit/s provided by X- or Ka-band downlink. This severely restricts the duration and frequency of observations. Space applications present specific constraints for compression methods: space-qualified ROM and fast RAM chips have limited capacity and large power requirements. Real-time compression is therefore preferable (no large local data buffer) but requires a large processing throughput. Wavelet compression provides a fast and efficient method for lossy data compression when combined with tree-coding algorithms such as that of Said and Pearlman. We have developed such an algorithm for four instruments on ROSETTA (ESA cometary rendez-vous mission) and Mars Express (ESA Mars Orbiter and Lander mission), building on the experience from two experiments on CASSINI and MARS 96 for which lossless compression was implemented. Modern Digital Signal Processors using a pipeline architecture provide the required high computing capability. The Said-Pearlman tree-coding algorithm has been optimized for speed and code size by reducing branching and bit manipulation, which are very costly in terms of processor cycles. Written in C with a few assembly language modules, the implementation on a DSP of this new version of the Said-Pearlman algorithm provides a processing capability of 500 kdata/s (imaging), which is adequate for our applications. Compression ratios of at least 10 can be achieved with acceptable data quality.

  19. Fast compressed sensing analysis for super-resolution imaging using L1-homotopy.

    PubMed

    Babcock, Hazen P; Moffitt, Jeffrey R; Cao, Yunlong; Zhuang, Xiaowei

    2013-11-18

    In super-resolution imaging techniques based on single-molecule switching and localization, the time to acquire a super-resolution image is limited by the maximum density of fluorescent emitters that can be accurately localized per imaging frame. In order to increase the imaging rate, several methods have been recently developed to analyze images with higher emitter densities. One powerful approach uses methods based on compressed sensing to increase the analyzable emitter density per imaging frame by several-fold compared to other reported approaches. However, the computational cost of this approach, which uses interior point methods, is high, and analysis of a typical 40 µm x 40 µm field-of-view super-resolution movie requires thousands of hours on a high-end desktop personal computer. Here, we demonstrate an alternative compressed-sensing algorithm, L1-Homotopy (L1H), which can generate super-resolution image reconstructions that are essentially identical to those derived using interior point methods in one to two orders of magnitude less time depending on the emitter density. Moreover, for an experimental data set with varying emitter density, L1H analysis is ~300-fold faster than interior point methods. This drastic reduction in computational time should allow the compressed sensing approach to be routinely applied to super-resolution image analysis.
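
    To make the underlying optimisation problem concrete, the sketch below solves the same l1-regularised least-squares objective with plain ISTA; it is not the L1-Homotopy algorithm itself, only a compact illustration of the problem that both L1H and interior-point solvers address.

        import numpy as np

        def ista(A, b, lam=0.1, iterations=500):
            """Plain ISTA solver for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
            the sparse-recovery objective used in compressed-sensing analysis
            of high-density localization data."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iterations):
                grad = A.T @ (A @ x - b)
                z = x - grad / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
            return x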

  20. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for grayscale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values, but does not shuffle the pixel locations. After down-sampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section and is recovered using local statistics of the image. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and less computational complexity.

  1. Joint pattern recognition/data compression concept for ERTS multispectral imaging

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1975-01-01

    This paper describes a new technique which jointly applies clustering and source encoding concepts to obtain data compression. The cluster compression technique basically uses clustering to extract features from the measurement data set which are used to describe characteristics of the entire data set. In addition, the features may be used to approximate each individual measurement vector by forming a sequence of scalar numbers which define each measurement vector in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. A description of a practical cluster compression algorithm is given and experimental results are presented to show trade-offs and characteristics of various implementations. Examples are provided which demonstrate the application of cluster compression to multispectral image data of the Earth Resources Technology Satellite.
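
    A toy version of the encoder side can be written with a small k-means loop: the centroid spectra play the role of the cluster features and the per-pixel label map is the feature map that a source coder would then represent efficiently. The cluster count and iteration settings below are illustrative.

        import numpy as np

        def cluster_compress(cube, k=8, iters=10, seed=0):
            """Toy cluster-compression encoder: pixels of a (rows, cols, bands) cube
            are clustered; the output is the k centroid spectra ("cluster features")
            plus a per-pixel label map ("feature map") for subsequent source coding."""
            rows, cols, bands = cube.shape
            data = cube.reshape(-1, bands).astype(float)
            rng = np.random.default_rng(seed)
            centroids = data[rng.choice(len(data), size=k, replace=False)]
            for _ in range(iters):
                d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                for j in range(k):
                    members = data[labels == j]
                    if len(members):
                        centroids[j] = members.mean(axis=0)
            return centroids, labels.reshape(rows, cols)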

  2. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications (which require multiple images, e.g. facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). In the literature, several encryption and compression techniques can be found. However, to use optical correlation, encryption and compression techniques cannot simply be deployed independently and in a cascaded manner without considering the impact of one technique on the other; moreover, a standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists in multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys and random phase keys. A random phase key is widely used in optical encryption approaches. Finally, many simulations have been conducted, and the results obtained corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.

  3. Automatic measurement of compression wood cell attributes in fluorescence microscopy images.

    PubMed

    Selig, B; Luengo Hendriks, C L; Bardage, S; Daniel, G; Borgefors, G

    2012-06-01

    This paper presents a new automated method for analyzing compression wood fibers in fluorescence microscopy. Abnormal wood known as compression wood is present in almost every softwood tree harvested. Compression wood fibers show a different cell wall morphology and chemistry compared to normal wood fibers, and their mechanical and physical characteristics are considered detrimental for both construction wood and pulp and paper purposes. Currently there is a need for improved methodologies for the characterization of lignin distribution in wood cell walls, such as in compression wood fibers, that will allow for a better understanding of fiber mechanical properties. Traditionally, analysis of fluorescence microscopy images of fiber cross-sections has been done manually, which is time consuming and subjective. Here, we present an automatic method, using digital image analysis, that detects and delineates softwood fibers in fluorescence microscopy images, dividing them into cell lumen, normal and highly lignified areas. It also quantifies the different areas, as well as measures cell wall thickness. The method is evaluated by comparing the automatic with a manual delineation. While the boundaries between the various fiber wall regions are detected by the automatic method with precision similar to inter- and intra-expert variability, the position of the boundary between the lumen and the cell wall has a systematic shift that can be corrected. Our method allows for transverse structural characterization of compression wood fibers, which may allow for improved understanding of the micro-mechanical modeling of wood and pulp fibers.

  4. Tissue cartography: compressing bio-image data by dimensional reduction.

    PubMed

    Heemskerk, Idse; Streichan, Sebastian J

    2015-12-01

    The high volumes of data produced by state-of-the-art optical microscopes encumber research. We developed a method that reduces data size and processing time by orders of magnitude while disentangling signal by taking advantage of the laminar structure of many biological specimens. Our Image Surface Analysis Environment automatically constructs an atlas of 2D images for arbitrarily shaped, dynamic and possibly multilayered surfaces of interest. Built-in correction for cartographic distortion ensures that no information on the surface is lost, making the method suitable for quantitative analysis. We applied our approach to 4D imaging of a range of samples, including a Drosophila melanogaster embryo and a Danio rerio beating heart.

  5. ASFNR recommendations for clinical performance of MR dynamic susceptibility contrast perfusion imaging of the brain.

    PubMed

    Welker, K; Boxerman, J; Kalnin, A; Kaufmann, T; Shiroishi, M; Wintermark, M

    2015-06-01

    MR perfusion imaging is becoming an increasingly common means of evaluating a variety of cerebral pathologies, including tumors and ischemia. In particular, there has been great interest in the use of MR perfusion imaging for both assessing brain tumor grade and for monitoring for tumor recurrence in previously treated patients. Of the various techniques devised for evaluating cerebral perfusion imaging, the dynamic susceptibility contrast method has been employed most widely among clinical MR imaging practitioners. However, when implementing DSC MR perfusion imaging in a contemporary radiology practice, a neuroradiologist is confronted with a large number of decisions. These include choices surrounding appropriate patient selection, scan-acquisition parameters, data-postprocessing methods, image interpretation, and reporting. Throughout the imaging literature, there is conflicting advice on these issues. In an effort to provide guidance to neuroradiologists struggling to implement DSC perfusion imaging in their MR imaging practice, the Clinical Practice Committee of the American Society of Functional Neuroradiology has provided the following recommendations. This guidance is based on review of the literature coupled with the practice experience of the authors. While the ASFNR acknowledges that alternate means of carrying out DSC perfusion imaging may yield clinically acceptable results, the following recommendations should provide a framework for achieving routine success in this complicated-but-rewarding aspect of neuroradiology MR imaging practice.

  6. Consensus recommendations for a standardized Brain Tumor Imaging Protocol in clinical trials

    PubMed Central

    Ellingson, Benjamin M.; Bendszus, Martin; Boxerman, Jerrold; Barboriak, Daniel; Erickson, Bradley J.; Smits, Marion; Nelson, Sarah J.; Gerstner, Elizabeth; Alexander, Brian; Goldmacher, Gregory; Wick, Wolfgang; Vogelbaum, Michael; Weller, Michael; Galanis, Evanthia; Kalpathy-Cramer, Jayashree; Shankar, Lalitha; Jacobs, Paula; Pope, Whitney B.; Yang, Dewen; Chung, Caroline; Knopp, Michael V.; Cha, Soonme; van den Bent, Martin J.; Chang, Susan; Al Yung, W.K.; Cloughesy, Timothy F.; Wen, Patrick Y.; Gilbert, Mark R.

    2015-01-01

    A recent joint meeting was held on January 30, 2014, with the US Food and Drug Administration (FDA), National Cancer Institute (NCI), clinical scientists, imaging experts, pharmaceutical and biotech companies, clinical trials cooperative groups, and patient advocate groups to discuss imaging endpoints for clinical trials in glioblastoma. This workshop developed a set of priorities and action items including the creation of a standardized MRI protocol for multicenter studies. The current document outlines consensus recommendations for a standardized Brain Tumor Imaging Protocol (BTIP), along with the scientific and practical justifications for these recommendations, resulting from a series of discussions between various experts involved in aspects of neuro-oncology neuroimaging for clinical trials. The minimum recommended sequences include: (i) parameter-matched precontrast and postcontrast inversion recovery-prepared, isotropic 3D T1-weighted gradient-recalled echo; (ii) axial 2D T2-weighted turbo spin-echo acquired after contrast injection and before postcontrast 3D T1-weighted images to control timing of images after contrast administration; (iii) precontrast, axial 2D T2-weighted fluid-attenuated inversion recovery; and (iv) precontrast, axial 2D, 3-directional diffusion-weighted images. Recommended ranges of sequence parameters are provided for both 1.5 T and 3 T MR systems. PMID:26250565

  7. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part, techniques of band exclusion (BE) and local optimization (LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  8. Volume and tissue composition preserving deformation of breast CT images to simulate breast compression in mammographic imaging

    NASA Astrophysics Data System (ADS)

    Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2009-02-01

    Images of mastectomy breast specimens have been acquired with a benchtop experimental cone-beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and then resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio. These deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest wall-to-nipple direction while shrinking it in the medial-to-lateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.

  9. Low complexity DCT engine for image and video compression

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Ouerhani, Yousri; Alfalou, Ayman

    2013-02-01

    In this paper, we define a low-complexity 2D-DCT architecture that transforms spatial-domain pixels into spectral coefficients while taking into account the constraints of the considered compression standard. Indeed, this work is our first attempt to obtain one reconfigurable multistandard DCT. Thanks to a new matrix decomposition, we define one common 2D-DCT architecture whose constant multipliers can be configured to handle the RealDCT and/or the IntDCT (multiplication by 2). The optimized algorithm not only reduces computational complexity but also leads to a scalable pipelined design in systolic arrays. Indeed, the 8 × 8 StdDCT can be computed using the 4 × 4 StdDCT, which in turn can be obtained from the 2 × 2 StdDCT. Moreover, the proposed structure can be extended to larger values of N (i.e., 16 × 16 and 32 × 32). The performance of the proposed architecture is better than that of conventional designs. In particular, for N = 4, the proposed design has nearly one third of the area-time complexity of existing DCT structures. This gain is expected to be higher for larger 2D-DCT sizes.
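
    The recursive row-column structure described above rests on the fact that a 2D DCT is separable into 1D transforms. The sketch below is a generic NumPy illustration of that separable computation, not the authors' reconfigurable hardware architecture; the helper name dct_matrix and the 8 × 8 test block are our own choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    """Separable 2D DCT: 1D transform along rows, then along columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# Example: transform an 8 x 8 block of pixel values.
block = np.arange(64, dtype=float).reshape(8, 8)
print(np.round(dct2(block), 2))
```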

  10. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

    PubMed

    Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong

    2016-01-01

    Compressed sensing has shown great capacity for accelerating magnetic resonance imaging when an image can be sparsely represented. How the image is sparsified strongly affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The reconstruction is cast as an l1-norm regularized problem and solved with an alternating-direction minimization with continuation algorithm. The experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
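
    As a point of reference for the l1-regularized reconstruction step, the sketch below solves the same kind of problem with plain iterative soft-thresholding on a toy image that is sparse in the pixel domain. The paper instead uses a graph-based redundant wavelet transform as the sparsifying operator and a different solver; the mask density, regularization weight, and iteration count here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" that is sparse directly in the pixel domain.
n = 64
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 30), rng.integers(0, n, 30)] = rng.normal(size=30)

# Undersampled Fourier measurements (random k-space mask, ~30% of samples).
mask = rng.random((n, n)) < 0.3
y = mask * np.fft.fft2(x_true, norm="ortho")

def soft(z, t):
    """Soft-thresholding operator for the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA: x <- soft(x + A^H (y - A x), lam); A = mask * FFT has operator norm <= 1.
x = np.zeros((n, n))
lam = 0.01
for _ in range(200):
    resid = y - mask * np.fft.fft2(x, norm="ortho")
    x = soft(x + np.real(np.fft.ifft2(mask * resid, norm="ortho")), lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```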

  11. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography.

    PubMed

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  12. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography.

    PubMed

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V

    2015-10-27

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium.

  13. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    NASA Astrophysics Data System (ADS)

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-10-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium.

  14. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    PubMed Central

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  15. An LCD driver with on-chip frame buffer and 3 times image compression

    NASA Astrophysics Data System (ADS)

    Sung, Star; Baudia, Jacques

    2008-01-01

    An LCD driver with an on-chip frame buffer and a 3× image compression codec achieving visually lossless image quality is presented. The frame buffer compression codec can encode and decode up to eight pixels in one clock cycle. Integrating a whole frame buffer with RGB=888 bits into the display driver sharply reduces the power dissipated between the IO pads and the PCB board, at the cost of a 50% increase in IC die area. The existing working chip (STE2102, a RAM-less LCD driver with die size of 170mm x 12mm) is manufactured in an ST Micro 0.18 μm high-voltage CMOS process. A new chip design with on-chip frame buffer SRAM and a 3× compression codec supporting QVGA (320x240) has been completed; it reduces the frame buffer SRAM density and area by a factor of ~3.0 and cuts the power consumption of the on-chip SRAM frame buffer by ~9.0×, of which a factor of 3 comes from the reduced capacitive bit-line load and another factor of 3 from the data-rate reduction due to image compression. The compression codec, with 25K gates in the encoder and 10K in the decoder, accepts both YUV and RGB color formats. An on-chip color-space-conversion unit converts the decompressed YUV components in 4:2:0, 4:2:2 and 4:4:4 formats to RGB before they are driven out for display. The high image quality is achieved by applying patented proprietary compression algorithms, including accurate prediction in DPCM, Golomb-Rice-like VLC coding with an accurate predictive divider, and intelligent bit-rate distribution control.
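
    To make the DPCM plus Golomb-Rice coding pattern mentioned above concrete, here is a minimal software sketch of that general idea. It is purely illustrative and unrelated to the chip's patented predictor or rate control; the sample scan line and the Rice parameter k are made up.

```python
def zigzag(v):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * v if v >= 0 else -2 * v - 1

def golomb_rice_encode(values, k):
    """Encode non-negative integers with Rice parameter k:
    quotient in unary, remainder in k bits. Returns a bit string."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

# Simple DPCM: predict each pixel from its left neighbor along a scan line.
row = [120, 122, 121, 119, 119, 125, 130, 131]
residuals = [row[0]] + [cur - prev for prev, cur in zip(row, row[1:])]
mapped = [zigzag(v) for v in residuals[1:]]   # skip the raw first pixel
print(residuals, mapped, golomb_rice_encode(mapped, k=2), sep="\n")
```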

  16. OARSI Clinical Trials Recommendations: Hand imaging in clinical trials in osteoarthritis.

    PubMed

    Hunter, D J; Arden, N; Cicuttini, F; Crema, M D; Dardzinski, B; Duryea, J; Guermazi, A; Haugen, I K; Kloppenburg, M; Maheu, E; Miller, C G; Martel-Pelletier, J; Ochoa-Albíztegui, R E; Pelletier, J-P; Peterfy, C; Roemer, F; Gold, G E

    2015-05-01

    Tremendous advances have occurred in our understanding of the pathogenesis of hand osteoarthritis (OA) and these are beginning to be applied to trials targeted at modification of the disease course. The purpose of this expert opinion, consensus driven exercise is to provide detail on how one might use and apply hand imaging assessments in disease modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, sequences artifacts); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, validity); recommendations for trials; and research recommendations. PMID:25952345

  17. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  18. A new anti-forensic scheme--hiding the single JPEG compression trace for digital image.

    PubMed

    Cao, Yanjun; Gao, Tiegang; Sheng, Guorui; Fan, Li; Gao, Lin

    2015-01-01

    To prevent image forgeries, a number of forensic techniques for digital images have been developed that can detect an image's origin, trace its processing history, and locate the position of tampering. In particular, the statistical footprint left by the JPEG compression operation can be a valuable source of information for the forensic analyst, and several image forensic algorithms have been proposed based on image statistics in the DCT domain. Recently, it has been shown that these footprints can be removed by adding a suitable anti-forensic dithering signal to the image in the DCT domain, which invalidates some image forensic algorithms. In this paper, a novel anti-forensic algorithm is proposed that is capable of concealing the quantization artifacts left in a singly JPEG-compressed image. In the scheme, a chaos-based dither is added to the image's DCT coefficients to remove such artifacts. The effectiveness of the scheme and the resulting loss of image quality are evaluated through experiments. The simulation results show that the proposed anti-forensic scheme can be used to test the reliability of JPEG forensic tools.

  19. DSP accelerator for the wavelet compression/decompression of high- resolution images

    SciTech Connect

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  20. Application of Compressed Sensing to 2-D Ultrasonic Propagation Imaging System data

    SciTech Connect

    Mascarenas, David D.; Farrar, Charles R.; Chong, See Yenn; Lee, J.R.; Park, Gyu Hae; Flynn, Eric B.

    2012-06-29

    The Ultrasonic Propagation Imaging (UPI) System is a unique, non-contact, laser-based ultrasonic excitation and measurement system developed for structural health monitoring applications. The UPI system imparts laser-induced ultrasonic excitations at user-defined locations on a structure of interest. The response of these excitations is then measured by piezoelectric transducers. By using appropriate data reconstruction techniques, a time-evolving image of the response can be generated. A representative measurement of a plate might contain 800x800 spatial data measurement locations and each measurement location might be sampled at 500 instances in time. The result is a total of 640,000 measurement locations and 320,000,000 unique measurements. This is clearly a very large set of data to collect, store in memory and process. The value of these ultrasonic response images for structural health monitoring applications makes tackling these challenges worthwhile. Recently compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that by directly collecting a relatively small number of coefficients it is possible to reconstruct the original measurement. The coefficients are obtained from linear combinations of (what would have been the original direct) measurements. Often compressed sensing research is simulated by generating compressed coefficients from conventionally collected measurements. The simulation approach is necessary because the direct collection of compressed coefficients often requires compressed sensing analog front-ends that are currently not commercially available. The ability of the UPI system to make measurements at user-defined locations presents a unique capability on which compressed measurement techniques may be directly applied. The application of compressed sensing techniques on this data holds the potential to
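
    The compressed-coefficient idea described above, collecting a few linear combinations of what would otherwise be direct measurements and then reconstructing the signal, can be sketched with a generic sparse-recovery toy example. This is not the UPI system's acquisition chain; the signal length, measurement count, and sparsity level below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse time trace (e.g., a few ultrasonic wave arrivals) of length n,
# observed through m << n random linear combinations.
n, m, k = 500, 120, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # compressive measurement matrix
y = Phi @ x                                   # compressed coefficients

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit by least squares."""
    support, resid = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```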

  1. Efficient patch-based approach for compressive depth imaging.

    PubMed

    Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David; Carin, Lawrence

    2016-09-20

    We present efficient camera hardware and algorithms to capture images with extended depth of field. The camera moves its focal plane via a liquid lens and modulates the scene at different focal planes by shifting a fixed binary mask, with synchronization achieved by using the same triangular wave to control the focal plane and the piezoelectric translator that shifts the mask. Efficient algorithms are developed to reconstruct the all-in-focus image and the depth map from a single coded exposure, and various sparsity priors are investigated to enhance the reconstruction, including group sparsity, tree structure, and dictionary learning. The algorithms naturally admit a parallel computational structure due to the independent patch-level operations. Experimental results on both simulation and real datasets demonstrate the efficacy of the new hardware and the inversion algorithms. PMID:27661583

  2. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  3. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  4. Compression of binary images on a hypercube machine

    SciTech Connect

    Scheuermann, P.; Yaagoub, A. . Electrical Engineering and Computer Science); Ouksel, M.A. . IDS Dept.)

    1994-10-01

    The S-tree linear representation is an efficient structure for representing binary images which requires three bits for each disjoint binary region. The authors present parallel algorithms for encoding and decoding the S-tree representation from/onto a binary pixel array in a hypercube-connected machine. Both the encoding and the decoding algorithms make use of a condensation procedure in order to produce the final result cooperatively. The encoding algorithm conceptually uses a pyramid configuration, where in each iteration half of the processors in the grid below it remain active. The decoding algorithm is based on the observation that each processor can independently decode a given binary region if it contains in its memory an S-tree segment augmented with a linear prefix. They analyze the algorithms in terms of processing and communication time and present results of experiments performed with real and randomly generated images that verify the theoretical results.

  5. Psychophysical evaluation of the effect of JPEG, full-frame discrete cosine transform (DCT) and wavelet image compression on signal detection in medical image noise

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Morioka, Craig A.; Whiting, James S.; Eigler, Neal L.

    1995-04-01

    Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of subjective criteria such as image sharpness or blockiness, or measured by arbitrary measures such as the mean square error between the uncompressed and compressed image. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance in identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios, respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
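
    For readers unfamiliar with the d' index used above, the snippet below computes it from hit and false-alarm rates in the standard yes/no signal-detection formulation. The rates are invented for illustration, and the study's location-identification task uses a slightly different computation.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Detectability index from signal detection theory: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates for an uncompressed and a heavily compressed condition.
d_uncompressed = d_prime(0.90, 0.10)   # roughly 2.56
d_compressed = d_prime(0.78, 0.18)     # roughly 1.69
print(d_uncompressed, d_compressed)
print("relative drop:", 1 - d_compressed / d_uncompressed)
```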

  6. Prospective acceleration of diffusion tensor imaging with compressed sensing using adaptive dictionaries

    PubMed Central

    McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente

    2015-01-01

    Purpose Diffusion MRI requires acquisition of multiple diffusion‐weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion We developed a new k‐space sampling strategy for acquiring prospectively undersampled diffusion‐weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k‐space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248–258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363

  7. Toward prediction of hyperspectral target detection performance after lossy image compression

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason R.; Vongsy, Karmon M.; Dill, Jeffrey C.

    2016-05-01

    Hyperspectral imagery (HSI) offers numerous advantages over traditional sensing modalities with its high spectral content that allows for classification, anomaly detection, target discrimination, and change detection. However, this imaging modality produces a huge amount of data, which requires transmission, processing, and storage resources; hyperspectral compression is a viable solution to these challenges. It is well known that lossy compression of hyperspectral imagery can impact hyperspectral target detection. Here we examine lossy compressed hyperspectral imagery from data-centric and target-centric perspectives. The compression ratio (CR), root mean square error (RMSE), the signal to noise ratio (SNR), and the correlation coefficient are computed directly from the imagery and provide insight to how the imagery has been affected by the lossy compression process. With targets present in the imagery, we perform target detection with the spectral angle mapper (SAM) and adaptive coherence estimator (ACE) and evaluate the change in target detection performance by examining receiver operating characteristic (ROC) curves and the target signal-to-clutter ratio (SCR). Finally, we observe relationships between the data- and target-centric metrics for selected visible/near-infrared to shortwave infrared (VNIR/SWIR) HSI data, targets, and backgrounds that motivate potential prediction of change in target detection performance as a function of compression ratio.

  8. Toward prediction of hyperspectral target detection performance after lossy image compression

    NASA Astrophysics Data System (ADS)

    Kaufman, Jason R.; Vongsy, Karmon M.; Dill, Jeffrey C.

    2016-05-01

    Hyperspectral imagery (HSI) offers numerous advantages over traditional sensing modalities with its high spectral content that allows for classification, anomaly detection, target discrimination, and change detection. However, this imaging modality produces a huge amount of data, which requires transmission, processing, and storage resources; hyperspectral compression is a viable solution to these challenges. It is well known that lossy compression of hyperspectral imagery can impact hyperspectral target detection. Here we examine lossy compressed hyperspectral imagery from data-centric and target-centric perspectives. The compression ratio (CR), root mean square error (RMSE), the signal to noise ratio (SNR), and the correlation coefficient are computed directly from the imagery and provide insight to how the imagery has been affected by the lossy compression process. With targets present in the imagery, we perform target detection with the spectral angle mapper (SAM) and adaptive coherence estimator (ACE) and evaluate the change in target detection performance by examining receiver operating characteristic (ROC) curves and the target signal-to-clutter ratio (SCR). Finally, we observe relationships between the data- and target-centric metrics for selected visible/near-infrared to shortwave infrared (VNIR/SWIR) HSI data, targets, and backgrounds that motivate potential prediction of change in target detection performance as a function of compression ratio.

  9. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    SciTech Connect

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from operating on a single hyperspectral frame individually to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  10. Imaging evidence and recommendations for traumatic brain injury: conventional neuroimaging techniques.

    PubMed

    Wintermark, Max; Sanelli, Pina C; Anzai, Yoshimi; Tsiouris, A John; Whitlow, Christopher T

    2015-02-01

    Imaging plays an essential role in identifying intracranial injury in patients with traumatic brain injury (TBI). The goals of imaging include (1) detecting injuries that may require immediate surgical or procedural intervention, (2) detecting injuries that may benefit from early medical therapy or vigilant neurologic supervision, and (3) determining the prognosis of patients to tailor rehabilitative therapy or help with family counseling and discharge planning. In this article, the authors perform a review of the evidence on the utility of various imaging techniques in patients presenting with TBI to provide guidance for evidence-based, clinical imaging protocols. The intent of this article is to suggest practical imaging recommendations for patients presenting with TBI across different practice settings and to simultaneously provide the rationale and background evidence supporting their use. These recommendations should ultimately assist referring physicians faced with the task of ordering appropriate imaging tests in particular patients with TBI for whom they are providing care. These recommendations should also help radiologists advise their clinical colleagues on appropriate imaging utilization for patients with TBI.

  11. Supporting image algebra in the Matlab programming language for compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Wilson, Joseph N.; Hayden, Eric T.

    2009-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision programs. The University of Florida has been associated with implementations supporting the languages FORTRAN, Ada, Lisp, and C++. The latter involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, this new implementation offers exciting possibilities for supporting a large group of users. The control over an object's computational resources that Matlab provides to the algorithm designer means that the image algebra Matlab (IAM) library can employ versatile representations for the operands and operations of the algebra. In this paper, we first outline the purpose and structure of image algebra, then present IAM notation in relation to the preceding (IAC++) implementation. We then provide examples to show how IAM is more convenient and more readily supports efficient algorithm development. Additionally, we show how image algebra and IAM can be employed in compression algorithm development and analysis.

  12. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    PubMed

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely binocular combination and binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency can be reached between the measured MOS and the proposed metrics, with a correlation coefficient of up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  13. Comparative color space analysis of difference images from adjacent visible human slices for lossless compression

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Pipkin, Ryan; Mitra, Sunanda

    1997-10-01

    This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study, the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes and alternative differencing methods. YIQ and HSV, also known as HSI, both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However, the compression ratio did not improve with the transformation from RGB into the other color plane models, since, in order to maintain lossless performance, YIQ and HSV both require more bits to store each pixel. This increase in file size is not offset by the increase in compression due to the higher correlation of the intensity values, and the best performance was achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from a single reference. The best method proved to be the first iteration of the cascaded difference from a single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated. Then the difference between the neighbor and its next neighbor is calculated. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than those of the competing methods.
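
    The gain from inter-slice differencing can be illustrated with a zeroth-order entropy estimate. The sketch below uses two synthetic "adjacent slices" (pure toy data, not the Visible Human set) and compares the entropy of a slice coded directly against the entropy of its difference from the previous slice.

```python
import numpy as np

def entropy_bits_per_sample(arr):
    """Zeroth-order entropy of an integer-valued array, in bits per sample."""
    _, counts = np.unique(arr, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)

# Two hypothetical adjacent RGB slices: the second is the first plus small changes.
slice_a = rng.integers(0, 256, size=(128, 128, 3), dtype=np.int16)
slice_b = np.clip(slice_a + rng.integers(-3, 4, size=slice_a.shape), 0, 255)

direct = entropy_bits_per_sample(slice_b)                 # coding the slice directly
difference = entropy_bits_per_sample(slice_b - slice_a)   # coding the inter-slice difference
print(f"direct: {direct:.2f} bits/sample   difference: {difference:.2f} bits/sample")
```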

  14. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, currently a hot research topic in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as large data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and can still reconstruct the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are compressively measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual results and the numerical results of the experiments indicate that the presented approach achieves much higher fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.

  15. A Statistical Model for Quantized AC Block DCT Coefficients in JPEG Compression and its Application to Detecting Potential Compression History in Bitmap Images

    NASA Astrophysics Data System (ADS)

    Narayanan, Gopal; Shi, Yun Qing

    We first develop a probability mass function (PMF) for quantized block discrete cosine transform (DCT) coefficients in JPEG compression using statistical analysis of quantization, with a Generalized Gaussian model being considered as the PDF for non-quantized block DCT coefficients. We subsequently propose a novel method to detect potential JPEG compression history in bitmap images using the PMF that has been developed. We show that this method outperforms a classical approach to compression history detection in terms of effectiveness. We also show that it detects history with both independent JPEG group (IJG) and custom quantization tables.

  16. Kronecker compressive sensing-based mechanism with fully independent sampling dimensions for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Rongqiang; Wang, Qiang; Shen, Yi

    2015-11-01

    We propose a new approach for Kronecker compressive sensing of hyperspectral (HS) images, including the imaging mechanism and the corresponding reconstruction method. The proposed mechanism is able to compress the data along all dimensions during sampling, which can be achieved with three fully independent sampling devices. As a result, the mechanism greatly reduces the number of control points and the memory requirement. In addition, we can select suitable sparsifying bases and generate the corresponding optimized sensing matrices, or change the distribution of the sampling ratio for each dimension independently according to different HS images. To complement the mechanism, we combine the sparsity model and the low multilinear-rank model to develop a reconstruction method. Analysis shows that our reconstruction method has lower computational complexity than traditional methods based on the sparsity model. Simulations verify that HS images can be reconstructed successfully from very few measurements. In summary, the proposed approach reduces the complexity and improves the practicability of HS image compressive sensing.
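
    The Kronecker structure means the three sampling operators act independently, one per dimension, and the full sensing matrix never has to be formed explicitly. A minimal NumPy sketch of that mode-wise sampling follows; the cube and measurement sizes are arbitrary values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hyperspectral cube: rows x columns x spectral bands.
nx, ny, nb = 32, 32, 24
cube = rng.normal(size=(nx, ny, nb))

# Independent sampling matrices for each dimension (the overall operator is
# Ax kron Ay kron Ab, but it is applied mode by mode, never formed explicitly).
mx, my, mb = 16, 16, 12
Ax = rng.normal(size=(mx, nx)) / np.sqrt(mx)
Ay = rng.normal(size=(my, ny)) / np.sqrt(my)
Ab = rng.normal(size=(mb, nb)) / np.sqrt(mb)

# Apply each matrix along its own mode (mode-n products).
y = np.einsum('ia,jb,kc,abc->ijk', Ax, Ay, Ab, cube)
print("cube:", cube.shape, "-> measurements:", y.shape,
      "compression:", cube.size / y.size)
```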

  17. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images, and error diffusion is one of the important factors that degrade its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  18. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf viewers that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with a minimum bit rate. This work deals with ROI-based video compression of a Czech sign language interpreter implemented in the x264 open-source library. The results of this approach are verified in subjective tests with deaf participants. These tests examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the interpreter image, and evaluate the subjective quality of the final image for a good viewing experience.

  19. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.

  20. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered-region localization for image forensics. We propose an algorithm that can locate the tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region have different responses to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered-region localization. Post-processing is applied to obtain the final localization result. The experimental results demonstrate the effectiveness of the proposed method.

  1. Comparison of information-preserving and information-losing data-compression algorithms for CT images.

    PubMed

    Bramble, J M

    1989-02-01

    Data compression increases the number of images that can be stored on magnetic disks or tape and reduces the time required for transmission of images between stations. Two algorithms for data compression are compared in application to computed tomographic (CT) images. The first, an information-preserving algorithm combining differential and Huffman encoding, allows reconstruction of the original image. A second algorithm alters the image in a clinically acceptable manner. This second algorithm combines two processes: the suppression of data outside of the head or body and the combination of differential and Huffman encoding. Because the final image is not an exact copy, the second algorithm is information losing. Application of the information-preserving algorithm can double or triple the number of CT images that can be stored on hard disk or magnetic tape. This algorithm may also double or triple the speed with which images may be transmitted. The information-losing algorithm can increase storage or transmission speed by a factor of five. The computation time on this system is excessive, but dedicated hardware is available to allow efficient implementation.

  2. High-resolution MRI of spinal cords by compressive sensing parallel imaging.

    PubMed

    Peng Li; Xiangdong Yu; Griffin, Jay; Levine, Jonathan M; Jim Ji

    2015-08-01

    Spinal cord injury (SCI) is a common injury due to disease or accidents. Noninvasive imaging methods play a critical role in diagnosing SCI and monitoring the response to therapy. Magnetic resonance imaging (MRI), by virtue of providing excellent soft tissue contrast, is the most promising imaging method for this application. However, the spinal cord has a very small cross-section, which requires high-resolution images for better visualization and diagnosis. Acquiring high-resolution spinal cord MRI images requires long acquisition times due to physical and physiological constraints. Moreover, long acquisition times make MRI more susceptible to motion artifacts. In this paper, we studied the application of compressive sensing (CS) and parallel imaging to achieve high-resolution imaging from sparsely sampled and reduced k-space data acquired by parallel receive arrays. In particular, the studies are limited to the effects of 2D Cartesian sampling with different subsampling schemes and reduction factors. The results show that compressive sensing parallel MRI has the potential to provide high-resolution images of the spinal cord in one third of the acquisition time required by conventional methods.

  3. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    SciTech Connect

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-12-15

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information for predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) at one of several compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed with the DICOM header information as independent variables and the pooled radiologists' responses as the dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by receiver-operating-characteristic (ROC) analysis, with the radiologists' pooled responses as the reference standard, and by measuring the Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0

  4. Effect of noise and MTF on the compressibility of high-resolution color images

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Barry, Michael J.; Mathieu, Michael S.

    1990-06-01

    There are an increasing number of digital image processing systems that employ photographic image capture; that is, a color photographic negative or transparency is digitally scanned, compressed, and stored or transmitted for further use. To capture the information content that a photographic color negative is capable of delivering, it must be scanned at a pixel resolution of at least 50 pixels/mm. This type of high-quality imagery presents certain problems and opportunities in image coding that are not present in lower-resolution systems. First, photographic granularity increases the entropy of a scanned negative, limiting the extent to which entropy encoding can compress the scanned record. Second, any MTF-related chemical enhancement that is incorporated into a film tends to reduce the pixel-to-pixel correlation that most compression schemes attempt to exploit. This study examines the effect of noise and MTF on the compressibility of scanned photographic images by establishing experimental information-theoretic bounds. Images used for this study were corrupted with noise via a computer model of photographic grain and an MTF model of blur and chemical edge enhancement. The measured bounds are expressed in terms of the entropy of a variety of decomposed image records (e.g., DPCM predictor error) for a zeroth-order Markov-based entropy encoder and for a context model used by the Q-coder. The results show that the entropy of the DPCM predictor error is 3-5 bits/pixel, illustrating a 2 bits/pixel difference between an ideal grain-free case and a grainy film case. This suggests that an ideal noise-filtering algorithm could lower the bit rate by as much as 50%.
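
    The effect described above, where grain noise inflates the entropy of the DPCM predictor error, is easy to reproduce on synthetic data. The sketch below uses a toy smooth image plus Gaussian "grain" (not the photographic grain model used in the study) and estimates the zeroth-order entropy of the horizontal prediction residual with and without noise.

```python
import numpy as np

def dpcm_residual_entropy(img):
    """Zeroth-order entropy (bits/pixel) of the horizontal DPCM predictor error."""
    resid = np.diff(img.astype(np.int32), axis=1).ravel()
    _, counts = np.unique(resid, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)

# Smooth synthetic "grain-free" image vs. the same image with additive grain noise.
x = np.linspace(0, 4 * np.pi, 512)
clean = np.round(96 + 64 * np.outer(np.sin(x), np.cos(x))).astype(np.int32)
grainy = clean + rng.normal(0, 6, clean.shape).round().astype(np.int32)

print("clean :", dpcm_residual_entropy(clean), "bits/pixel")
print("grainy:", dpcm_residual_entropy(grainy), "bits/pixel")
```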

  5. Reference free quality metric using a region-based attention model for JPEG-2000 compressed images

    NASA Astrophysics Data System (ADS)

    Barland, Remi; Saadane, Abdelhakim

    2006-01-01

    At high compression ratios, current lossy compression algorithms introduce distortions that are generally exploited by no-reference quality assessment. For JPEG-2000 compressed images, the blurring and ringing effects are the principal impairments for a human observer. However, the human visual system does not carry out a systematic, local search for these impairments over the whole image; rather, it identifies some regions of interest for judging perceptual quality. In this paper, we propose to use both of these distortions (ringing and blurring effects), locally weighted by an importance map generated by a region-based attention model, to design a new reference-free quality metric for JPEG-2000 compressed images. For the blurring effect, the impairment measure depends on spatial information contained in the whole image, while for the ringing effect, only local information around strong edges is used. To predict the regions in the scene that potentially attract human attention, one stage of the proposed metric generates an importance map from a region-based attention model defined by Osberger et al. [1]. First, explicit regions are obtained by color image segmentation. The segmented image is then analyzed with respect to different factors known to influence human attention. The resulting importance map is finally used to locally weight each distortion measure. The predicted scores have been compared, on the one hand, to the subjective scores and, on the other hand, to previous results based only on artifact measurement. This comparative study demonstrates the efficiency of the proposed quality metric.

  6. Near-infrared compressive line sensing imaging system using individually addressable laser diode array

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Sue; Britton, Walter

    2015-05-01

    The compressive line sensing (CLS) active imaging system was proposed and validated through a series of test-tank experiments. As an energy-efficient alternative to traditional line-scan serial imaging, the CLS system will be highly beneficial for long-duration surveillance missions using unmanned, power-constrained platforms such as unmanned aerial or underwater vehicles. In this paper, the application of an active spatial light modulator (SLM), the individually addressable laser diode array, in a CLS imaging system is investigated. In the CLS context, active SLM technology can be advantageous over passive SLMs such as the digital micro-mirror device. Initial experimental results are discussed.

  7. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M. ); Hopper, T. )

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
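
    The pipeline sketched below, a one-level Haar split followed by uniform scalar quantization of each subband, is only a schematic stand-in for the chain described above: the actual WSQ standard uses a 64-subband biorthogonal decomposition, carefully designed quantizer bin widths, and Huffman coding of the resulting indices. The step sizes and the random test image here are invented for illustration.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar wavelet split into LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def scalar_quantize(band, step):
    """Uniform scalar quantization: indices to be entropy coded (e.g., Huffman)."""
    return np.round(band / step).astype(np.int32)

rng = np.random.default_rng(5)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in for a fingerprint scan
subbands = haar2d(image)
indices = [scalar_quantize(b, step) for b, step in zip(subbands, (4, 8, 8, 12))]
print([idx.shape for idx in indices])
```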

  8. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  9. Real-time dispersion-compensated image reconstruction for compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-09-01

    In this work, we propose a novel dispersion compensation method that enables real-time compressive sensing (CS) spectral domain optical coherence tomography (SD OCT) image reconstruction. We show that dispersion compensation can be incorporated into CS SD OCT by multiplying the undersampled spectral data by dispersion-correcting terms before CS reconstruction. High-quality SD OCT imaging with dispersion compensation was demonstrated at a speed in excess of 70 frames per second using 40% of the spectral measurements required by the well-known Shannon/Nyquist theory. The data processing and image display were performed on a conventional workstation with three graphics processing units.
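
    A generic numerical sketch of this idea, assuming the second- and third-order dispersion coefficients `a2` and `a3` are known from calibration (the paper's GPU pipeline and CS solver are not shown):

    ```python
    import numpy as np

    def dispersion_correct(spectrum, k, k0, a2, a3):
        """Multiply (under)sampled SD OCT spectral data by a dispersion-correcting
        phase term before CS reconstruction. Generic sketch, not the paper's code."""
        phase = np.exp(-1j * (a2 * (k - k0) ** 2 + a3 * (k - k0) ** 3))
        return spectrum * phase

    # The corrected spectrum is then passed to the CS reconstruction; an inverse
    # FFT of the fully reconstructed spectrum yields the depth profile (A-line).
    ```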

  10. An infrared image super-resolution reconstruction method based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear indistinct. Their spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation, and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with approaches that learn a redundant dictionary from training samples, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.
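
    The sparse recovery step relies on orthogonal matching pursuit; a compact, generic OMP routine is sketched below (the paper's measurement matrix and difference-based sparsifying transform are not reproduced):

    ```python
    import numpy as np

    def omp(A, y, n_nonzero, tol=1e-8):
        """Generic orthogonal matching pursuit: recover sparse x with y ≈ A @ x."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(n_nonzero):
            # Pick the column most correlated with the current residual.
            idx = int(np.argmax(np.abs(A.T @ residual)))
            if idx not in support:
                support.append(idx)
            # Least-squares fit on the current support, then update the residual.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
            if np.linalg.norm(residual) < tol:
                break
        x[support] = coeffs
        return x
    ```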

  11. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  12. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates correlated spatial and spectral redundancy with a DWT lifting scheme and reduces image complexity with an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, built on an enumerating DWT lifting scheme that, through image renormalization, fits images of any size. The algorithm codes and decodes the pixels of an image without backtracking, supports LOCO-I, and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method achieves high lossless compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT, and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder runs about 21 times faster than SPIHT with an efficiency gain of roughly 166%, and the decoder runs about 17 times faster than SPIHT with an efficiency gain of roughly 128%.
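
    The abstract does not spell out the algebraic RGB transform; as an illustration of lossless inter-component decorrelation, the sketch below uses the reversible color transform of lossless JPEG 2000 as a stand-in (not necessarily the authors' transform):

    ```python
    import numpy as np

    def rct_forward(r, g, b):
        """Reversible integer color transform (as in lossless JPEG 2000):
        decorrelates RGB into one luma-like and two chroma-like components."""
        y = (r + 2 * g + b) // 4
        return y, b - g, r - g

    def rct_inverse(y, u, v):
        g = y - (u + v) // 4
        return v + g, g, u + g          # r, g, b

    # Round-trip check on random 8-bit data (integer arithmetic keeps it lossless).
    rgb = np.random.randint(0, 256, size=(3, 4, 4), dtype=np.int64)
    assert all((a == b).all() for a, b in zip(rgb, rct_inverse(*rct_forward(*rgb))))
    ```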

  13. Multiple-image encryption based on compressive holography using a multiple-beam interferometer

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Wu, Fan; Yang, Jinghuan; Man, Tianlong

    2015-05-01

    Multiple-image encryption techniques not only improve the encryption capacity but also facilitate the transmission and storage of the ciphertext. We present a new method of multiple-image encryption based on compressive holography, with enhanced data security, using a multiple-beam interferometer. By modifying a Mach-Zehnder interferometer, multiple object beams are interfered with a single reference beam so that multiple images are encrypted simultaneously into one hologram. The original images, modulated by random phase masks, are placed at different positions and at different distances from the CCD camera. Each image acts as a secret key for the other images, so the images mutually encrypt one another. A four-step phase-shifting technique is combined with the holographic recording. The recording is treated as a compressive sensing process; the decryption is therefore posed as a minimization problem, and the two-step iterative shrinkage/thresholding algorithm (TwIST) is employed to solve it. Simulation results for the encryption of multiple binary and grayscale images verify the validity and robustness of the proposed method.

  14. High dynamic range compression and detail enhancement of infrared images in the gradient domain

    NASA Astrophysics Data System (ADS)

    Zhang, Feifei; Xie, Wei; Ma, Guorui; Qin, Qianqing

    2014-11-01

    To find a trade-off between providing an accurate perception of the global scene and improving the visibility of details without excessively distorting radiometric infrared information, a novel gradient-domain-based visualization method for high dynamic range infrared images is proposed in this study. The proposed method adopts an energy function that includes a data constraint term and a gradient constraint term. In the data constraint term, the classical histogram projection method is used to perform the initial dynamic range compression to obtain the desired pixel values and preserve the global contrast. In the gradient constraint term, the moment matching method is adopted to obtain the normalized image; a gradient gain factor function is then designed to adjust the magnitudes of the normalized image gradients and obtain the desired gradient field. Lastly, the low dynamic range image is solved from the proposed energy function. The final image is obtained by linearly mapping the low dynamic range image to the 8-bit display range. The effectiveness and robustness of the proposed method are analyzed using infrared images obtained under different operating conditions. Compared with other well-established methods, our method performs strongly in terms of dynamic range compression while enhancing details and avoiding common artifacts such as halos, gradient reversal, haze, or saturation.
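
    A generic energy functional consistent with the description above (the paper's exact weighting and gain-factor function are not given in the abstract, so this is only a sketch of the form):

    ```latex
    % u: low dynamic range output, d: histogram-projected target values,
    % g: gain-adjusted gradient field, \lambda: balance between the two terms.
    E(u) \;=\; \int_{\Omega} \bigl(u - d\bigr)^{2}\, \mathrm{d}x
          \;+\; \lambda \int_{\Omega} \bigl\lVert \nabla u - g \bigr\rVert^{2}\, \mathrm{d}x,
    \qquad\text{with Euler--Lagrange equation}\quad
    u - \lambda\, \Delta u \;=\; d - \lambda\, \operatorname{div} g .
    ```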

  15. Study on the application of embedded zero-tree wavelet algorithm in still images compression

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Lu, Yanhe; Li, Taifu; Lei, Gang

    2005-12-01

    Through the wavelet transform, an image gains directional selectivity at high frequencies, which is consistent with the visual characteristics of human eyes. The most important of these characteristics is the visual covering effect. The embedded zero-tree wavelet (EZW) coding method codes a whole image at the same level, so important regions (regions of interest) and background regions (regions of indifference) are coded with the same fidelity. Building on a study of this visual covering effect, this paper employs a region-of-interest image compression method, an embedded zero-tree wavelet algorithm with regions of interest (EZW-ROI), to encode the regions of interest and the regions of non-interest separately. In this way, much less important image information is lost, channel resources and memory space are used more fully, and the image quality in the regions of interest is improved. An experimental study showed that an image reconstructed with the EZW-ROI algorithm has better visual quality than one reconstructed with EZW at high compression ratios.

  16. Effects of Time-Compressed Narration and Representational Adjunct Images on Cued-Recall, Content Recognition, and Learner Satisfaction

    ERIC Educational Resources Information Center

    Ritzhaupt, Albert Dieter; Barron, Ann

    2008-01-01

    The purpose of this study was to investigate the effect of time-compressed narration and representational adjunct images on a learner's ability to recall and recognize information. The experiment was a 4 Audio Speeds (1.0 = normal vs. 1.5 = moderate vs. 2.0 = fast vs. 2.5 = fastest rate) x Adjunct Image (Image Present vs. Image Absent) factorial…

  17. A mosaic approach for unmanned airship remote sensing images based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Yang, Jilian; Zhang, Aiwu; Sun, Weidong

    2011-12-01

    The recently emerged compressive sensing (CS) theory challenges the Nyquist-Shannon (NS) sampling theory and shows that signals can be recovered from far fewer samples than the NS sampling theorem requires. In this paper, to solve the problems in the image fusion step of a full-scene image mosaic built from multiple images acquired by a low-altitude unmanned airship, a novel information mutual complement (IMC) model based on CS theory is proposed. The IMC model rests on a concept similar to the joint sparsity models (JSMs) of distributed compressive sensing (DCS) theory, but its measurement matrix is rearranged so that the multiple images can be reconstructed as one combination. Experimental results obtained with the BP and TSW-CS algorithms under the IMC model confirm the effectiveness and adaptability of the proposed approach, and demonstrate that it is possible to substantially reduce the measurement rates of the signal ensemble while maintaining good performance in the compressive domain.

  18. Shock Compression Induced Hot Spots in Energetic Material Detected by Thermal Imaging Microscopy

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Wei; Dlott, Dana

    2014-06-01

    The chemical reaction of powdered energetic materials is of great interest for energy and pyrotechnic applications because of the high reaction temperatures involved. Under shock compression, the chemical reaction occurs on the sub-microsecond to microsecond time scale and releases a large amount of energy. Experimental and theoretical progress has been made over the past decade in characterizing this process, but knowledge of the energy release and temperature change remains limited by the difficulty of the required detection technologies. We have constructed a thermal imaging microscopy apparatus and studied the temperature change in energetic materials under long-wavelength infrared (LWIR) and ultrasound exposure. The apparatus is also capable of real-time detection of localized heating and energy concentration in composite materials. Recently, it was combined with our laser-driven flyer plate system to provide a laboratory-scale source of shock compression for energetic materials. A rapid temperature increase of thermite particles induced by shock compression is directly observed by thermal imaging with 15-20 μm spatial resolution. The temperature rise during shock loading is evaluated to be on the order of 10^9 K/s through direct measurement of the change in mid-wavelength infrared (MWIR) emission intensity. We observe preliminary evidence that hot spots appear under shock compression of energetic crystals, and will discuss the data and analysis in further detail. M.-W. Chen, S. You, K. S. Suslick, and D. D. Dlott, Rev. Sci. Instrum. 85, 023705 (2014); M.-W. Chen, S. You, K. S. Suslick, and D. D. Dlott, Appl. Phys. Lett. 104, 061907 (2014); K. E. Brown, W. L. Shaw, X. Zheng, and D. D. Dlott, Rev. Sci. Instrum. 83, 103901 (2012).

  19. Experimental study of a DMD based compressive line sensing imaging system in the turbulence environment

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.

    2016-05-01

    The Compressive Line Sensing (CLS) active imaging system has been demonstrated to be effective in scattering media, such as turbid coastal water, through simulations and test-tank experiments. Since turbulence is encountered in many atmospheric and underwater surveillance applications, a new CLS imaging prototype was developed to investigate the effectiveness of the CLS concept in a turbulence environment. Compared with the earlier optical bench-top prototype, the new system is significantly more robust and compact. A series of experiments was conducted at the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. In addition to validating the system design, we obtained some unexpected and exciting results: in the strong turbulence environment, time-averaged measurements with the new CLS imaging prototype improved both the SNR and the resolution of the reconstructed images. We discuss the implications of these findings, the challenges of acquiring data through a strongly turbulent environment, and future enhancements.

  20. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain; it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. According to that optimal solution, each mean value embeds three secret bits, giving a high hiding capacity with low distortion. The experimental results indicate that the proposed scheme achieves both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
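
    As an illustration of the embedding step only (the dynamic-programming search for the optimal bijective mapping is omitted), a minimal LSB-substitution routine that hides three secret bits in a BTC mean value could look like this:

    ```python
    def embed_bits(mean_value, secret_bits):
        """Replace the 3 LSBs of an 8-bit BTC mean value with 3 secret bits.
        Illustrative only; the paper additionally remaps values through an
        optimal bijective mapping found by dynamic programming."""
        assert len(secret_bits) == 3 and all(b in (0, 1) for b in secret_bits)
        payload = (secret_bits[0] << 2) | (secret_bits[1] << 1) | secret_bits[2]
        return (mean_value & ~0b111) | payload

    def extract_bits(stego_value):
        """Recover the 3 embedded bits from a stego mean value."""
        payload = stego_value & 0b111
        return [(payload >> 2) & 1, (payload >> 1) & 1, payload & 1]

    assert extract_bits(embed_bits(0b10110101, [1, 0, 1])) == [1, 0, 1]
    ```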

  1. Coherent source imaging and dynamic support tracking for inverse scattering using compressive MUSIC

    NASA Astrophysics Data System (ADS)

    Lee, Okkyun; Kim, Jong Min; Yoo, Jaejoon; Jin, Kyunghwan; Ye, Jong Chul

    2011-09-01

    The goal of this paper is to develop novel algorithms for inverse scattering problems such as EEG/MEG, microwave imaging, and diffuse optical tomography. One of the main contributions is a class of novel, non-iterative, exact nonlinear inverse scattering methods for coherent source imaging and moving targets. Specifically, the new algorithms guarantee exact recovery under a very relaxed constraint on the number of sources and receivers, under which conventional methods fail. This breakthrough was possible thanks to the recent theory of compressive MUSIC and its extension using a support correction criterion, in which partial support is estimated using conventional compressed sensing approaches and the remaining support is then estimated using a novel generalized MUSIC criterion. Numerical results using coherent sources in EEG/MEG and dynamic targets confirm that the new algorithms outperform the conventional ones.

  2. Assessing mesoscale material response under shock & isentropic compression via high-resolution line-imaging VISAR.

    SciTech Connect

    Hall, Clint Allen; Furnish, Michael David; Podsednik, Jason W.; Reinhart, William Dodd; Trott, Wayne Merle; Mason, Joshua

    2003-10-01

    Of special promise for providing dynamic mesoscale response data is the line-imaging VISAR, an instrument that yields spatially resolved velocity histories in dynamic experiments. We have prepared two line-imaging VISAR systems capable of spatial resolution in the 10-20 micron range, at the Z and STAR facilities. We have applied this instrument to selected experiments on a compressed gas gun, chosen to provide initial data for several problems of interest, including: (1) pore collapse in copper (two variations: 70 micron diameter hole in single-crystal copper) and (2) the response of a welded joint in dissimilar materials (Ta, Nb) to ramp loading relative to that of a compression joint. The instrument is capable of resolving details such as the volume and collapse history of a collapsing isolated pore.

  3. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    Multiresolution systems are popular for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be exploited within a single entity for compression, and linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system based on the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ that allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
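
    A minimal sketch of a Laplacian-style pyramid whose lowpass stage is a median filter, as described above (the downsampling factor and filter size are assumptions, and the PCVQ coding stage is not shown):

    ```python
    import numpy as np
    from scipy import ndimage

    def median_pyramid(img, levels=3, size=3):
        """Build detail images using a median filter as the edge-preserving
        lowpass operator (illustrative parameters)."""
        details, current = [], img.astype(float)
        for _ in range(levels):
            low = ndimage.median_filter(current, size=size)
            down = low[::2, ::2]                      # decimate by 2
            up = np.kron(down, np.ones((2, 2)))       # crude upsampling back
            up = up[: current.shape[0], : current.shape[1]]
            details.append(current - up)              # detail (Laplacian) image
            current = down
        return details, current                       # detail images + coarse residual
    ```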

  4. Balanced Sparse Model for Tight Frames in Compressed Sensing Magnetic Resonance Imaging

    PubMed Central

    Liu, Yunsong; Cai, Jian-Feng; Zhan, Zhifang; Guo, Di; Ye, Jing; Chen, Zhong; Qu, Xiaobo

    2015-01-01

    Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this technology, magnetic resonance images are usually reconstructed by enforcing sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model recovers the solutions of all three models. The balanced model is found to perform comparably to the analysis model, and both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and alternating direction method of multipliers for the balanced model (ADMM-B). PMID:25849209
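
    For concreteness, a commonly used form of the balanced-model objective for tight-frame CS-MRI, consistent with the description above (notation and parameter placement are assumptions; the paper may differ in details), is:

    ```latex
    % y: undersampled k-space data, F_u: undersampled Fourier operator,
    % \Psi: tight-frame analysis operator (\Psi^{*}\Psi = I), \alpha: frame coefficients,
    % \lambda: sparsity weight, \beta: balancing parameter bridging analysis and synthesis.
    \min_{\alpha}\; \tfrac{1}{2}\bigl\lVert y - F_{u}\Psi^{*}\alpha \bigr\rVert_{2}^{2}
      \;+\; \lambda \lVert \alpha \rVert_{1}
      \;+\; \tfrac{\beta}{2}\bigl\lVert (I - \Psi\Psi^{*})\,\alpha \bigr\rVert_{2}^{2}
    ```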

  5. Giant coronary artery aneurysm mimicking a compressive cardiac tumor: Imaging features and operative strategy.

    PubMed

    Grandmougin, Daniel; Croisille, Pierre; Robin, Christophe; Péoc'h, Michel; Barral, Xavier

    2005-01-01

    Giant atheromatous coronary aneurysms mimicking a cardiac tumor remain exceptional. We report the case of a patient who experienced a severe inferior myocardial infarction related to a giant thrombosed coronary aneurysm masquerading as a cardiac tumor and compressing the right cardiac cavities, with detrimental mechanical consequences on tricuspid, mitral, and aortic valvular competence. Imaging was essential for establishing the diagnosis, understanding the pathophysiology of the myocardial and valvular consequences, and planning the optimal surgical strategy. PMID:16168902

  6. Compressive sensing in reflectance confocal microscopy of skin images: a preliminary comparative study

    NASA Astrophysics Data System (ADS)

    Arias, Fernando X.; Sierra, Heidy; Rajadhyaksha, Milind; Arzuaga, Emmanuel

    2016-03-01

    Compressive Sensing (CS)-based technologies have shown potential to improve the efficiency of acquisition, manipulation, analysis, and storage of signals and imagery with little discernible loss in performance. The CS framework relies on reconstructing signals that are presumed sparse in some domain from a significantly small collection of linear projections of the signal of interest. As a result, a solution to the underdetermined linear system resulting from this paradigm makes it possible to estimate the original signal with high accuracy. One common approach to solving the linear system is based on methods that minimize the L1-norm, and several fast algorithms have been developed for this purpose. This paper presents a study on the use of CS in high-resolution reflectance confocal microscopy (RCM) images of the skin. RCM offers a cell-level resolution similar to that used in histology to identify cellular patterns for the diagnosis of skin diseases. However, imaging large areas (required for effective clinical evaluation) at such high resolution can turn image capture, processing, and storage into a time-consuming procedure, which may limit use in clinical settings. We present an analysis of the compression ratio that may allow a simpler capture approach while reconstructing the cellular resolution required for clinical use. We provide a comparative study of compressive sensing and estimate its effectiveness in terms of compression ratio versus image reconstruction accuracy. Preliminary results show that by using as little as 25% of the original number of samples, cellular resolution may be reconstructed with high accuracy.
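
    The reconstruction described above rests on L1-norm minimization of an underdetermined system; a minimal iterative soft-thresholding (ISTA) solver is sketched below as a generic stand-in (the specific fast algorithms used in the study are not named in the abstract):

    ```python
    import numpy as np

    def ista(A, y, lam=0.1, n_iter=200):
        """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1
        (generic sketch of an L1-minimization solver)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x
    ```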

  7. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper, a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency-domain information.

  8. The moderate resolution imaging spectrometer: An EOS facility instrument candidate for application of data compression methods

    NASA Technical Reports Server (NTRS)

    Salomonson, Vincent V.

    1991-01-01

    The Moderate Resolution Imaging Spectrometer (MODIS) observing facility will operate on the Earth Observing System (EOS) in the late 1990's. It is estimated that this observing facility will produce over 200 gigabytes of data per day requiring a storage capability of just over 300 gigabytes per day. Archiving, browsing, and distributing the data associated with MODIS represents a rich opportunity for testing and applying both lossless and lossy data compression methods.

  9. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
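
    A rough sketch of the relaxation-map construction described above, assuming simple intensity thresholds for the air / soft-tissue / bone classification and an exponential falloff with distance (the thresholds, scaling, and the way the map enters the CS update are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import ndimage

    def relaxation_map(prior, current, air_thr=-400.0, bone_thr=300.0, scale=10.0):
        """Label voxels as air / soft tissue / bone, locate mismatched regions
        between prior and current images, and convert mismatch into voxel-wise
        weights in [0, 1]: ~0 where anatomy matches (trust the prior),
        ~1 where it has changed (trust the new projections)."""
        def classify(img):
            labels = np.ones_like(img, dtype=np.int8)   # 1 = soft tissue
            labels[img < air_thr] = 0                   # 0 = air
            labels[img > bone_thr] = 2                  # 2 = bone
            return labels

        mismatch = classify(prior) != classify(current)
        # Distance (in voxels) from each voxel to the nearest mismatched region.
        dist = ndimage.distance_transform_edt(~mismatch)
        return np.exp(-dist / scale)
    ```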

  10. Compressive spectral imaging using multiple snapshot colored-mosaic detector measurements

    NASA Astrophysics Data System (ADS)

    Hinojosa, Carlos A.; Correa, Claudia V.; Arguello, Henry; Arce, Gonzalo R.

    2016-05-01

    Compressive spectral imaging (CSI) captures coded and dispersed projections of the spatio-spectral source rather than direct measurements of the voxels. Using the coded projections, an l1 minimization reconstruction algorithm is then used to reconstruct the underlying scene. An architecture known as the snapshot colored compressive spectral imager (SCCSI) exploits the compression capabilities of CSI techniques and efficiently senses a spectral image using a single snapshot by means of a colored mosaic FPA detector and a dispersive element. In CSI, different coding patterns are used to acquire multiple snapshots, yielding improved reconstructions of spatially detailed and spectrally rich scenes. SCCSI, however, does not admit multiple coding patterns since the pixelated tiling of optical filters is directly attached to the detector. This paper extends the concept of SCCSI to a system admitting multiple measurement shots by rotating the dispersive element such that the dispersed spatio-spectral source is coded and integrated at different detector pixels in each rotation. This approach allows the acquisition of a different set of coded projections on each measurement shot. Simulations show that increasing the number of measurement snapshots results in improved reconstructions. More specifically, a gain of up to 7 dB is obtained when results from four measurement shots are compared to the reconstruction from a single SCCSI snapshot.

  11. Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression

    NASA Astrophysics Data System (ADS)

    Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin

    1994-04-01

    The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.

  12. A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi

    2015-01-01

    In this work, we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies can lower the coherence of the acquisition, producing reduced aliasing artifacts and achieving better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the smoothing typical of CS due to the limited sampling of high-frequency locations. We demonstrated by means of simulations that the proposed acquisition strategies outperformed the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and greater achievable acceleration.
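
    A toy sketch of the first strategy (random switching of the frequency-encoding direction with one of the two phase-encoding directions) on a cubic k-space grid; variable-density weighting and hardware constraints are not modeled:

    ```python
    import numpy as np

    def random_switch_mask(n=64, n_readouts=800, seed=0):
        """Build a 3D k-space sampling mask in which, for each readout line, the
        fully sampled (frequency-encoding) axis is chosen at random between two
        axes and the remaining coordinates are drawn at random (illustrative)."""
        rng = np.random.default_rng(seed)
        mask = np.zeros((n, n, n), dtype=bool)
        for _ in range(n_readouts):
            axis = rng.integers(0, 2)           # 0 or 1: which axis is the readout
            a, b = rng.integers(0, n, size=2)   # fixed positions on the other axes
            if axis == 0:
                mask[:, a, b] = True            # readout along kx
            else:
                mask[a, :, b] = True            # readout along ky
        return mask

    print(f"sampled fraction: {random_switch_mask().mean():.3f}")
    ```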

  13. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    SciTech Connect

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  14. Comparison of wavelet scalar quantization and JPEG for fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Kidd, Robert C.

    1995-01-01

    An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.

  15. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    SciTech Connect

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei; Naqa, Issam El; Low, Daniel A.

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log
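
    A small sketch of the log-Jacobian computation described above, for a dense displacement field on a regular voxel grid (unit voxel spacing is assumed; the registration algorithm itself is not reproduced):

    ```python
    import numpy as np

    def log_jacobian(disp):
        """disp: displacement field of shape (3, Z, Y, X) in voxel units.
        Returns the voxel-wise log of the Jacobian determinant of x -> x + disp(x);
        positive values indicate local expansion, negative values local compression."""
        ndim = disp.shape[0]
        jac = np.zeros(disp.shape[1:] + (ndim, ndim))
        for i in range(ndim):
            grads = np.gradient(disp[i])        # derivatives along z, y, x
            for j in range(ndim):
                jac[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
        det = np.linalg.det(jac)
        return np.log(np.clip(det, 1e-6, None))  # clip to avoid log of non-positive values

    # The per-slice average of this map can then be compared with spirometry flow.
    ```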

  16. Screening Magnetic Resonance Imaging Recommendations and Outcomes in Patients at High Risk for Breast Cancer

    PubMed Central

    Ehsani, Sima; Strigel, Roberta M; Pettke, Erica; Wilke, Lee; Tevaarwerk, Amye J; DeMartini, Wendy; Wisinski, Kari B

    2014-01-01

    Objective The purpose of this study was to determine MRI screening recommendations and the subsequent outcomes in women with increased risk for breast cancer evaluated by oncology subspecialists at an academic center. Patients and Methods Patients evaluated between 1/1/2007 and 3/1/2011 under diagnosis codes for family history of breast or ovarian cancer, genetic syndromes, lobular carcinoma in situ, or atypical hyperplasia were included. Patients with a history of breast cancer were excluded. Retrospective review of prospectively acquired demographics, lifetime risk of breast cancer, and screening recommendations was obtained from the medical record. The results of prospectively interpreted breast imaging examinations and image-guided biopsies were retrospectively reviewed and analyzed. Results 282 women were included. The majority of patients were premenopausal, with a median age of 43. Most (69%) were referred due to a family history of breast or ovarian cancer. MRI was recommended for 84% of patients based on a documented lifetime risk > 20%. Most women referred for MRI screening (88%) were compliant with this recommendation. A total of 299 breast MRI examinations were performed in 146 patients. Biopsy was performed for 32 (11%) exams, and 10 cancers were detected, for a PPV of 31% (based on biopsies performed) and an overall per-exam cancer yield of 3.3%. Three cancers were detected in patients who did not undergo screening MRI. The 13 cancers were Stage 0-II; all patients were without evidence of disease with a median follow-up of 22 months. Conclusion In a cohort of women seen by breast subspecialty providers, screening breast MRI was recommended according to guidelines and used primarily in premenopausal women with a family history or genetic predisposition to breast cancer. Adherence to MRI screening recommendations was high, and the cancer yield from breast MRI was similar to that in clinical trials. PMID:25789917

  17. Ultrashort echo time (UTE) imaging using gradient pre-equalization and compressed sensing

    NASA Astrophysics Data System (ADS)

    Fabich, Hilary T.; Benning, Martin; Sederman, Andrew J.; Holland, Daniel J.

    2014-08-01

    Ultrashort echo time (UTE) imaging is a well-known technique in medical MRI; however, implementation of the sequence remains non-trivial. This paper introduces UTE for non-medical applications and outlines a method for implementing UTE to enable accurate slice selection and short acquisition times. Slice selection in UTE requires fast, accurate switching of the gradient and r.f. pulses. Here a gradient “pre-equalization” technique is used to optimize the gradient switching and achieve an effective echo time of 10 μs. In order to minimize the echo time, k-space is sampled radially. A compressed sensing approach is used to minimize the total acquisition time. Using the corrections for slice selection and acquisition along with novel image reconstruction techniques, UTE is shown to be a viable method to study samples of cork and rubber with a shorter signal lifetime than can typically be measured. Further, the compressed sensing image reconstruction algorithm is shown to provide accurate images of the samples with as little as 12.5% of the full k-space data set, potentially permitting real-time imaging of short T2* materials.

  18. Medical physics personnel for medical imaging: requirements, conditions of involvement and staffing levels-French recommendations.

    PubMed

    Isambert, Aurélie; Le Du, Dominique; Valéro, Marc; Guilhem, Marie-Thérèse; Rousse, Carole; Dieudonné, Arnaud; Blanchard, Vincent; Pierrat, Noëlle; Salvat, Cécile

    2015-04-01

    The French regulations concerning the involvement of medical physicists in medical imaging procedures are relatively vague. In May 2013, the ASN and the SFPM issued recommendations regarding Medical Physics Personnel for Medical Imaging: Requirements, Conditions of Involvement and Staffing Levels. In these recommendations, the various areas of activity of medical physicists in radiology and nuclear medicine have been identified and described, and the time required to perform each task has been evaluated. Criteria for defining medical physics staffing levels are thus proposed. These criteria are defined according to the technical platform, the procedures and techniques practised on it, the number of patients treated and the number of persons in the medical and paramedical teams requiring periodic training. The result of this work is an aid available to each medical establishment to determine their own needs in terms of medical physics.

  19. Sequential Principal Component Analysis -An Optimal and Hardware-Implementable Transform for Image Compression

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture than state-of-the-art gradient-descent techniques. This algorithm is inherently amenable to a compact, low-power, and high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because of its fixed transform characteristics, regardless of the data structure. On the other hand, a conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement PCA in compact VLSI hardware, due to its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior as an optimal data-dependent transform over the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.

  20. Non-reference quality assessment of infrared images reconstructed by compressive sensing

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Benitez-Restrepo, H. D.

    2015-01-01

    Infrared (IR) images are representations of the world and have natural features like images in the visible spectrum. As such, natural features from infrared images support image quality assessment (IQA). In this work, we compare the quality of a set of indoor and outdoor IR images reconstructed from measurement functions formed by linear combinations of their pixels. The reconstruction methods are: linear discrete cosine transform (DCT) acquisition, DCT augmented with total variation minimization, and a compressive sensing scheme. Peak Signal to Noise Ratio (PSNR), three full-reference (FR), and four no-reference (NR) IQA measures compute the quality of each reconstruction: multi-scale structural similarity (MSSIM), visual information fidelity (VIF), information fidelity criterion (IFC), sharpness identification based on local phase coherence (LPC-SI), blind/referenceless image spatial quality evaluator (BRISQUE), naturalness image quality evaluator (NIQE), and gradient singular value decomposition (GSVD), respectively. Each measure is compared to human scores obtained by a differential mean opinion score (DMOS) test. We observe that GSVD has the highest correlation coefficients of all NR measures, but all FR measures perform better. We use MSSIM to compare the reconstruction methods and find that the CS scheme produces a good-quality IR image using only 30000 random sub-samples and 1000 DCT coefficients (2%). In contrast, linear DCT provides higher correlation coefficients than the CS scheme by using all the pixels of the image and 31000 DCT coefficients (47%).

  1. The Potential for Bayesian Compressive Sensing to Significantly Reduce Electron Dose in High Resolution STEM Images

    SciTech Connect

    Stevens, Andrew J.; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D.

    2014-02-11

    The use of high resolution imaging methods in the scanning transmission electron microscope (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example in the study of organic systems, in tomography and during in-situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high resolution STEM images. These experiments successively reduce the number of pixels in the image (thereby reducing the overall dose while maintaining the high resolution information) and show promising results for reconstructing images from this reduced set of randomly collected measurements. We show that this approach is valid for both atomic resolution images and nanometer resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these post acquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or alignment of the microscope itself.

  2. A simple method for estimating the fractal dimension from digital images: The compression dimension

    NASA Astrophysics Data System (ADS)

    Chamorro-Posada, Pedro

    2016-10-01

    The fractal structure of real world objects is often analyzed using digital images. In this context, the compression fractal dimension is put forward. It provides a simple method for the direct estimation of the dimension of fractals stored as digital image files. The computational scheme can be implemented using readily available free software. Its simplicity also makes it very interesting for introductory elaborations of basic concepts of fractal geometry, complexity, and information theory. A test of the computational scheme using limited-quality images of well-defined fractal sets obtained from the Internet and free software has been performed. Also, a systematic evaluation of the proposed method using computer generated images of the Weierstrass cosine function shows an accuracy comparable to those of the methods most commonly used to estimate the dimension of fractal data sequences applied to the same test problem.
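
    The abstract does not give the estimator's exact definition; one plausible reading, sketched below under that assumption, compresses the image at successively coarser scales with a general-purpose lossless compressor and fits the scaling exponent of the compressed size (a white-noise image should give a value near 2):

    ```python
    import zlib
    import numpy as np

    def compression_dimension(img, scales=(1, 2, 4, 8)):
        """Hypothetical estimator in the spirit of a 'compression dimension':
        losslessly compress the image at several coarsenings and fit the scaling
        exponent of the compressed size. Illustrative only; the paper's exact
        definition may differ."""
        sizes = []
        for s in scales:
            coarse = img[::s, ::s].astype(np.uint8)        # coarsen by subsampling
            sizes.append(len(zlib.compress(coarse.tobytes(), 9)))
        slope, _ = np.polyfit(np.log(scales), np.log(sizes), 1)
        return -slope    # compressed information shrinks with scale; negate the slope

    print(compression_dimension(np.random.randint(0, 256, size=(256, 256))))
    ```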

  3. Novel hybrid classified vector quantization using discrete cosine transform for image compression

    NASA Astrophysics Data System (ADS)

    Al-Fayadh, Ali; Hussain, Abir Jaafar; Lisboa, Paulo; Al-Jumeily, Dhiya

    2009-04-01

    We present a novel image compression technique using a classified vector quantizer and singular value decomposition for the efficient representation of still images. The proposed method is called hybrid classified vector quantization. It involves a simple but efficient classifier-based gradient method in the spatial domain, which employs only one threshold to determine the class of the input image block, and uses three AC discrete cosine transform coefficients to determine the orientation of the block without employing any threshold. The proposed technique is benchmarked against standard vector quantizers generated using the k-means algorithm, standard classified vector quantizer schemes, and JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct images of good visual quality with a higher peak signal-to-noise ratio than the benchmarked techniques, or competitive with them.

  4. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    proposed in this paper to improve coding efficiency in image and video compression. Extensive tests of videos, natural images, and psychological patterns show that the proposed PQFT model is more effective in saliency detection and can predict eye fixations better than other state-of-the-art models in previous literature. Moreover, our model requires low computational cost and, therefore, can work in real time. Additional experiments on image and video compression show that the HS-MWDF model can achieve higher compression rate than the traditional model. PMID:19709976

  5. Echo-power estimation from log-compressed video data in dynamic contrast-enhanced ultrasound imaging.

    PubMed

    Payen, Thomas; Coron, Alain; Lamuraglia, Michele; Le Guillou-Buffello, Delphine; Gaud, Emmanuel; Arditi, Marcel; Lucidarme, Olivier; Bridal, S Lori

    2013-10-01

    Ultrasound (US) scanners typically apply lossy, non-linear modifications to the US data for visualization purposes. The resulting images are then stored as compressed video data. Some system manufacturers provide dedicated software for quantification purposes to eliminate such processing distortions, at least partially. This is currently the recommended approach for quantitatively assessing changes in contrast-agent concentration from clinical data. However, the machine-specific access to US data and the limited set of analysis functionalities offered by each dedicated-software package make it difficult to perform comparable analyses with different US systems. The objective of this work was to establish if linearization of compressed video images obtained with an arbitrary US system can provide an alternative to dedicated-software analysis of machine-specific files for the estimation of echo-power. For this purpose, an Aplio 50 system (Toshiba Medical Systems, Tochigi, Japan), coupled with dedicated CHI-Q (Contrast Harmonic Imaging Quantification) software by Toshiba Medical Systems, was used. Results were compared with two approaches that apply algorithms to estimate relative echo-power from compressed video images: commercially available VueBox software by Bracco Suisse SA (Geneva, Switzerland) and in-laboratory software called PixPower. The echo-power estimated by CHI-Q analysis indicated a strong linear relationship versus agent concentration in vitro (R(2) ≥ 0.9996) for dynamic range (DR) settings of DR60 and DR80, with slopes between 9.22 and 9.57 dB/decade (p = 0.05). These values approach the theoretically predicted dependence of 10.0 dB/decade (equivalent to 3 dB for each concentration doubling). Echo-power estimations obtained from compressed video images with VueBox and PixPower also exhibited strong linear proportionality with concentration (R(2) ≥ 0.9996), with slopes between 9.30 and 9.68 dB/decade (p = 0.05). On an independent in vivo data set (N
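
    A minimal sketch of the linearization idea discussed above: map 8-bit pixel values of a log-compressed image back to decibels over an assumed display dynamic range, then to relative echo-power (the affine pixel-to-dB mapping and the dynamic-range value are assumptions; real scanners may apply further processing):

    ```python
    import numpy as np

    def linearize_echo_power(pixels, dynamic_range_db=60.0):
        """Convert 8-bit log-compressed pixel values to relative echo-power,
        assuming pixel value 255 maps to 0 dB and 0 maps to -dynamic_range_db."""
        level_db = pixels.astype(float) / 255.0 * dynamic_range_db - dynamic_range_db
        return 10.0 ** (level_db / 10.0)

    # Mean relative echo-power in a region of interest of one video frame:
    # roi_power = linearize_echo_power(frame[y0:y1, x0:x1]).mean()
    ```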

  7. Comparison of the lossy image data compressions for the MESUR Pathfinder and for the Huygens Titan Probe

    NASA Technical Reports Server (NTRS)

    Rueffer, P.; Rabe, F.; Gliem, F.; Keller, H.-U.

    1994-01-01

    The commercial JPEG standard complies well with the specific requirements of exploratory space missions. Therefore, JPEG has been chosen to be the baseline for a series of spaceborne image data compressions (e.g. MARS94-HRSC, -WAOSS, HUYGENS-DISR, MESUR-IMP). One S/W-implementation (IMP) and one H/W-implementation (DISR) of image data compression are presented. Details of the modifications applied to standard JPEG are outlined. Finally, a performance comparison of the two implementations is given.

  8. Chirp-pulse-compression three-dimensional lidar imager with fiber optics.

    PubMed

    Pearson, Guy N; Ridley, Kevin D; Willetts, David V

    2005-01-10

    A coherent three-dimensional (angle-angle-range) lidar imager using a master-oscillator-power-amplifier concept and operating at a wavelength of 1.5 μm with chirp-pulse compression is described. A fiber-optic delay line in the local oscillator path enables a single continuous-wave semiconductor laser source with a modulated drive waveform to generate both the constant-frequency local oscillator and the frequency chirp. A portion of this chirp is gated out and amplified by a two-stage fiber amplifier. The digitized return signal was compressed by cross correlating it with a sample of the outgoing pulse. In this way a 350-ns, 10-μJ pulse with a 250-MHz frequency sweep is compressed to a width of approximately 8 ns. With a 25-mm output aperture, the lidar has been used to produce three-dimensional images of hard targets out to a range of approximately 2 km with near-diffraction-limited angular resolution and submeter range resolution. PMID:15678779
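
    Pulse compression as described here is a cross-correlation of the digitized return with a replica of the transmitted chirp; the compressed width is set by the inverse of the swept bandwidth (1/250 MHz = 4 ns, broadened to roughly 8 ns by windowing and the system response). A minimal numerical sketch with an assumed sampling rate and target delay, purely for illustration:

      import numpy as np

      fs = 2e9                       # assumed 2 GS/s digitizer rate (illustrative)
      T, B = 350e-9, 250e6           # pulse length and frequency sweep from the abstract
      t = np.arange(0.0, T, 1.0 / fs)

      # Linear-frequency chirp replica of the outgoing pulse.
      chirp = np.cos(2.0 * np.pi * (B / (2.0 * T)) * t ** 2)

      # Simulated return: the chirp delayed by 1 microsecond plus detector noise.
      delay_samples = int(1e-6 * fs)
      rx = np.zeros(delay_samples + len(chirp) + 1000)
      rx[delay_samples:delay_samples + len(chirp)] += chirp
      rx += 0.5 * np.random.randn(len(rx))

      # Compression: cross-correlate the return with a sample of the outgoing pulse.
      compressed = np.correlate(rx, chirp, mode="valid")
      peak = int(np.argmax(np.abs(compressed)))
      print("recovered delay: %.1f ns" % (peak / fs * 1e9))   # close to 1000 ns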

  9. Chirp-pulse-compression three-dimensional lidar imager with fiber optics.

    PubMed

    Pearson, Guy N; Ridley, Kevin D; Willetts, David V

    2005-01-10

    A coherent three-dimensional (angle-angle-range) lidar imager using a master-oscillator-power-amplifier concept and operating at a wavelength of 1.5 μm with chirp-pulse compression is described. A fiber-optic delay line in the local oscillator path enables a single continuous-wave semiconductor laser source with a modulated drive waveform to generate both the constant-frequency local oscillator and the frequency chirp. A portion of this chirp is gated out and amplified by a two-stage fiber amplifier. The digitized return signal was compressed by cross correlating it with a sample of the outgoing pulse. In this way a 350-ns, 10-μJ pulse with a 250-MHz frequency sweep is compressed to a width of approximately 8 ns. With a 25-mm output aperture, the lidar has been used to produce three-dimensional images of hard targets out to a range of approximately 2 km with near-diffraction-limited angular resolution and submeter range resolution.

  10. Quantitative micro-elastography: imaging of tissue elasticity using compression optical coherence elastography

    PubMed Central

    Kennedy, Kelsey M.; Chin, Lixin; McLaughlin, Robert A.; Latham, Bruce; Saunders, Christobel M.; Sampson, David D.; Kennedy, Brendan F.

    2015-01-01

    Probing the mechanical properties of tissue on the microscale could aid in the identification of diseased tissues that are inadequately detected using palpation or current clinical imaging modalities, with potential to guide medical procedures such as the excision of breast tumours. Compression optical coherence elastography (OCE) maps tissue strain with microscale spatial resolution and can delineate microstructural features within breast tissues. However, without a measure of the locally applied stress, strain provides only a qualitative indication of mechanical properties. To overcome this limitation, we present quantitative micro-elastography, which combines compression OCE with a compliant stress sensor to image tissue elasticity. The sensor consists of a layer of translucent silicone with well-characterized stress-strain behaviour. The measured strain in the sensor is used to estimate the two-dimensional stress distribution applied to the sample surface. Elasticity is determined by dividing the stress by the strain in the sample. We show that quantification of elasticity can improve the ability of compression OCE to distinguish between tissues, thereby extending the potential for inter-sample comparison and longitudinal studies of tissue elasticity. We validate the technique using tissue-mimicking phantoms and demonstrate the ability to map elasticity of freshly excised malignant and benign human breast tissues. PMID:26503225
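
    The quantitative step described above reduces to dividing the locally applied stress, estimated from the strain of the pre-characterized compliant layer, by the strain measured in the sample. A minimal sketch of that division, assuming the layer's stress-strain behaviour is available as a lookup table; all names and numbers below are illustrative, not values from the paper:

      import numpy as np

      # Pre-characterized stress-strain curve of the compliant sensor layer
      # (illustrative values: stress in kPa versus dimensionless strain).
      layer_strain_pts = np.array([0.00, 0.05, 0.10, 0.20, 0.30])
      layer_stress_kpa = np.array([0.0, 1.2, 2.8, 7.5, 15.0])

      def elasticity_map(layer_strain, sample_strain):
          """Local elasticity (kPa) = local stress / local sample strain."""
          stress = np.interp(layer_strain, layer_strain_pts, layer_stress_kpa)
          return stress / np.maximum(sample_strain, 1e-6)   # guard against division by zero

      # Example: 8% strain measured in the layer, 2% strain in a stiff inclusion.
      print(elasticity_map(np.array([0.08]), np.array([0.02])))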

  11. Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation

    NASA Astrophysics Data System (ADS)

    Santos, Lucana; Magli, Enrico; Vitulli, Raffaele; Núñez, Antonio; López, José F.; Sarmiento, Roberto

    2013-01-01

    There is an intense necessity for the development of new hardware architectures for the implementation of algorithms for hyperspectral image compression on board satellites. Graphics processing units (GPUs) represent a very attractive opportunity, offering the possibility to dramatically increase the computation speed in applications that are data and task parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using Nvidia computer unified device architecture (CUDA) parallel computing architecture. The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing high speedups of up to 15.41. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for on-board processing is demonstrated, and solutions to the potential difficulties encountered when accelerating hyperspectral compression algorithms are proposed, if space-qualified GPUs become a reality in the near future.

  12. Hardware-specific image compression techniques for the animation of CFD data

    NASA Astrophysics Data System (ADS)

    Jones, Stephen C.; Moorhead, Robert J., II

    1992-06-01

    The visualization and animation of computational fluid dynamics (CFD) data is vital in understanding the varied parameters that exist in the solution field. Scientists need accurate and efficient visualization techniques. The animation of CFD data is not only computationally expensive but also expensive in the allocation of memory, both RAM and disk. Preserving animations of the CFD data visualizations is useful, since recreation of the animation is expensive when dealing with extremely large data structures. Researchers of CFD data may wish to follow a particle trace over an experimental fuselage design, but are unable to retain the animation for efficient retrieval without rendering or consuming a considerable amount of disk space. The spatial image resolution is reduced from 1280 × 1024 to 512 × 480 in going from the workstation format to a video format; saving these animations on disk is therefore desirable. Saving on disk allows the animation to maintain the spatial and intensity quality of the rendered image and allows the display of the animation at approximately 30 frames/sec, the standard video rate. The goal is to develop optimal image compression algorithms that allow visualization animations, captured as independent RGB images, to be recorded to tape or disk. If recorded to disk, the image sequence is compressed in non-realtime with a technique which allows subsequent decompression at approximately 30 frames/sec to simulate the temporal resolution of video. Initial compression is obtained through mapping RGB colors in each frame to a 12-bit colormap image. The colormap is animation sequence dependent and is created by histogramming the colors in the animation sequence and mapping those colors with relation to specific regions of the L*a*b* color coordinate system to take advantage of the uniform nature of the L*a*b* color system. Further compression is obtained by taking interframe differences, specifically comparing respective blocks between

  13. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
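
    For orientation, the prior image constrained compressed sensing (PICCS) framework on which spectral PICCS builds is commonly written as a constrained minimization of two sparsity terms; the notation below is a generic textbook form assumed for illustration, not quoted from this paper:

      \hat{x} = \arg\min_{x} \; \alpha \,\lVert \Psi (x - x_{\mathrm{prior}}) \rVert_1
                + (1 - \alpha)\,\lVert \Psi x \rVert_1
                \quad \text{subject to} \quad \lVert A x - y \rVert_2 \le \epsilon

    Here A is the forward projection operator for one energy bin, y the measured bin data, \Psi a sparsifying transform (total variation in the original PICCS work), x_prior the full-spectrum filtered back-projection image, and \alpha the weight balancing the prior-image and conventional sparsity penalties.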

  14. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43–73%) without sacrificing CT number accuracy or spatial resolution.

  15. Pseudo-random Center Placement O-space Imaging for Improved Incoherence Compressed Sensing Parallel MRI

    PubMed Central

    Tam, Leo K.; Galiana, Gigi; Stockmann, Jason P.; Tagare, Hemant; Peters, Dana C.; Constable, R. Todd

    2014-01-01

    Purpose: Nonlinear spatial encoding magnetic (SEM) field strategies such as O-space imaging have previously reported dispersed artifacts during accelerated scans. Compressed sensing (CS) has shown that a sparsity-promoting convex program allows image reconstruction from a reduced data set when using the appropriate sampling. The development of a pseudo-random center placement (CP) O-space CS approach optimizes incoherence through SEM field modulation to reconstruct an image with reduced error. Theory and Methods: The incoherence parameter determines the sparsity levels for which CS is valid, and the related transform point spread function measures the maximum interference for a single point. The O-space acquisition is optimized for CS by perturbing the Z2 strength within 30% of the nominal value and is demonstrated on a human 3T scanner. Results: Pseudo-random CP O-space imaging is shown to improve incoherence between the sensing and sparse domains. Images indicate pseudo-random CP O-space has reduced mean squared error compared with a typical linear SEM field acquisition method. Conclusion: Pseudo-random CP O-space imaging, with a nonlinear SEM field designed for CS, is shown to reduce mean squared error of images at high acceleration over linear encoding methods for a 2D slice when using an eight channel circumferential receiver array for parallel imaging. PMID:25042143

  16. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
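
    The two-factor idea can be illustrated in software by letting the secret information seed a pseudo-random measurement matrix, so that the compressive measurements themselves act as the ciphertext and only a holder of the seed can regenerate the matrix needed for reconstruction. A minimal sketch along those lines; the paper's optical implementation and its specific recovery algorithm are not reproduced, and names and sizes are illustrative:

      import numpy as np

      def cs_encrypt(image, secret_seed, m):
          """Compressive 'encryption': y = Phi(seed) @ x with a seeded random matrix."""
          x = image.ravel().astype(np.float64)
          rng = np.random.default_rng(secret_seed)
          phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
          return phi @ x

      def measurement_matrix(secret_seed, m, n):
          """Regenerate the same matrix from the secret seed at the server side."""
          rng = np.random.default_rng(secret_seed)
          return rng.standard_normal((m, n)) / np.sqrt(m)

      image = np.random.rand(16, 16)                 # stand-in for a finger-vein image
      y = cs_encrypt(image, secret_seed=1234, m=100)
      phi = measurement_matrix(1234, 100, image.size)
      # A real system would now run a sparse-recovery solver on (y, phi); without
      # the correct seed, phi (and hence the image) cannot be reconstructed, and
      # re-enrolment simply means choosing a new seed.
      print(y.shape, phi.shape)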

  17. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme. PMID:24513773

  18. Performance assessment of a single-pixel compressive sensing imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2016-05-01

    Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array that satisfies the Nyquist criteria was used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolution, and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques will be discussed.

  19. A New ADPCM Image Compression Algorithm and the Effect of Fixed-Pattern Sensor Noise

    NASA Astrophysics Data System (ADS)

    Sullivan, James R.

    1989-04-01

    High speed image compression algorithms that achieve visually lossless quality at low bit-rates are essential elements of many digital imaging systems. In examples such as remote sensing, there is often the additional requirement that the compression hardware be compact and consume minimal power. To meet these requirements a new adaptive differential pulse code modulation (ADPCM) algorithm was developed that significantly reduces edge errors by including quantizers that adapt to the local bias of the differential signal. In addition, to reduce the average bit-rate in certain applications a variable rate version of the algorithm called run adaptive differential coding (RADC) was developed that combines run-length and predictive coding and a variable number of levels in each quantizer to produce bit-rates comparable with adaptive discrete cosine transform (ADCT) at a visually lossless level of image quality. It will also be shown that this algorithm is relatively insensitive to fixed-pattern sensor noise and errors in sensor correction, making it possible to perform pixel correction on the decompressed image.

  20. Scampi: a robust approximate message-passing framework for compressive imaging

    NASA Astrophysics Data System (ADS)

    Barbier, Jean; Tramel, Eric W.; Krzakala, Florent

    2016-03-01

    Reconstruction of images from noisy linear measurements is a core problem in image processing, for which convex optimization methods based on total variation (TV) minimization have been the long-standing state-of-the-art. We present an alternative probabilistic reconstruction procedure based on approximate message-passing, Scampi, which operates in the compressive regime, where the inverse imaging problem is underdetermined. While the proposed method is related to the recently proposed GrAMPA algorithm of Borgerding, Schniter, and Rangan, we further develop the probabilistic approach to compressive imaging by introducing an expectation-maximization learning of model parameters, making Scampi robust to model uncertainties. Additionally, our numerical experiments indicate that Scampi can provide reconstruction performance superior to both GrAMPA and convex approaches to TV reconstruction. Finally, through exhaustive best-case experiments, we show that in many cases the maximal performance of both Scampi and convex TV can be quite close, even though the approaches are a priori distinct. The theoretical reasons for this correspondence remain an open question. Nevertheless, the proposed algorithm remains more practical, as it requires far less parameter tuning to perform optimally.

  1. Lossy cardiac x-ray image compression based on acquisition noise

    NASA Astrophysics Data System (ADS)

    de Bruijn, Frederik J.; Slump, Cornelis H.

    1997-05-01

    In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since, in reality, human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss, based on the characteristics of the acquisition system, in our case, a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the correlated Poisson-distributed quantum noise is transformed into an additional zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm which is based on the conventional JPEG-standard. However, the bit assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible, without apparent loss of clinical information.
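
    The transformation alluded to above, which turns signal-dependent Poisson quantum noise into an approximately signal-independent Gaussian contribution, is conventionally achieved with a variance-stabilizing transform such as the Anscombe transform; the stabilized noise variance can then drive a per-coefficient bit assignment. A minimal sketch of that general idea, not the authors' exact measure, and with a common rate-allocation heuristic assumed for illustration:

      import numpy as np

      def anscombe(x):
          """Variance-stabilizing transform: Poisson noise -> roughly unit-variance Gaussian."""
          return 2.0 * np.sqrt(np.asarray(x, dtype=np.float64) + 3.0 / 8.0)

      def bits_for_coefficient(signal_var, noise_var):
          """Spend bits only where the coefficient variance exceeds the noise floor."""
          snr = signal_var / max(noise_var, 1e-12)
          return max(0.0, 0.5 * np.log2(1.0 + snr))   # rate-distortion style allocation

      counts = np.random.poisson(lam=50.0, size=100000)   # simulated quantum noise
      print(np.var(anscombe(counts)))                     # close to 1, independent of lam
      print(bits_for_coefficient(4.0, 1.0))               # ~1.16 bits for an SNR of 4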

  2. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    NASA Astrophysics Data System (ADS)

    Díaz, Felipe

    2015-09-01

    Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio might also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing the complexity and memory requirements, but also for reducing noise, even allowing the elimination of spurious components. This article proposes the use of a low-order singular value decomposition system for reconstruction and noise reduction in MR imaging. The proposed method is evaluated using in vivo MRI data. Rebuilt images using less than 20% of the original data and with similar quality in terms of visual inspection are presented. A quantitative evaluation of the method is also presented.
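
    The core operation is a rank truncation of the singular value decomposition: keeping only the largest singular values both compresses the data and discards the noise carried mainly by the small ones. A minimal sketch on a synthetic low-rank array standing in for MR data; the sizes and noise level are assumptions for illustration:

      import numpy as np

      def svd_truncate(data, rank):
          """Keep the top-`rank` singular components (compression plus denoising)."""
          u, s, vt = np.linalg.svd(data, full_matrices=False)
          return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

      rng = np.random.default_rng(0)
      clean = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))  # rank-8 'anatomy'
      noisy = clean + 0.5 * rng.standard_normal((256, 256))

      approx = svd_truncate(noisy, rank=8)
      # The truncated reconstruction is closer to the clean data than the noisy input,
      # while a rank-r factorization stores r*(m + n + 1) values instead of m*n.
      print(np.linalg.norm(approx - clean) < np.linalg.norm(noisy - clean))   # True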

  3. Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography

    NASA Astrophysics Data System (ADS)

    Weng, Jiawen; Clark, David C.; Kim, Myung K.

    2016-05-01

    A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging by single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point-sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by angular spectrum method (ASM) and by CS are discussed. The analysis result shows that compared to ASM the reconstruction by CS can improve the axial resolution of SIDH, and achieve sectional imaging. The proposed method may be useful to 3D analysis of dynamic systems.

  4. Compressed Sensing in a Fully Non-Mechanical 350 GHz Imaging Setting

    NASA Astrophysics Data System (ADS)

    Augustin, S.; Hieronymus, J.; Jung, P.; Hübers, H.-W.

    2015-05-01

    We investigate a single-pixel camera (SPC) that relies on non-mechanical scanning with a terahertz (THz) spatial light modulator (SLM) and Compressed Sensing (CS) for image generation. The camera is based on a 350 GHz multiplier source and a Golay cell detector. The SLM consists of a Germanium disc, which is illuminated by a halogen lamp. The light of the lamp is transmitted through a thin-film transistor (TFT) liquid crystal display (LCD). This enables the generation of light patterns on the Germanium disc, which in turn produce reflecting patterns for THz radiation. Using up to 1000 different patterns the pseudo-inverse reconstruction algorithm and the CS algorithm CoSaMP are evaluated with respect to image quality. It is shown that CS allows a reduction of the necessary measurements by a factor of three without compromising the image quality.
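
    In a single-pixel camera of this kind, each detector reading is the inner product of the scene with one SLM pattern, and the pseudo-inverse reconstruction mentioned above solves the resulting linear system in the least-squares sense. A minimal sketch with a toy scene; the pattern count and scene size are illustrative, and the CoSaMP comparison is omitted:

      import numpy as np

      rng = np.random.default_rng(1)
      n_px = 16 * 16                               # small scene for illustration
      scene = np.zeros(n_px)
      scene[40:60] = 1.0                           # simple bright target

      m = 200                                      # number of binary SLM patterns
      patterns = rng.integers(0, 2, size=(m, n_px)).astype(float)

      measurements = patterns @ scene              # one detector reading per pattern
      measurements += 0.01 * rng.standard_normal(m)

      recon = np.linalg.pinv(patterns) @ measurements   # least-squares reconstruction
      # Quality degrades as m shrinks relative to n_px, which is what motivates the
      # sparsity-exploiting CS solvers (e.g. CoSaMP) evaluated in the paper.
      print(np.corrcoef(recon, scene)[0, 1])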

  5. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.

  6. Experimental study of a DMD based compressive line sensing imaging system in the turbulence environment

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Gong, Cuiling; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Xiao, Xifeng; Voelz, David G.

    2016-02-01

    The Compressive Line Sensing (CLS) active imaging system has been demonstrated to be effective in scattering media, such as coastal turbid water, fog and mist, through simulations and test tank experiments. The CLS prototype hardware consists of a CW laser, a DMD, a photomultiplier tube, and a data acquisition instrument. CLS employs whiskbroom imaging formation that is compatible with traditional survey platforms. The sensing model adopts the distributed compressive sensing theoretical framework that exploits both the intra-signal sparsity and the highly correlated nature of adjacent areas in a natural scene. During sensing operation, the laser illuminates the spatial light modulator DMD to generate a series of 1D binary sensing patterns from a codebook to "encode" the current target line segment. A single-element PMT detector acquires target reflections as the encoder output. The target can then be recovered using the encoder output and a predicted on-target codebook that reflects the environmental interference of the original codebook entries. In this work, we investigated the effectiveness of the CLS imaging system in a turbulence environment. Turbulence poses challenges in many atmospheric and underwater surveillance applications. A series of experiments were conducted in the Naval Research Lab's optical turbulence test facility with the imaging path subjected to various turbulence intensities. The total-variation minimization sparsifying basis was used in imaging reconstruction. The preliminary experimental results showed that the current imaging system was able to recover target information under various turbulence strengths. The challenges of acquiring data through a strong turbulence environment and future enhancements of the system will be discussed.

  7. Compressed ultrafast photography (CUP): redefining the limit of passive ultrafast imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gao, Liang S.

    2016-03-01

    Video recording of ultrafast phenomena using a detector array based on the CCD or CMOS technologies is fundamentally limited by the sensor's on-chip storage and data transfer speed. To get around this problem, the most practical approach is to utilize a streak camera. However, the resultant image is normally one dimensional—only a line of the scene can be seen at a time. Acquiring a two-dimensional image thus requires mechanical scanning across the entire field of view. This requirement poses severe restrictions on the applicable scenes because the event itself must be repetitive. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture two-dimensional dynamic scenes at up to 100 billion frames per second. Based on the concept of compressed sensing, CUP works by encoding the input scene with a random binary pattern in the spatial domain, followed by shearing the resultant image in a streak camera with a fully-opened entrance slit. The image reconstruction is the solution of the inverse problem of above processes. Given sparsity in the spatiotemporal domain, the original event datacube can be reasonably estimated by employing a two-step iterative shrinkage/thresholding algorithm. To demonstrate CUP, we imaged light reflection, refraction, and racing in two different media (air and resin). Our technique, for the first time, enables video recording of photon propagation at a temporal resolution down to tens of picoseconds. Moreover, to further expand CUP's functionality, we added a color separation unit to the system, thereby allowing simultaneous acquisition of a four-dimensional datacube (x,y,t,λ), where λ is wavelength, within a single camera snapshot.

  8. An efficient DCT-based image compression system based on laplacian transparent composite model.

    PubMed

    Sun, Chang; Yang, En-Hui

    2015-03-01

    Recently, a new probability model dubbed the Laplacian transparent composite model (LPTCM) was developed for DCT coefficients, which could identify outlier coefficients in addition to providing superior modeling accuracy. In this paper, we aim at exploring its applications to image compression. To this end, we propose an efficient nonpredictive image compression system, where quantization (including both hard-decision quantization (HDQ) and soft-decision quantization (SDQ)) and entropy coding are completely redesigned based on the LPTCM. When tested over standard test images, the proposed system achieves overall coding results that are among the best and similar to those of H.264 or HEVC intra (predictive) coding, in terms of rate versus visual quality. On the other hand, in terms of rate versus objective quality, it significantly outperforms baseline JPEG by more than 4.3 dB in PSNR on average, with a moderate increase on complexity, and ECEB, the state-of-the-art nonpredictive image coding, by 0.75 dB when SDQ is OFF (i.e., HDQ case), with the same level of computational complexity, and by 1 dB when SDQ is ON, at the cost of slight increase in complexity. In comparison with H.264 intracoding, our system provides an overall 0.4-dB gain or so, with dramatically reduced computational complexity; in comparison with HEVC intracoding, it offers comparable coding performance in the high-rate region or for complicated images, but with only less than 5% of the HEVC intracoding complexity. In addition, our proposed system also offers multiresolution capability, which, together with its comparatively high coding efficiency and low complexity, makes it a good alternative for real-time image processing applications.

  9. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and variable-length predictive with a combination of Golomb-Rice and unary encoding. All these components have been heavily optimized for low-power and low-cost and lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion that eliminates the need of having large buffer memory. The compression algorithm is capable to work with white light imaging (WLI) and narrow band imaging (NBI) with average compression ratio of 78% and 84% respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig's intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with lossless and yet acceptable level of compression. PMID:25375753
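
    Golomb-Rice coding, one of the entropy coders combined in the compressor above, writes a non-negative integer as a unary quotient followed by k binary remainder bits; signed prediction residuals are first folded onto the non-negative integers. A minimal sketch of that coding step; the YEF conversion, the predictor, and the unary fallback used in the paper are not reproduced, and the residual values are illustrative:

      def fold_signed(residual):
          """Map signed prediction residuals onto non-negative integers (zig-zag)."""
          return 2 * residual if residual >= 0 else -2 * residual - 1

      def golomb_rice_encode(value, k):
          """Encode a non-negative integer with Rice parameter k; returns a bit string."""
          quotient = value >> k
          remainder = value & ((1 << k) - 1)
          unary = "1" * quotient + "0"                       # quotient in unary
          binary = format(remainder, "0%db" % k) if k > 0 else ""
          return unary + binary

      residuals = [0, -1, 3, -4, 2]                          # e.g. from a raster-scan predictor
      bitstream = "".join(golomb_rice_encode(fold_signed(r), k=2) for r in residuals)
      print(bitstream)                                       # 000 001 1010 1011 1000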

  10. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines image sensor, on board computation unit, communication component and energy source. Compared to the traditional wireless sensor network, which operates on one dimensional data, such as temperature, pressure values etc., WVSN operates on two dimensional data (images) which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as low energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence will be effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and their computational complexity is suitable for computational platform used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.

  11. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does approach the problem through the segmentation of features. The algorithm determines non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.

  12. A simplified Integer Cosine Transform and its application in image compression

    NASA Technical Reports Server (NTRS)

    Costa, M.; Tong, K.

    1994-01-01

    A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
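
    Restricting the combined normalization/quantization factor to a power of two lets the encoder replace every integer division with a binary shift, while the decoder, running at floating-point precision, rescales by the factor that was actually used and so compensates for the approximation. A minimal sketch of that trade-off with illustrative factor values:

      import numpy as np

      def quantize_exact(coeffs, factor):
          """Conventional stage: one integer division per transform coefficient."""
          return coeffs // factor

      def quantize_shift(coeffs, factor):
          """Power-of-two approximation: replace the division by a right shift."""
          shift = int(round(np.log2(factor)))      # nearest power of two
          return coeffs >> shift, 1 << shift       # also report the factor actually used

      coeffs = np.array([513, 97, 1200, 31])
      print(quantize_exact(coeffs, 24))            # divides by 24
      q, used = quantize_shift(coeffs, 24)         # shifts by 5, i.e. divides by 32
      print(q, used)
      # The ground-based inverse transform rescales by `used` (32) rather than 24,
      # absorbing the approximation error as described in the abstract.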

  13. Reconstruction of images from compressive sensing based on the stagewise fast LASSO

    NASA Astrophysics Data System (ADS)

    Wu, Jiao; Liu, Fang; Jiao, Licheng

    2009-10-01

    Compressive sensing (CS) is a theory about that one may achieve a nearly exact signal reconstruction from the fewer samples, if the signal is sparse or compressible under some basis. The reconstruction of signal can be obtained by solving a convex program, which is equivalent to a LASSO problem with l1-formulation. In this paper, we propose a stage-wise fast LASSO (StF-LASSO) algorithm for the image reconstruction from CS. It uses an insensitive Huber loss function to the objective function of LASSO, and iteratively builds the decision function and updates the parameters by introducing a stagewise fast learning strategy. Simulation studies in the CS reconstruction of the natural images and SAR images widely applied in practice demonstrate that the good reconstruction performance both in evaluation indexes and visual effect can be achieved by StF-LASSO with the fast recovered speed among the algorithms which have been implemented in our simulations in most of the cases. Theoretical analysis and experiments show that StF-LASSO is a CS reconstruction algorithm with the low complexity and stability.
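
    The l1-regularized (LASSO) recovery that StF-LASSO accelerates is commonly solved with iterative shrinkage-thresholding; the sketch below is plain ISTA as a baseline, not the stagewise Huber-loss variant proposed in the paper, and the problem sizes are illustrative:

      import numpy as np

      def ista(A, y, lam=0.01, n_iter=500):
          """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      rng = np.random.default_rng(0)
      n, m, k = 256, 96, 8                         # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      y = A @ x_true
      x_hat = ista(A, y)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error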

  14. High-Performance Motion Estimation for Image Sensors with Video Compression

    PubMed Central

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME. PMID:26307996

  15. Lossy hyperspectral image compression tuned for spectral mixture analysis applications on NVidia graphics processing units

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Sánchez, Sergio; Paz, Abel

    2009-08-01

    In this paper, we develop a computationally efficient approach for lossy compression of remotely sensed hyperspectral images which has been specifically tuned to preserve the relevant information required in spectral mixture analysis (SMA) applications. The proposed method is based on two steps: 1) endmember extraction, and 2) linear spectral unmixing. Two endmember extraction algorithms, the pixel purity index (PPI) and the automatic morphological endmember extraction (AMEE), and a fully constrained linear spectral unmixing (FCLSU) algorithm have been considered in this work to devise the proposed lossy compression strategy. The proposed methodology has been implemented on graphics processing units (GPUs) of NVidia™ type. Our experiments demonstrate that it can achieve very high compression ratios when applied to standard hyperspectral data sets, and can also retain the relevant information required for spectral unmixing in a computationally efficient way, achieving speedups on the order of 26 on an NVidia™ GeForce 8800 GTX graphics card when compared to an optimized implementation of the same code on a dual-core CPU.

  16. High-Performance Motion Estimation for Image Sensors with Video Compression.

    PubMed

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.

  17. Cardiac diffusion tensor imaging based on compressed sensing using joint sparsity and low-rank approximation.

    PubMed

    Huang, Jianping; Wang, Lihui; Chu, Chunyu; Zhang, Yanli; Liu, Wanyu; Zhu, Yuemin

    2016-04-29

    Diffusion tensor magnetic resonance (DTMR) imaging and diffusion tensor imaging (DTI) have been widely used to probe noninvasively biological tissue structures. However, DTI suffers from long acquisition times, which limit its practical and clinical applications. This paper proposes a new Compressed Sensing (CS) reconstruction method that employs joint sparsity and rank deficiency to reconstruct cardiac DTMR images from undersampled k-space data. Diffusion-weighted images acquired in different diffusion directions were firstly stacked as columns to form the matrix. The matrix was row sparse in the transform domain and had a low rank. These two properties were then incorporated into the CS reconstruction framework. The underlying constrained optimization problem was finally solved by the first-order fast method. Experiments were carried out on both simulation and real human cardiac DTMR images. The results demonstrated that the proposed approach had lower reconstruction errors for DTI indices, including fractional anisotropy (FA) and mean diffusivities (MD), compared to the existing CS-DTMR image reconstruction techniques. PMID:27163322

  18. Analysis of bandwidth limitation in time-stretch compressive sampling imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Hongwei; Weng, Zhiliang; Guo, Qiang; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2016-03-01

    Compressive sampling (CS) is an emerging field that provides a new framework for image reconstruction and has potentially powerful implications for the design of optical imaging devices. Single-pixel camera, as a representative example of CS, enables the use of exotic detectors and can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Recently, time-stretch CS imaging system is proposed to overcome the speed limitation of the conventional single-pixel camera. In the proposed system, as ultra-short optical pulses are used for active illumination, the performance of the imaging system is affected by the detection bandwidth. In this paper, we experimentally analyze the bandwidth limitation in the CS-based time-stretch imaging system. Various detector bandwidths are introduced in the system and the mean square error (MSE) is calculated to evaluate the quality of reconstructed images. The results show that the decreasing detection bandwidth leads to serious energy spread of the pulses, where the MSE increases rapidly and system performance is degraded severely.

  19. An easily-achieved time-domain beamformer for ultrafast ultrasound imaging based on compressive sensing.

    PubMed

    Wang, Congzhi; Peng, Xi; Liang, Dong; Xiao, Yang; Qiu, Weibao; Qian, Ming; Zheng, Hairong

    2015-01-01

    In ultrafast ultrasound imaging, maintaining a high frame rate while improving image quality as far as possible has become a significant issue. Several novel beamforming methods based on compressive sensing (CS) theory have been proposed in the previous literature, but all have their own limitations, such as excessively large memory consumption and the errors caused by the short-time discrete Fourier transform (STDFT). In this study, a novel CS-based time-domain beamformer for plane-wave ultrasound imaging is proposed, and its image quality has been verified to be better than the traditional DAS method and even the popular coherent compounding method on several simulated phantoms. Compared to the existing CS methods, the memory consumption of our method is significantly reduced since the encoding matrix can be sparsely expressed. In addition, the time-delay calculations of the echo signals are accomplished directly in the time domain with a dictionary concept, avoiding the errors induced by the short-time Fourier transform calculation in those frequency-domain methods. The proposed method can be easily implemented on low-cost hardware platforms, and can obtain ultrasound images with both high frame rate and good image quality, which gives it great potential for clinical application.

  20. Accelerated 3D MERGE Carotid Imaging using Compressed Sensing with a Hidden Markov Tree Model

    PubMed Central

    Makhijani, Mahender K.; Balu, Niranjan; Yamada, Kiyofumi; Yuan, Chun; Nayak, Krishna S.

    2012-01-01

    Purpose To determine the potential for accelerated 3D carotid magnetic resonance imaging (MRI) using wavelet based compressed sensing (CS) with a hidden Markov tree (HMT) model. Materials and Methods We retrospectively applied HMT model-based CS and conventional CS to 3D carotid MRI data with 0.7 mm isotropic resolution, from six subjects with known carotid stenosis (12 carotids). We applied a wavelet-tree model learnt from a training database of carotid images to improve CS reconstruction. Quantitative endpoints such as lumen area, wall area, mean and maximum wall thickness, plaque calicification, and necrotic core area, were measured and compared using Bland-Altman analysis along with image quality. Results Rate-4.5 acceleration with HMT model-based CS provided image quality comparable to that of rate-3 acceleration with conventional CS and fully sampled reference reconstructions. Morphological measurements made on rate-4.5 HMT model-based CS reconstructions were in good agreement with measurements made on fully sampled reference images. There was no significant bias or correlation between mean and difference of measurements when comparing rate 4.5 HMT model-based CS with fully sampled reference images. Conclusion HMT model-based CS can potentially be used to accelerate clinical carotid MRI by a factor of 4.5 without impacting diagnostic quality or quantitative endpoints. PMID:22826159

  1. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion computation of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), while the commonly known Gaussian fuzzy membership degree can be introduced to establish an adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem employing gradient descent based iterative algorithm(s). In the proposed fusion process, the pre-enhanced infrared image and the visible image are decomposed into low-frequency subbands and high-frequency subbands, respectively, via the NSCT method as a first step. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information and edge-based similarity index.
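
    The two fusion rules named for the subband coefficients can be illustrated independently of the NSCT decomposition: approximation coefficients are blended with energy-based weights, while detail coefficients keep whichever source has the larger magnitude. A minimal sketch of those rules; pixel-wise energy is used here for brevity, whereas the paper computes it over a region, and the adaptive-Gaussian weighting and CS recovery are omitted:

      import numpy as np

      def fuse_low(a, b):
          """Energy-weighted rule for low-frequency (approximation) coefficients."""
          ea, eb = a ** 2, b ** 2
          w = ea / np.maximum(ea + eb, 1e-12)
          return w * a + (1.0 - w) * b

      def fuse_high(a, b):
          """Maximum-absolute-selection rule for high-frequency (detail) coefficients."""
          return np.where(np.abs(a) >= np.abs(b), a, b)

      ir_band = np.random.rand(8, 8)     # stand-ins for matching subbands from the
      vis_band = np.random.rand(8, 8)    # infrared and visible decompositions
      print(fuse_low(ir_band, vis_band).shape, fuse_high(ir_band, vis_band).shape)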

  2. Assessment of commercial compression algorithms, of the lossy DCT and lossless types, applied to diagnostic digital image files.

    PubMed

    Okkalides, D

    1998-01-01

    The need for diagnostic image compression of the lossy or irreversible type has been declining due to the rapid increase in commercially available formatted hard disk capacity. It is estimated that the latter has increased about three orders of magnitude in the past 14 years while the size of diagnostic image files has, of course, remained constant. During the same period, despite claims for significantly improved performance by vendors, it seems that only small progress has been made in commercial lossless and lossy compression algorithms. There is still no consensus for lossy compression to a level acceptable for diagnosis. This is mostly considered to be around a ratio of 10:1. However, acceptable compression ratios depend heavily on the type of images processed and may be compared with the 3:1 ratio produced by lossless algorithms. This last value was shown to increase to more than 5.5:1 for gamma-camera images when corrected for the noise content of individual bit planes and for the display capabilities of computer monitors. Therefore, any possible benefits of lossy over lossless compression become questionable when the currently available hard disk capacity and network transmission speed are considered against the inevitable loss of information in the lossy type of compression. PMID:9745939

  3. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Weston, Tyler

    2016-02-01

    We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.

  4. Dragonfly: an implementation of the expand–maximize–compress algorithm for single-particle imaging1

    PubMed Central

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N. Duane

    2016-01-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand–maximize–compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA. PMID:27504078

  5. Bandwidth compression of the digitized HDTV images for transmission via satellites

    NASA Technical Reports Server (NTRS)

    Al-Asmari, A. KH.; Kwatra, S. C.

    1992-01-01

    This paper investigates a subband coding scheme to reduce the transmission bandwidth of the digitized HDTV images. The HDTV signals are decomposed into seven bands. Each band is then independently encoded. The based band is DPCM encoded and the high bands are encoded by using nonuniform Laplacian quantizers with a dead zone. By selecting the dead zone on the basis of energy in the high bands an acceptable image quality is achieved at an average of 45 Mbits/sec (Mbps) rate. This rate is comparable to some very hardware intensive schemes of transform compression or vector quantization proposed in the literature. The subband coding scheme used in this study is considered to be of medium complexity. The 45 Mbps rate is suitable for transmission of HDTV signals via satellites.

  6. High speed X-ray phase contrast imaging of energetic composites under dynamic compression

    NASA Astrophysics Data System (ADS)

    Parab, Niranjan D.; Roberts, Zane A.; Harr, Michael H.; Mares, Jesus O.; Casey, Alex D.; Gunduz, I. Emre; Hudspeth, Matthew; Claus, Benjamin; Sun, Tao; Fezzaa, Kamel; Son, Steven F.; Chen, Weinong W.

    2016-09-01

    Fracture of crystals and frictional heating are associated with the formation of "hot spots" (localized heating) in energetic composites such as polymer bonded explosives (PBXs). Traditional high speed optical imaging methods cannot be used to study the dynamic sub-surface deformation and the fracture behavior of such materials due to their opaque nature. In this study, high speed synchrotron X-ray experiments are conducted to visualize the in situ deformation and the fracture mechanisms in PBXs composed of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) crystals and hydroxyl-terminated polybutadiene binder doped with iron (III) oxide. A modified Kolsky bar apparatus was used to apply controlled dynamic compression on the PBX specimens, and a high speed synchrotron X-ray phase contrast imaging (PCI) setup was used to record the in situ deformation and failure in the specimens. The experiments show that synchrotron X-ray PCI provides a sufficient contrast between the HMX crystals and the doped binder, even at ultrafast recording rates. Under dynamic compression, most of the cracking in the crystals was observed to be due to the tensile stress generated by the diametral compression applied from the contacts between the crystals. Tensile stress driven cracking was also observed for some of the crystals due to the transverse deformation of the binder and superior bonding between the crystal and the binder. The obtained results are vital to develop improved understanding and to validate the macroscopic and mesoscopic numerical models for energetic composites so that eventually hot spot formation can be predicted.

  7. Image reconstruction for single detector rosette scanning systems based on compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Uzeler, Hande; Cakir, Serdar; Aytaç, Tayfun

    2016-02-01

    Compressive sensing (CS) is a signal processing technique that enables a signal with a sparse representation in a known basis to be reconstructed from measurements obtained below the Nyquist rate. Single-detector image reconstruction applications using CS have been shown to give promising results. In this study, we investigate the application of CS theory to single-detector infrared (IR) rosette scanning systems, which suffer from low performance compared to costly focal plane array (FPA) detectors. The single-detector pseudoimaging rosette scanning system scans the scene with a specific pattern and performs processing to estimate the target location without forming an image. In this context, this generation of scanning systems may be improved by utilizing the samples obtained by the rosette scanning pattern in conjunction with the CS framework. For this purpose, we consider surface-to-air engagement scenarios using IR images containing aerial targets and flares. The IR images have been reconstructed from samples obtained with the rosette scanning pattern and other baseline sampling strategies. The proposed scheme exhibits good reconstruction performance, and imaging performance comparable to that of a large-format FPA can be achieved using a single IR detector with a rosette scanning pattern.

  8. Relating speech production to tongue muscle compressions using tagged and high-resolution magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Ye, Chuyang; Woo, Jonghye; Stone, Maureen; Prince, Jerry

    2015-03-01

    The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue's motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals the activities of different tongue muscles in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. Then we compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on support from multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking of the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two post-glossectomy patients in a controlled speech task. The normal subject's tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients' muscle activities show different patterns from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
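
    To make the "strain in the line of action" step concrete, the snippet below computes the stretch ratio and Green-Lagrange strain along a fiber direction from a local deformation gradient. The deformation gradient and fiber direction are made-up numbers; the actual pipeline estimates them from tagged-MRI motion tracking and the segmented fiber anatomy.

```python
# Illustrative computation of tissue stretch/shortening along a muscle-fiber
# direction from a local deformation gradient F (toy values, not the
# authors' pipeline).
import numpy as np

def stretch_along_fiber(F, fiber):
    """Stretch ratio lambda and Green-Lagrange strain along a unit fiber."""
    f = np.asarray(fiber, float)
    f /= np.linalg.norm(f)
    C = F.T @ F                       # right Cauchy-Green tensor
    lam2 = f @ C @ f
    return np.sqrt(lam2), 0.5 * (lam2 - 1.0)

F = np.array([[0.95, 0.05, 0.0],      # hypothetical local deformation gradient
              [0.00, 1.02, 0.0],
              [0.00, 0.00, 1.03]])
lam, E_ff = stretch_along_fiber(F, fiber=[1.0, 0.0, 0.0])
print(f"stretch = {lam:.3f}  (values < 1 indicate fiber shortening)")
print(f"Green-Lagrange fiber strain = {E_ff:.3f}")
```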

  9. Two-Layer Tight Frame Sparsifying Model for Compressed Sensing Magnetic Resonance Imaging

    PubMed Central

    Peng, Xi; Dong, Pei

    2016-01-01

    Compressed sensing magnetic resonance imaging (CSMRI) employs image sparsity to reconstruct MR images from incoherently undersampled k-space data. Existing CSMRI approaches have exploited analysis transforms, synthesis dictionaries, and their variants to promote image sparsity. Nevertheless, the accuracy, efficiency, or acceleration rate of existing CSMRI methods can still be improved due to either lack of adaptability, high complexity of training, or insufficient sparsity promotion. To properly balance these three factors, this paper proposes a two-layer tight frame sparsifying (TRIMS) model for CSMRI that sparsifies the image with the product of a fixed tight frame and an adaptively learned tight frame. The two-layer sparsifying and adaptive learning nature of TRIMS enables accurate MR reconstruction from highly undersampled data with efficiency. To solve the reconstruction problem, a three-level Bregman numerical algorithm is developed. The proposed approach has been compared to three state-of-the-art methods over a scanned physical phantom and in vivo MR datasets, and encouraging performance has been achieved. PMID:27747226

  10. Frame-based compressive sensing MR image reconstruction with balanced regularization.

    PubMed

    Shoulie Xie; Cuntai Guan; Weimin Huang; Zhongkang Lu

    2015-08-01

    This paper addresses frame-based MR image reconstruction from undersampled k-space measurements using a balanced ℓ(1)-regularized approach. Analysis-based and synthesis-based approaches are two common methods in ℓ(1)-regularized image restoration. They are equivalent under an orthogonal transform, but a gap exists between them under a redundant transform such as a frame. A third approach was therefore developed to reduce this gap by penalizing the distance between the representation vector and the canonical frame coefficients of the estimated image; this balanced approach bridges the synthesis-based and analysis-based approaches and balances the fidelity, sparsity, and smoothness of the solution. These frame-based approaches have been studied and compared for optical image restoration over the last few years. In this paper, we further study and compare these three approaches for compressed sensing MR image reconstruction in a redundant frame domain. The ℓ(1)-regularized optimization problems are solved using a variable splitting strategy and the classical alternating direction method of multipliers (ADMM). Numerical simulation results show that the balanced approach can reduce the gap between the analysis-based and synthesis-based approaches and can even outperform both under our experimental conditions.

  11. Two-dimensional orthogonal DCT expansion in trapezoid and triangular blocks and modified JPEG image compression.

    PubMed

    Ding, Jian-Jiun; Huang, Ying-Wun; Lin, Pao-Yen; Pei, Soo-Chang; Chen, Hsin-Hui; Wang, Yu-Hsiang

    2013-09-01

    In the conventional JPEG algorithm, an image is divided into eight-by-eight blocks and the 2-D DCT is then applied to encode each block. In this paper, we find that, in addition to rectangular blocks, the 2-D DCT is also orthogonal in trapezoid and triangular blocks. Therefore, instead of eight-by-eight blocks, we can generalize the JPEG algorithm and divide an image into trapezoid and triangular blocks according to the shapes of objects, achieving a higher compression ratio. Compared with existing shape-adaptive compression algorithms, since we do not try to match the shape of each object exactly, fewer bytes are needed for encoding the edges and the error caused by the high-frequency components at the boundary can be avoided. The simulations show that, when the bit rate is fixed, our proposed algorithm can achieve higher PSNR than the JPEG algorithm and other shape-adaptive algorithms. Furthermore, in addition to the 2-D DCT, we can also use the proposed method to generate 2-D complete and orthogonal sine, Hartley, Walsh, and discrete polynomial bases in a trapezoid or triangular block.
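
    For reference, the conventional JPEG building block that the paper generalizes looks roughly like this: an 8x8 block is level-shifted, transformed with the separable orthonormal 2-D DCT, and coarsely quantized. The single quantization step used here is a placeholder (JPEG uses a full quantization table), and the paper's trapezoid/triangular-block bases are not reproduced.

```python
# Sketch of the standard JPEG 8x8 DCT-and-quantize step (illustrative only).
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0   # level shift

Q = 16.0                                    # single illustrative quantization step
coeffs = dctn(block, norm='ortho')          # separable orthonormal 2-D DCT
quantized = np.round(coeffs / Q)
reconstructed = idctn(quantized * Q, norm='ortho') + 128.0

mse = np.mean((reconstructed - (block + 128.0)) ** 2)
print("nonzero coefficients:", int(np.count_nonzero(quantized)), " MSE:", round(mse, 2))
```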

  12. Lossy compression of hyperspectral images using shearlet transform and 3D SPECK

    NASA Astrophysics Data System (ADS)

    Karami, A.

    2015-10-01

    In this paper, a new lossy compression method for hyperspectral images (HSI) is introduced. HSI are treated as a 3D dataset with two dimensions in the spatial domain and one in the spectral domain. In the proposed method, a 3D multidirectional anisotropic shearlet transform is first applied to the HSI, because, unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and other geometrical features. Second, a soft-thresholding method is applied to the shearlet transform coefficients, and finally the modified coefficients are encoded using the Three-Dimensional Set Partitioned Embedded bloCK (3D SPECK) coder. Our simulation results show that the proposed method, in comparison with well-known approaches such as 3D SPECK (using the 3D wavelet) and combined PCA and JPEG2000 algorithms, provides a higher SNR (signal-to-noise ratio) for any given compression ratio (CR). The superiority of the proposed method becomes more pronounced as the CR grows. In addition, the effect of the proposed method on spectral unmixing analysis is also evaluated.
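
    The soft-thresholding step applied to the transform coefficients is the standard shrinkage operator sketched below (shown on a generic coefficient array; in the paper it acts on the 3D shearlet coefficients of the hyperspectral cube, with a threshold that is not specified here).

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; values inside [-t, t] become 0."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-5.0, -1.2, -0.3, 0.0, 0.4, 2.5, 7.0])
print(soft_threshold(c, t=1.0))   # approx. [-4, -0.2, 0, 0, 0, 1.5, 6]
```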

  13. Comparison of Liver Tumor Motion With and Without Abdominal Compression Using Cine-Magnetic Resonance Imaging

    SciTech Connect

    Eccles, Cynthia L.; Patel, Ritesh; Simeonov, Anna K.; Lockwood, Gina; Haider, Masoom; Dawson, Laura A.

    2011-02-01

    Purpose: Abdominal compression (AC) can be used to reduce respiratory liver motion in patients undergoing liver stereotactic body radiotherapy. The purpose of the present study was to measure the changes in three-dimensional liver tumor motion with and without compression using cine-magnetic resonance imaging. Patients and Methods: A total of 60 patients treated as part of an institutional research ethics board-approved liver stereotactic body radiotherapy protocol underwent cine T2-weighted magnetic resonance imaging through the tumor centroid in the coronal and sagittal planes. A total of 240 cine-magnetic resonance imaging sequences, acquired at one to three images per second for 30-60 s, were evaluated using an in-house-developed template matching tool (based on the correlation coefficient) to measure the magnitude of the tumor motion. The average tumor edge displacements were used to determine the magnitude of changes in the caudal-cranial (CC) and anteroposterior (AP) directions, with and without AC. Results: The mean tumor motion without AC of 11.7 mm (range, 4.8-23.3) in the CC direction was reduced to 9.4 mm (range, 1.6-23.4) with AC. The tumor motion was reduced in both directions (CC and AP) in 52% of the patients and in a single direction (CC or AP) in 90% of the patients. The mean decrease in tumor motion with AC was 2.3 and 0.6 mm in the CC and AP directions, respectively. Increased motion occurred in one or more directions in 28% of patients. Clinically significant (>3 mm) decreases were observed in 40% and increases in <2% of patients in the CC direction. Conclusion: AC can significantly reduce three-dimensional liver tumor motion in most patients, although the magnitude of the reduction was smaller than previously reported.

  14. Compressed histogram attribute profiles for the classification of VHR remote sensing images

    NASA Astrophysics Data System (ADS)

    Battiti, Romano; Demir, Begüm; Bruzzone, Lorenzo

    2015-10-01

    This paper presents a novel compressed histogram attribute profile (CHAP) for the classification of very high resolution remote sensing images. The CHAP characterizes the marginal local distribution of attribute filter responses to model the texture information of each sample with a small number of image features. This is achieved with a three-step algorithm. The first step provides a complete characterization of the spatial properties of objects in a scene. To this end, the attribute profile (AP) is initially built by the sequential application of attribute filters to the considered image. Then, to capture the complete spatial characteristics of the structures in the scene, a local histogram is calculated for each sample of each image in the AP. The local histograms of the same pixel location can contain redundant information since: i) adjacent histogram bins can provide similar information; and ii) attributes obtained with similar attribute filter threshold values lead to redundant features. In the second step, to expose these redundancies, the local histograms of the same pixel locations in the AP are organized into a 2D matrix representation, where columns are associated with the local histograms and rows represent a specific bin in all histograms of the considered sequence of filtered attributes in the profile. This representation characterizes the texture information of each sample through a 2D texture descriptor. In the final step, a novel compression approach based on a uniform 2D quantization strategy is applied to remove the redundancy of the 2D texture descriptors. Finally, the CHAP is classified by a Support Vector Machine classifier with a histogram intersection kernel, which is very effective for high-dimensional histogram-based feature representations. Experimental results confirm the effectiveness of the proposed CHAP in terms of computational complexity, storage requirements and classification accuracy when compared to the

  15. Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms

    NASA Astrophysics Data System (ADS)

    Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz

    2015-11-01

    We propose a new adaptive block-wise lossless image compression algorithm based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate that is close to the entropy; however, a compression performance loss occurs when encoding images or blocks whose number of active symbols is small compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Most methods add one to the frequency count of each symbol in the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set containing all the symbols that actually occur, called the active symbols. This is an alternative to using the nominal alphabet with conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including conventional arithmetic encoders, JPEG2000, and JPEG-LS.
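
    The core of the alphabet-reduction idea can be sketched in a few lines: before arithmetic coding, each block is remapped onto the set of symbols that actually occur in it, so the coder's frequency model is not diluted by nominal symbols that never appear. The block contents below are synthetic, and the arithmetic coder itself is omitted.

```python
import numpy as np

def reduce_block_alphabet(block):
    """Return (active_symbols, block remapped to indices into that set)."""
    active = np.unique(block)                  # the block's own (sorted) alphabet
    index = np.searchsorted(active, block)     # remap pixels to 0..len(active)-1
    return active, index

rng = np.random.default_rng(4)
# A sparse-histogram block: only a handful of the 256 nominal gray levels occur.
block = rng.choice([0, 17, 18, 200, 255], size=(8, 8))
active, idx = reduce_block_alphabet(block)
print("active symbols:", active, "-> alphabet size", len(active), "instead of 256")
```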

  16. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.

    PubMed

    Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day(-1), 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has

  17. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI

    NASA Astrophysics Data System (ADS)

    Ramskill, N. P.; Bush, I.; Sederman, A. J.; Mantle, M. D.; Benning, M.; Anger, B. C.; Appel, M.; Gladden, L. F.

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day-1, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution

  18. Dose reduction using prior image constrained compressed sensing (DR-PICCS)

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Thériault Lauzier, Pascal; Chen, Guang-Hong

    2011-03-01

    A technique for dose reduction using prior image constrained compressed sensing (DR-PICCS) in computed tomography (CT) is proposed in this work. In DR-PICCS, a standard FBP reconstructed image is forward projected to obtain a fully sampled projection data set. Meanwhile, the FBP image is low-pass filtered and used as the prior image in the PICCS reconstruction framework. Next, the prior image and the forward projection data are used together by the PICCS algorithm to obtain a low-noise DR-PICCS reconstruction that maintains the spatial resolution of the original FBP images. The spatial resolution of DR-PICCS was studied using a Catphan phantom by MTF measurement. The noise reduction factor, CT number change, and noise texture were studied using human subject data consisting of 20 CT colonography exams performed under an IRB-approved protocol. In each human subject study, six ROIs (two soft tissue, two colonic air columns, and two subcutaneous fat) were selected for the CT number and noise measurements. Skewness and kurtosis were used as figures of merit to indicate the noise texture. A Bland-Altman analysis was performed to study the accuracy of the CT numbers. The results showed that, compared with FBP reconstructions, the MTF curve shows very little change in DR-PICCS reconstructions, the spatial resolution loss is less than 0.1 lp/cm, and the noise standard deviation can be reduced by a factor of 3 with DR-PICCS. The CT numbers in FBP and DR-PICCS reconstructions agree well, which indicates that DR-PICCS does not change CT numbers. The noise texture indicators measured from DR-PICCS images are in a similar range to those from FBP images.
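
    A rough sketch of the kind of objective a PICCS-type reconstruction minimizes: a weighted combination of the total variation of the image and of its difference from the prior image, plus a data-fidelity term. The system matrix, weights, and image sizes below are placeholders for illustration and do not correspond to the paper's CT geometry or parameter choices.

```python
# Evaluate a prior-image-constrained, TV-based objective (illustrative only).
import numpy as np

def total_variation(img):
    """Anisotropic 2-D total variation."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def piccs_objective(x, x_prior, A, y, alpha=0.5, beta=1.0):
    data_fidelity = 0.5 * np.sum((A @ x.ravel() - y) ** 2)
    return (alpha * total_variation(x - x_prior)
            + (1.0 - alpha) * total_variation(x)
            + beta * data_fidelity)

rng = np.random.default_rng(5)
x_prior = rng.random((16, 16))            # e.g. a low-pass filtered FBP image
x = x_prior + 0.01 * rng.standard_normal((16, 16))
A = rng.standard_normal((64, 256))        # stand-in projection operator
y = A @ x_prior.ravel()
print("objective value:", round(piccs_objective(x, x_prior, A, y), 3))
```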

  19. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  1. The α-particle imaging of a compressed core of microtargets in a pinhole camera with a regular multi-pinhole diaphragm

    SciTech Connect

    Suslov, N A

    2000-08-31

    The α-particle imaging of a compressed core of microtargets using a multi-pinhole regular diaphragm is proposed. The image reconstruction technique is described. The results of the α-particle imaging of a compressed core of microtargets obtained at the 'Iskra-4' laser facility are reported. (interaction of laser radiation with matter. laser plasma)

  2. Accelerated diffusion spectrum imaging via compressed sensing for the human connectome project

    NASA Astrophysics Data System (ADS)

    Lee, Namgyun; Wilkins, Bryce; Singh, Manbir

    2012-02-01

    Diffusion Spectrum Imaging (DSI) has been developed as a model-free approach to solving the so-called multiple-fibers-per-voxel problem in diffusion MRI. However, rapidly inferring the heterogeneous microstructure of an imaging voxel remains a challenge in DSI because of the extensive sampling requirements on a Cartesian grid in q-space. In this study, we propose compressed sensing based diffusion spectrum imaging (CS-DSI) to significantly reduce the number of diffusion measurements required for accurate estimation of fiber orientations. This method reconstructs each diffusion propagator of an MR data set from 100 variable-density undersampled diffusion measurements by minimizing the l1-norm of the finite differences (i.e., the anisotropic total variation) of the diffusion propagator. The proposed method is validated against a ground truth from synthetic data mimicking the FiberCup phantom, demonstrating the robustness of CS-DSI in accurately estimating underlying fiber orientations from noisy diffusion data. We demonstrate the effectiveness of our CS-DSI method on a human brain dataset acquired from a clinical scanner without specialized pulse sequences. Estimated ODFs from the CS-DSI method are qualitatively compared to those from the full dataset (DSI203). Lastly, we demonstrate that streamline tractography based on our CS-DSI method has quality comparable to conventional DSI203. This illustrates the feasibility of CS-DSI for reconstructing whole-brain white-matter fiber tractography from clinical data acquired at imaging centers, including hospitals, for human brain connectivity studies.

  3. Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction

    PubMed Central

    Lauzier, Pascal Thériault; Chen, Guang-Hong

    2013-01-01

    Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In the numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In the PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness

  4. Application of pulse compression signal processing techniques to electromagnetic acoustic transducers for noncontact thickness measurements and imaging

    SciTech Connect

    Ho, K.S.; Gan, T.H.; Billson, D.R.; Hutchins, D.A.

    2005-05-15

    A pair of noncontact Electromagnetic Acoustic Transducers (EMATs) has been used for thickness measurements and imaging of metallic plates. This was performed using wide-bandwidth EMATs and pulse-compression signal processing with chirp excitation. This gives a greatly improved signal-to-noise ratio for air-coupled experiments, increasing the speed of data acquisition. A numerical simulation of the technique has confirmed the performance. Experimental results indicate that it is possible to perform noncontact ultrasonic imaging and thickness gauging in a wide range of metal plates. An accuracy of up to 99% has been obtained for aluminum, brass, and copper samples. The resolution of the image obtained using the pulse compression approach was also improved compared to that obtained with a transient pulse from a conventional pulser/receiver. It is thus suggested that the combination of EMATs and pulse compression can lead to a wide range of online applications where fast acquisition times are required.
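
    The pulse-compression step itself is essentially a matched filter: the received signal is correlated against the transmitted chirp, which collapses the long excitation into a sharp, high-SNR peak whose position gives the time of flight (and hence thickness). The sampling rate, chirp band, and echo amplitude below are arbitrary illustration values, not the experimental settings.

```python
# Chirp pulse compression by matched filtering (illustrative parameters).
import numpy as np
from scipy.signal import chirp, correlate

fs = 10e6                                  # 10 MHz sampling
t = np.arange(0, 200e-6, 1 / fs)
tx = chirp(t, f0=200e3, t1=t[-1], f1=1e6)  # 200 kHz -> 1 MHz linear chirp

delay = 3000                               # echo arrives 3000 samples late
rx = np.zeros(len(t) + delay)
rx[delay:delay + len(tx)] += 0.2 * tx      # weak echo
rx += 0.05 * np.random.default_rng(6).standard_normal(len(rx))   # noise

compressed = correlate(rx, tx, mode='valid')      # matched filter
print("estimated echo delay (samples):", int(np.argmax(np.abs(compressed))))
```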

  5. Multiple Local Coils in Magnetic Resonance Imaging: Design Analyses and Recommended Improvements.

    NASA Astrophysics Data System (ADS)

    Jones, Randall Wayne

    The use of local coils in Magnetic Resonance Imaging (MRI) is becoming increasingly popular. Local coils offer improved image quality within their inherently smaller region of sensitivity (ROS) compared to that of the body coil. As the MR experiment matures, an increased demand for improvements in local anatomical imaging is placed upon MR equipment manufacturers. Developing anatomically specific quadrature detection coils is one method for increasing image quality. Another is to switch the coil's ROS to a smaller size during the scanning process, hence improving the coil sensitivity. Optimizing the quality factor, or Q, of the basic coil element is also important if it is to offer improvements over existing designs. Q is significantly affected by such things as the system cable coupling, the geometry, and the decoupling mechanism--whether it is via varactor detuning, PIN diode switching, passive diode decoupling or inherent positioning. Analyses of these variations in coil design are presented and recommendations are given to minimize Q degradation. Computer modeling is used for analyzing the Q effects of the cable and of the tuning and decoupling networks. Also, a convenient program was developed for entering three-dimensional coil geometries and plotting their corresponding sensitivity profiles. Images, taken on the MR system, are provided to verify the model's plots and predictions, as well as to demonstrate the feasibility of new coil designs created as a result of the above studies. The culmination of the research effort is demonstrated by several new coil designs: a tunable, fused, pseudo-volume shoulder coil; a pseudo-volume pelvic coil; a planar, quadrature detection spine coil; a switchable-ROS, pseudo-volume neck coil; and a switchable-ROS planar coil.

  6. Efficient lossy compression implementations of hyperspectral images: tools, hardware platforms, and comparisons

    NASA Astrophysics Data System (ADS)

    García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto

    2014-05-01

    Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to meet the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on one hand, introducing the whole C-language description into CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for an SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All of this demonstrates that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between the two implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.

  7. Using compressive sensing to recover images from PET scanners with partial detector rings

    SciTech Connect

    Valiollahzadeh, SeyyedMajid; Clark, John W.; Mawlawi, Osama

    2015-01-15

    Purpose: Most positron emission tomography/computed tomography (PET/CT) scanners consist of tightly packed discrete detector rings to improve scanner efficiency. The authors' aim was to use compressive sensing (CS) techniques in PET imaging to investigate the possibility of decreasing the number of detector elements per ring (introducing gaps) while maintaining image quality. Methods: A CS model based on a combination of gradient magnitude and wavelet domains (wavelet-TV) was developed to recover missing observations in PET data acquisition. The model was designed to minimize the total variation (TV) and L1-norm of wavelet coefficients while constrained by the partially observed data. The CS model also incorporated a Poisson noise term that modeled the observed noise while suppressing its contribution by penalizing the Poisson log-likelihood function. Three experiments were performed to evaluate the proposed CS recovery algorithm: a simulation study, a phantom study, and six patient studies. The simulation dataset comprised six disks of various sizes in a uniform background with an activity concentration of 5:1. The simulated image was multiplied by the system matrix to obtain the corresponding sinogram, and then Poisson noise was added. The resulting sinogram was masked to create the effect of partial detector removal, and then the proposed CS algorithm was applied to recover the missing PET data. In addition, different levels of noise were simulated to assess the performance of the proposed algorithm. For the phantom study, an IEC phantom with six internal spheres, each filled with F-18 at an activity-to-background ratio of 10:1, was used. The phantom was imaged twice on an RX PET/CT scanner: once with all detectors operational (baseline) and once with four detector blocks (11%) turned off at each of 0°, 90°, 180°, and 270° (partially sampled). The partially acquired sinograms were then recovered using the proposed algorithm. For the third test, PET images

  8. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  9. Monitoring the influence of compression therapy on pathophysiology and structure of a swine scar model using multispectral imaging system

    NASA Astrophysics Data System (ADS)

    Ghassemi, Pejhman; Travis, Taryn E.; Shuppa, Jeffrey W.; Moffatt, Lauren T.; Ramella-Romana, Jessica C.

    2014-03-01

    Scar contractures can lead to significant reduction in function, inhibit patients from returning to work or participating in leisure activities, and can even render them unable to care for themselves. Compression therapy has long been a standard treatment for scar prevention, but, due to the lack of quantifiable metrics of scar formation, scant evidence exists for its efficacy. We have recently introduced a multispectral imaging system to quantify the pathophysiology (hemoglobin, blood oxygenation, melanin, etc.) and structural features (roughness and collagen matrix) of scar. In this study, hypertrophic scars are monitored in vivo in a porcine model using the imaging system to investigate the influence of compression therapy on scar quality.

  10. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    PubMed

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper, a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation, and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients changes in WBCT because it has more directions; the resulting differences in parent–child relationships are handled by a repositioning algorithm. The repositioned coefficients are then quantized. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method has been tested with fMRI images; the results indicated that the processing time of the proposed method is less than that of existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method also yields better compression performance than the wavelet-based SPECK coder. The objective results showed that the proposed method can attain a good compression ratio while maintaining a peak signal-to-noise ratio value above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all

  11. Turbulent eddies in a compressible jet in crossflow measured using pulse-burst particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Beresh, Steven J.; Wagner, Justin L.; Henfling, John F.; Spillers, Russell W.; Pruett, Brian O. M.

    2016-02-01

    Pulse-burst Particle Image Velocimetry (PIV) has been employed to acquire time-resolved data at 25 kHz of a supersonic jet exhausting into a subsonic compressible crossflow. Data were acquired along the windward boundary of the jet shear layer and used to identify turbulent eddies as they convect downstream in the far-field of the interaction. Eddies were found to have a tendency to occur in closely spaced counter-rotating pairs and are routinely observed in the PIV movies, but the variable orientation of these pairs makes them difficult to detect statistically. Correlated counter-rotating vortices are more strongly observed to pass by at a larger spacing, both leading and trailing the reference eddy. This indicates the paired nature of the turbulent eddies and the tendency for these pairs to recur at repeatable spacing. Velocity spectra reveal a peak at a frequency consistent with this larger spacing between shear-layer vortices rotating with identical sign. The spatial scale of these vortices appears similar to previous observations of compressible jets in crossflow. Super-sampled velocity spectra to 150 kHz reveal a power-law dependency of -5/3 in the inertial subrange as well as a -1 dependency at lower frequencies attributed to the scales of the dominant shear-layer eddies.

  12. Turbulent eddies in a compressible jet in crossflow measured using pulse-burst particle image velocimetry

    SciTech Connect

    Beresh, Steven J.; Wagner, Justin L.; Henfling, John F.; Spillers, Russell Wayne; Pruett, Brian Owen Matthew

    2016-01-01

    Pulse-burst Particle Image Velocimetry (PIV) has been employed to acquire time-resolved data at 25 kHz of a supersonic jet exhausting into a subsonic compressible crossflow. Data were acquired along the windward boundary of the jet shear layer and used to identify turbulent eddies as they convect downstream in the far-field of the interaction. Eddies were found to have a tendency to occur in closely spaced counter-rotating pairs and are routinely observed in the PIV movies, but the variable orientation of these pairs makes them difficult to detect statistically. Correlated counter-rotating vortices are more strongly observed to pass by at a larger spacing, both leading and trailing the reference eddy. This indicates the paired nature of the turbulent eddies and the tendency for these pairs to recur at repeatable spacing. Velocity spectra reveal a peak at a frequency consistent with this larger spacing between shear-layer vortices rotating with identical sign. The spatial scale of these vortices appears similar to previous observations of compressible jets in crossflow. Furthermore, super-sampled velocity spectra to 150 kHz reveal a power-law dependency of –5/3 in the inertial subrange as well as a –1 dependency at lower frequencies attributed to the scales of the dominant shear-layer eddies.

  13. Turbulent eddies in a compressible jet in crossflow measured using pulse-burst particle image velocimetry

    DOE PAGES

    Beresh, Steven J.; Wagner, Justin L.; Henfling, John F.; Spillers, Russell Wayne; Pruett, Brian Owen Matthew

    2016-01-01

    Pulse-burst Particle Image Velocimetry (PIV) has been employed to acquire time-resolved data at 25 kHz of a supersonic jet exhausting into a subsonic compressible crossflow. Data were acquired along the windward boundary of the jet shear layer and used to identify turbulent eddies as they convect downstream in the far-field of the interaction. Eddies were found to have a tendency to occur in closely spaced counter-rotating pairs and are routinely observed in the PIV movies, but the variable orientation of these pairs makes them difficult to detect statistically. Correlated counter-rotating vortices are more strongly observed to pass by at a larger spacing, both leading and trailing the reference eddy. This indicates the paired nature of the turbulent eddies and the tendency for these pairs to recur at repeatable spacing. Velocity spectra reveal a peak at a frequency consistent with this larger spacing between shear-layer vortices rotating with identical sign. The spatial scale of these vortices appears similar to previous observations of compressible jets in crossflow. Furthermore, super-sampled velocity spectra to 150 kHz reveal a power-law dependency of –5/3 in the inertial subrange as well as a –1 dependency at lower frequencies attributed to the scales of the dominant shear-layer eddies.

  14. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest because they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is used to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
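
    The evaluation metric used throughout, SROCC, is simply the Spearman rank correlation between a metric's predictions and the subjective scores; a minimal example follows (the numbers are toy values, not taken from any of the cited databases).

```python
# Compute SROCC between subjective and predicted quality scores.
from scipy.stats import spearmanr

subjective_scores = [4.5, 3.8, 2.1, 4.0, 1.5, 3.2]        # e.g. MOS/DMOS values
predicted_scores  = [0.91, 0.75, 0.40, 0.70, 0.22, 0.60]   # metric outputs

srocc, p_value = spearmanr(subjective_scores, predicted_scores)
print(f"SROCC = {srocc:.3f} (p = {p_value:.3g})")
```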

  15. Digital image correlation used to analyze a brick under compression test

    NASA Astrophysics Data System (ADS)

    Saldaña Heredia, Alonso; Márquez Aguilar, Pedro A.; Molina Ocampo, Arturo; Zamudio Lara, Álvaro

    2015-08-01

    In mechanics of materials it is important to know the stress-strain relation of each material in order to understand its behaviour under different loads. Brick is one of the most widely used materials in structural mechanics and is constantly under load. This work is implemented using a single beam and the speckles created by its reflection. Noninvasive strain-field measurement techniques are needed in order to sense rubber-like materials. We present an experimental approach that describes the mechanical behavior of structural materials under compression tests performed in a universal testing machine. In this work we evaluate the displacement field obtained by digital image correlation, which allows us to follow the heterogeneous strain field evolution observed during these tests.
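
    The basic digital image correlation step can be sketched as locating a small reference subset inside the deformed image by cross-correlation of zero-mean intensities and reading off its displacement; practical DIC adds local normalization, subpixel interpolation, and subset shape functions. The speckle images and imposed shift below are synthetic.

```python
# Toy DIC subset matching by zero-mean cross-correlation (illustrative only).
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(8)
ref = rng.random((64, 64))                      # speckle-like reference image
shift = (3, 5)                                  # imposed displacement (rows, cols)
deformed = np.roll(ref, shift, axis=(0, 1))

subset = ref[20:36, 20:36]                      # 16x16 subset around a point
zs = subset - subset.mean()
zd = deformed - deformed.mean()
corr = correlate2d(zd, zs, mode='valid')
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("estimated displacement:", (peak[0] - 20, peak[1] - 20))   # approx. (3, 5)
```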

  16. Classification of vertebral compression fractures in magnetic resonance images using spectral and fractal analysis.

    PubMed

    Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H

    2015-08-01

    Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies comprising trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a system for computer-aided diagnosis to help in the differentiation between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant fractures and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest neighbor classifier with the Euclidean distance. The results show that combinations of features derived from Fourier and wavelet transforms, together with the fractal dimension, were able to achieve a correct classification rate of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95. PMID:26736364
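
    A sketch of the classification stage described above: a k-nearest-neighbour classifier with Euclidean distance separating benign from malignant fractures on the basis of precomputed feature vectors. The features and labels here are random placeholders, not the study's data, and k = 5 is an arbitrary choice.

```python
# kNN classification of feature vectors (placeholder data, not the study's).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.standard_normal((73, 10))          # 73 vertebral bodies x 10 features
y = rng.integers(0, 2, size=73)            # 0 = benign, 1 = malignant (fake labels)

knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
scores = cross_val_score(knn, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```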

  17. Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing

    PubMed Central

    Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.

    2013-01-01

    Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285

  18. Near-lossless image compression by adaptive prediction: new developments and comparison of algorithms

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano

    2003-01-01

    This paper describes state-of-the-art approaches to near-lossless image compression by adaptive causal DPCM and presents two advanced schemes based on crisp and fuzzy switching of predictors, respectively. The former relies on linear-regression prediction in which a different predictor is employed for each image block. Such block-representative predictors are calculated from the original data set through an iterative relaxation-labeling procedure. Coding times are affordable thanks to the fast convergence of training. Decoding is always performed in real time. The latter is also based on adaptive MMSE prediction, in which a different predictor at each pixel position is obtained by blending a number of prototype predictors through adaptive weights calculated from the past decoded samples. Quantization error feedback loops are introduced into the basic lossless encoders to enable user-defined upper-bounded reconstruction errors. Both schemes exploit context modeling of prediction errors followed by arithmetic coding to enhance entropy coding performance. A thorough performance comparison on a wide test image set shows the superiority of the proposed schemes over both up-to-date encoders in the literature and new/upcoming standards.
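
    The quantization-error feedback idea can be shown with a deliberately simple previous-sample predictor: because the prediction is formed from decoded samples and the residual is quantized with step 2*delta+1, the reconstruction error of every sample stays within +/- delta. The adaptive, block- or pixel-switched predictors and the entropy coder of the paper are not reproduced here.

```python
# Near-lossless DPCM with quantization-error feedback (toy predictor).
import numpy as np

def near_lossless_dpcm(signal, delta):
    step = 2 * delta + 1
    decoded = np.zeros_like(signal)
    prev = 0
    residual_indices = []
    for i, x in enumerate(signal):
        pred = prev                                   # previous *decoded* sample
        q = int(np.round((x - pred) / step))          # quantized residual index
        rec = pred + q * step
        residual_indices.append(q)                    # these would be entropy coded
        decoded[i] = rec
        prev = rec                                    # feedback of decoded value
    return np.array(residual_indices), decoded

x = np.array([100, 102, 105, 104, 110, 130, 131, 129])
idx, x_hat = near_lossless_dpcm(x, delta=2)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))   # <= 2 by design
```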

  19. Investigation of the possibility of gamma-ray diagnostic imaging of target compression at NIF

    PubMed Central

    Lemieux, Daniel A.; Baudet, Camille; Grim, Gary P.; Barber, H. Bradford; Miller, Brian W.; Fasje, David; Furenlid, Lars R.

    2013-01-01

    The National Ignition Facility at Lawrence Livermore National Laboratory is the world’s leading facility for studying the physics of igniting plasmas. Plasmas of hot deuterium and tritium undergo d(t,n)α reactions that produce a 14.1 MeV neutron and a 3.5 MeV α particle in the center of mass. As these neutrons pass through the materials surrounding the hot core, they may undergo subsequent (n,x) reactions. For example, 12C(n,n’γ)12C reactions occur in remnant debris from the polymer ablator, resulting in a significant fluence of 4.44 MeV gamma-rays. Imaging of these gammas will enable the determination of the volumetric size and symmetry of the ablation; a large size and high asymmetry are expected to correlate with poor compression and lower fusion yield. Results from a gamma-ray imaging system are expected to be complementary to a neutron imaging diagnostic system already in place at the NIF. This paper describes initial efforts to design a gamma-ray imaging system for the NIF using the existing neutron imaging system as a baseline for study. Due to the cross-section and the expected range of ablator areal densities, the gamma flux should be approximately 10−3 of the neutron flux. For this reason, care must be taken to maximize the efficiency of the gamma-ray imaging system because it will be gamma starved. As with the neutron imager, the use of pinholes and/or coded apertures is anticipated. Along with the aperture and detector design, the selection of an appropriate scintillator is discussed. The volume of energy deposition of the interacting 4.44 MeV gamma-rays is a critical parameter limiting the imaging system's spatial resolution. The volume of energy deposition is simulated with GEANT4, and plans to measure it experimentally are described. Results of tests on a pixellated LYSO scintillator are also presented. PMID:23420688

  20. Investigation of the possibility of gamma-ray diagnostic imaging of target compression at NIF.

    PubMed

    Lemieux, Daniel A; Baudet, Camille; Grim, Gary P; Barber, H Bradford; Miller, Brian W; Fasje, David; Furenlid, Lars R

    2011-09-23

    The National Ignition Facility at Lawrence Livermore National Laboratory is the world's leading facility for studying the physics of igniting plasmas. Plasmas of hot deuterium and tritium undergo d(t,n)α reactions that produce a 14.1 MeV neutron and a 3.5 MeV α particle in the center of mass. As these neutrons pass through the materials surrounding the hot core, they may undergo subsequent (n,x) reactions. For example, (12)C(n,n'γ)(12)C reactions occur in remnant debris from the polymer ablator, resulting in a significant fluence of 4.44 MeV gamma-rays. Imaging of these gammas will enable the determination of the volumetric size and symmetry of the ablation; a large size and high asymmetry are expected to correlate with poor compression and lower fusion yield. Results from a gamma-ray imaging system are expected to be complementary to a neutron imaging diagnostic system already in place at the NIF. This paper describes initial efforts to design a gamma-ray imaging system for the NIF using the existing neutron imaging system as a baseline for study. Due to the cross-section and the expected range of ablator areal densities, the gamma flux should be approximately 10(-3) of the neutron flux. For this reason, care must be taken to maximize the efficiency of the gamma-ray imaging system because it will be gamma starved. As with