Sample records for image compression techniques

  1. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  2. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image for a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  3. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
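
    The noise-limit argument above lends itself to a quick numerical check. Below is a minimal NumPy sketch; the Gaussian "equivalent noise bits" approximation (log2(sigma) + 1.79) and the median-absolute-difference noise estimator are standard simplifications, not necessarily the exact procedure implemented in fpack.

      import numpy as np

      def predicted_lossless_ratio(image, bitpix=16):
          # Adjacent-pixel differences give a robust noise estimate: for
          # Gaussian noise, |diff| has median 0.6745 * sigma * sqrt(2).
          diff = np.diff(image.astype(np.float64), axis=1)
          sigma = np.median(np.abs(diff)) / (0.6745 * np.sqrt(2.0))
          # Equivalent bits of noise per pixel; only the remaining
          # (deterministic) bits are compressible.
          noise_bits = max(np.log2(sigma) + 1.79, 1.0)
          return bitpix / noise_bits

      rng = np.random.default_rng(0)
      ramp = np.linspace(1000.0, 2000.0, 512)[None, :].repeat(512, axis=0)
      img = np.round(ramp + rng.normal(0.0, 8.0, (512, 512))).astype(np.int32)
      print("predicted ceiling ~ %.1f:1" % predicted_lossless_ratio(img))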

  4. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  5. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
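
    The entropy-lowering effect of the remapping stage is easy to reproduce. A small NumPy sketch, using plain horizontal differencing as the remapper (only the simplest of the predictors the paper evaluates; the segmentation and arithmetic-coding stages are omitted):

      import numpy as np

      def entropy_bits(a):
          # First-order entropy in bits per sample.
          _, counts = np.unique(a, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(1)
      # Synthetic smooth 8-bit scene: a bounded random walk along each row.
      img = np.cumsum(rng.integers(-2, 3, (256, 256)), axis=1) % 256

      raw = entropy_bits(img)
      remapped = entropy_bits(np.diff(img.astype(np.int16), axis=1))
      print("entropy: %.2f -> %.2f bits/pixel" % (raw, remapped))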

  6. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  7. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

  8. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  9. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  10. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  11. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  12. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
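
    The DCT-plus-quantization-matrix mechanics this patent builds on can be sketched compactly with NumPy and SciPy. The matrix below is the familiar JPEG luminance table, standing in for the image-adapted, visually weighted matrix the invention actually derives, and the orthonormal DCT used here differs slightly in scaling from the JPEG baseline:

      import numpy as np
      from scipy.fft import dctn, idctn

      # Standard JPEG luminance table (illustrative stand-in only).
      Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
                    [12, 12, 14, 19, 26, 58, 60, 55],
                    [14, 13, 16, 24, 40, 57, 69, 56],
                    [14, 17, 22, 29, 51, 87, 80, 62],
                    [18, 22, 37, 56, 68, 109, 103, 77],
                    [24, 35, 55, 64, 81, 104, 113, 92],
                    [49, 64, 78, 87, 103, 121, 120, 101],
                    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

      def quantize_block(block, Q):
          # Larger Q entries discard more of the less visible
          # high-frequency detail.
          return np.round(dctn(block - 128.0, norm='ortho') / Q).astype(int)

      def reconstruct_block(qc, Q):
          return idctn(qc * Q, norm='ortho') + 128.0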

  13. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  14. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  15. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  16. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  17. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to reduce significantly the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have a high compression ratio for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the Signal to Noise Ratio (SNR) by exploitation of correlations within the source signal, a method employing differential pulse code modulation (DPCM) is presented.
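
    A minimal DPCM round trip, assuming NumPy; the previous-sample predictor below is the simplest choice, and the entropy coding of the residuals is left out:

      import numpy as np

      def dpcm_encode(row):
          # Residuals of a previous-sample predictor along a scan line.
          res = row.copy()
          res[1:] = row[1:] - row[:-1]
          return res

      def dpcm_decode(res):
          # Exact inverse with integer arithmetic, hence lossless.
          return np.cumsum(res)

      row = np.array([100, 101, 103, 103, 102, 110], dtype=np.int32)
      assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)
      # The residuals cluster near zero, which is what makes them
      # cheaper to entropy-code than the raw samples.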

  18. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image and creating a filled edge array of pixels: each pixel in the filled edge array that corresponds to an edge pixel is given a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel is given a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels. The filled edge array is subtracted from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
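
    The filling step, solving Laplace's equation with edge pixels as fixed boundary values, can be sketched with plain Jacobi relaxation in place of the patent's multi-grid solver (assuming NumPy; np.roll makes the borders periodic, which a real implementation would handle explicitly):

      import numpy as np

      def fill_from_edges(image, edge_mask, iters=2000):
          # Edge pixels are held fixed (Dirichlet boundary); every other
          # pixel relaxes toward the mean of its four neighbours.
          f = np.where(edge_mask, image, image.mean()).astype(np.float64)
          for _ in range(iters):
              avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                            + np.roll(f, 1, 1) + np.roll(f, -1, 1))
              f = np.where(edge_mask, image, avg)
          return f

      # difference = image - fill_from_edges(image, edges); the edge file
      # and this low-energy difference array are then coded separately.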

  19. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image and creating a filled edge array of pixels: each pixel in the filled edge array that corresponds to an edge pixel is given a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel is given a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels. The filled edge array is subtracted from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  20. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
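
    The first method's idea, that prediction residuals double as a texture descriptor, can be sketched as follows (NumPy; a previous-pixel predictor and an L1 histogram distance stand in for the paper's actual predictor and matching rule):

      import numpy as np

      def residual_signature(img, bins=65):
          # Histogram of prediction residuals, usable as an index key
          # without decoding the image back to pixels.
          res = np.diff(img.astype(np.int32), axis=1).ravel()
          h, _ = np.histogram(res, bins=bins, range=(-260, 260), density=True)
          return h

      def signature_distance(a, b):
          return float(np.abs(a - b).sum())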

  1. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  2. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performing image data compression technique is currently being developed for space science applications under the requirement of high-speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyper-spectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of development.

  3. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  4. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  5. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchal inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.

  6. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the area of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.

  7. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images which will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table-lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ: it involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
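
    The claim that VQ decompression is a mere table lookup is worth making concrete; with a codebook in hand, decoding is a single indexing operation (NumPy sketch with synthetic data):

      import numpy as np

      rng = np.random.default_rng(0)
      codebook = rng.integers(0, 256, (256, 16), dtype=np.uint8)  # 256 codewords
      indices = rng.integers(0, 256, 4096)   # indices received from the archive
      blocks = codebook[indices]             # the entire decompression step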

  8. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  9. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
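
    The subtractive-dither quantization described here is simple to sketch (NumPy; the per-tile scaling and the subsequent Rice coding are omitted):

      import numpy as np

      def quantize_dither(data, sigma, q=4.0, seed=0):
          # Step sigma/q gives q quantization levels per noise sigma; the
          # dither randomizes rounding so photometry stays unbiased.
          dither = np.random.default_rng(seed).uniform(-0.5, 0.5, data.shape)
          return np.round(data / (sigma / q) + dither).astype(np.int64)

      def restore(ints, sigma, q=4.0, seed=0):
          # Regenerate the identical dither from the stored seed, subtract.
          dither = np.random.default_rng(seed).uniform(-0.5, 0.5, ints.shape)
          return (ints - dither) * (sigma / q)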

  10. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
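
    One plausible reading of the DWT+DCT cascade, sketched with PyWavelets and SciPy; the zero-padding substitute for thresholding/quantization that the paper proposes is not reproduced, and simple coefficient selection is used instead:

      import numpy as np
      import pywt                       # PyWavelets
      from scipy.fft import dctn, idctn

      def hybrid_dwt_dct(image, keep=0.05):
          # One DWT level splits off the detail bands; a DCT then further
          # compacts the smooth LL band, of which only the largest
          # `keep` fraction of coefficients is retained.
          LL, details = pywt.dwt2(image.astype(np.float64), 'haar')
          c = dctn(LL, norm='ortho')
          c[np.abs(c) < np.quantile(np.abs(c), 1.0 - keep)] = 0.0
          return c, details

      def inverse_hybrid(c, details):
          return pywt.idwt2((idctn(c, norm='ortho'), details), 'haar')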

  11. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images, as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...

  12. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates have become necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  13. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, because of its lower complexity and better compression ratios than the lossless JPEG standard. But it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression ratio compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to decrease the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks which include all or part of the region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  14. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications which require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires the use of communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). In the literature, several encryption and compression techniques can be found. In order to use optical correlation, encryption and compression techniques cannot be deployed independently and in a cascaded manner; otherwise, the system suffers from two major problems. First, we cannot simply use these techniques in cascade without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because the correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists of multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; a random phase key is widely used in optical encryption approaches. Finally, many simulations have been conducted, and the results obtained corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.

  15. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

    Lempel-Ziv-Welch (LZW) Compression ... Lossless Compression Test Results ... Exact ... since IBM holds the patent for this technique. The LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a ...

  16. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, though when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  17. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have always been applied to intra-plane images, and coding fidelity is the usual measure of the performance of intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high measure of compactness in the data representation and compression.

  18. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable-rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bits/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
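
    Classic two-level BTC, the basis of the adaptive variant described, fits in a few lines (NumPy; the SAR-specific variable-rate adaptation is not shown). Each block is sent as a 1-bit-per-pixel bitmap plus two reconstruction levels chosen to preserve the block mean and variance:

      import numpy as np

      def btc_block(block):
          m, s = block.mean(), block.std()
          bitmap = block >= m                   # 1 bit per pixel
          q, n = int(bitmap.sum()), block.size
          if q in (0, n):                       # flat block: single level
              return bitmap, m, m
          lo = m - s * np.sqrt(q / (n - q))     # level for pixels below mean
          hi = m + s * np.sqrt((n - q) / q)     # level for pixels above mean
          return bitmap, lo, hi

      def btc_reconstruct(bitmap, lo, hi):
          return np.where(bitmap, hi, lo)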

  19. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block-adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block-adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
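
    The block-adaptive inter-plane decorrelation can be sketched in the pixel domain (NumPy; the paper performs the equivalent operations in the DCT domain and follows with JPEG, both omitted here, and the variance test is a stand-in for a true rate measure):

      import numpy as np

      def decorrelate_planes(plane, printed_plane, bs=8):
          # Per block, keep the raw data or its difference from the
          # co-located block of an already-printed plane, whichever has
          # lower variance. Assumes dimensions divisible by bs.
          out = plane.astype(np.int32).copy()
          mode = np.zeros((plane.shape[0] // bs, plane.shape[1] // bs), bool)
          for i in range(0, plane.shape[0], bs):
              for j in range(0, plane.shape[1], bs):
                  blk = plane[i:i+bs, j:j+bs].astype(np.int32)
                  diff = blk - printed_plane[i:i+bs, j:j+bs]
                  if diff.var() < blk.var():
                      out[i:i+bs, j:j+bs] = diff
                      mode[i // bs, j // bs] = True   # flag for the decoder
          return out, mode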

  20. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  1. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  2. Pulse compression favourable aperiodic infrared imaging approach for non-destructive testing and evaluation of bio-materials

    NASA Astrophysics Data System (ADS)

    Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath

    2017-05-01

    In recent years, aperiodic, transient, pulse compression favourable infrared imaging methodologies have been demonstrated as reliable, quantitative, remote characterization and evaluation techniques for the testing and evaluation of various biomaterials. The present work demonstrates a pulse compression favourable aperiodic thermal wave imaging technique, frequency modulated thermal wave imaging, for bone diagnostics, especially by considering the bone with tissue, skin and muscle over-layers. In order to assess the capability of the proposed frequency modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure, finite element modeling and simulation studies have been carried out. Further, frequency- and time-domain post-processing approaches have been adopted on the temporal temperature data in order to improve the detection capabilities of frequency modulated thermal wave imaging.

  3. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
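
    The encoder side of such a scheme can be sketched with a generalized-Lloyd (k-means) codebook, assuming NumPy; the difference-mapped shift-extended Huffman stage is not shown:

      import numpy as np

      def train_codebook(vectors, k=256, iters=20, seed=0):
          # `vectors`: one row per pixel location, one column per channel.
          rng = np.random.default_rng(seed)
          book = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
          for _ in range(iters):
              # Nearest codeword per vector (O(N*k) memory; fine for a demo).
              d = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
              idx = d.argmin(axis=1)
              for j in range(k):
                  members = vectors[idx == j]
                  if len(members):
                      book[j] = members.mean(axis=0)
          return book, idx

      # For a 7-channel cube: vectors = cube.reshape(7, -1).T, after which
      # each 7-sample vector is replaced by a single codebook index.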

  4. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model ... also compressed these images again since some of the steganography methods are double-compressing the images. Stego-images are generated by embedding ... randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined

  5. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  6. Compact storage of medical images with patient information.

    PubMed

    Acharya, R; Anand, D; Bhat, S; Niranjan, U C

    2001-12-01

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code-modulation and adaptive-delta-modulation techniques are employed for data compression and encryption, and results are tabulated for a specific example.
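
    As a rough sketch of the differential pulse-code-modulation idea mentioned above (generic first-order DPCM, not the paper's exact scheme; the function names and quantization step are assumptions):

      import numpy as np

      def dpcm_encode(signal, q_step=4):
          # first-order DPCM: quantize the error between sample and prediction
          signal = np.asarray(signal, dtype=np.int64)
          pred = 0
          codes = np.empty(len(signal), dtype=np.int64)
          for i, s in enumerate(signal):
              e = s - pred
              codes[i] = np.round(e / q_step)          # quantized residual
              pred = pred + codes[i] * q_step          # track decoder-side value
          return codes

      def dpcm_decode(codes, q_step=4):
          # reconstruction is just the running sum of dequantized residuals
          return np.cumsum(np.asarray(codes) * q_step)

    Because the residuals cluster near zero, they compress far better than the raw samples when handed to an entropy coder.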

  7. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  8. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  9. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimization of the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years neural network has emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields a good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we have used the backward error propagation algorithm (BEP) to quickly obtain the initial weights which are then used to speedup the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves a good coding performance in a shorter training time. Our simulation results demonstrate the potential gains using the proposed technique.
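
    A minimal sketch of the SOFM codebook update, assuming a 1D map that accepts externally supplied initial weights (the paper seeds these from backprop-trained weights); all names and decay schedules here are illustrative:

      import numpy as np

      def train_sofm(vectors, k=64, epochs=5, lr0=0.5, init=None, seed=0):
          rng = np.random.default_rng(seed)
          # initial codebook: supplied weights (e.g. from BEP) or random samples
          w = init.copy() if init is not None else vectors[rng.choice(len(vectors), k)]
          idx = np.arange(k)
          t, T = 0, epochs * len(vectors)
          for _ in range(epochs):
              for x in vectors[rng.permutation(len(vectors))]:
                  bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
                  frac = 1.0 - t / T                            # decaying schedules
                  sigma = max(1e-3, (k / 4.0) * frac)
                  h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
                  w += (lr0 * frac * h)[:, None] * (x - w)      # neighborhood update
                  t += 1
          return w

    Starting from good initial weights lets the neighborhood radius and learning rate shrink sooner, which is exactly the speed-up the BEP-SOFM combination is after.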

  10. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  11. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    Abstract. The open accessibility of Internet-based medical images in teleradiology poses security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques including Lempel-Ziv-Welch (LZW) were tested for watermark compression. The performances of these techniques were compared in terms of bit reduction and compression ratio. LZW was found to perform better than the others and was used in the development of the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
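
    Since LZW is singled out above, a textbook LZW encoder sketch (illustrative only; the scheme's actual ROI hashing and LSB-embedding pipeline is not reproduced here):

      def lzw_encode(data: bytes):
          # textbook LZW: emit integer codes; the dictionary grows as phrases repeat
          table = {bytes([i]): i for i in range(256)}
          phrase, out = b"", []
          for byte in data:
              candidate = phrase + bytes([byte])
              if candidate in table:
                  phrase = candidate            # keep extending the current phrase
              else:
                  out.append(table[phrase])     # emit code for the known prefix
                  table[candidate] = len(table) # register the new phrase
                  phrase = bytes([byte])
          if phrase:
              out.append(table[phrase])
          return out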

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. Among the many types of compression techniques, JPEG and JPEG2000 are the most popular. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed-data probabilities using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
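
    A hedged sketch of step (1) only, using a hand-rolled one-level 2D Haar DWT applied twice and SciPy's DCT on the resulting low-frequency band; the DC-/AC-Matrix split, Minimize-Matrix-Size, and FMS stages are paper-specific and not reproduced:

      import numpy as np
      from scipy.fft import dctn

      def haar2(a):
          # one level of a 2D Haar DWT; returns (LL, (LH, HL, HH))
          a = a.astype(np.float64)
          s = (a[0::2] + a[1::2]) / np.sqrt(2)   # row sums
          d = (a[0::2] - a[1::2]) / np.sqrt(2)   # row differences
          ll = (s[:, 0::2] + s[:, 1::2]) / np.sqrt(2)
          lh = (s[:, 0::2] - s[:, 1::2]) / np.sqrt(2)
          hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
          hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
          return ll, (lh, hl, hh)

      img = np.random.rand(256, 256)        # stand-in image with even dimensions
      ll1, hi1 = haar2(img)                 # first DWT level
      ll2, hi2 = haar2(ll1)                 # second DWT level
      dc_matrix = dctn(ll2, norm='ortho')   # DCT applied to the low-frequency band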

  13. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
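
    To make the non-linear reconstruction step concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for the l1-regularized recovery problem underlying many CS reconstructions; the measurement operator and parameters are toy assumptions, not any particular scanner pipeline:

      import numpy as np

      def ista(A, y, lam=0.1, iters=200):
          # solves min_x 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
              z = x - g / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((64, 256))         # undersampled measurement operator
      x_true = np.zeros(256)
      x_true[rng.choice(256, 8, replace=False)] = 1.0   # sparse ground truth
      x_hat = ista(A, A @ x_true, lam=0.05)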

  14. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
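
    A sketch of one level of the H-transform on 2x2 blocks, kept in integer arithmetic with divisions deferred so that the lossless mode stays exactly reversible; scaling conventions vary between descriptions of the transform, so treat this as illustrative:

      import numpy as np

      def h_transform_level(a):
          # one level over 2x2 blocks; a must have even dimensions
          a = a.astype(np.int64)
          b00, b01 = a[0::2, 0::2], a[0::2, 1::2]
          b10, b11 = a[1::2, 0::2], a[1::2, 1::2]
          h0 = b00 + b01 + b10 + b11       # sums: the coarser image, recursed on
          hx = b00 + b01 - b10 - b11       # horizontal differences
          hy = b00 - b01 + b10 - b11       # vertical differences
          hc = b00 - b01 - b10 + b11       # cross differences
          return h0, hx, hy, hc

    In lossy mode the difference bands, which mostly carry noise for the flat astronomical backgrounds described above, are quantized or discarded; in lossless mode everything is kept and the integer transform inverts exactly.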

  15. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed with only chest CT images in mind, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352,787 images), each image was preprocessed using the modified technique. Radiologists visually confirmed whether the segmented region covered the body region. The images with and without preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) for JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without degrading diagnostic information.

  16. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  17. In vivo optical elastography: stress and strain imaging of human skin lesions

    NASA Astrophysics Data System (ADS)

    Es'haghian, Shaghayegh; Gong, Peijun; Kennedy, Kelsey M.; Wijesinghe, Philip; Sampson, David D.; McLaughlin, Robert A.; Kennedy, Brendan F.

    2015-03-01

    Probing the mechanical properties of skin at high resolution could aid in the assessment of skin pathologies by, for example, detecting the extent of cancerous skin lesions and assessing pathology in burn scars. Here, we present two elastography techniques based on optical coherence tomography (OCT) to probe the local mechanical properties of skin. The first technique, optical palpation, is a high-resolution tactile imaging technique, which uses a compliant silicone layer positioned on the tissue surface to measure spatially resolved stress imparted by compressive loading. We assess the performance of optical palpation, using a handheld imaging probe on a skin-mimicking phantom, and demonstrate its use on human skin. The second technique, phase-sensitive compression optical coherence elastography (OCE), is a strain imaging technique that maps depth-resolved mechanical variations within skin. We show preliminary results of in vivo phase-sensitive compression OCE on a human skin lesion.

  18. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
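
    A minimal sketch of DCT-domain quantization with a perceptual matrix; qmatrix stands in for the luminance/contrast-masking matrix the patent derives (a JPEG-style 8x8 table could be substituted), and the function and variable names are hypothetical:

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, qmatrix):
          # larger qmatrix entries discard more visually insignificant detail
          coeffs = dctn(block - 128.0, norm='ortho')
          q = np.round(coeffs / qmatrix)                    # quantization step
          recon = idctn(q * qmatrix, norm='ortho') + 128.0  # decoder-side view
          return q, recon

      block = np.random.rand(8, 8) * 255.0
      # toy matrix: coarser quantization toward higher frequencies
      qmatrix = 16.0 + 8.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])
      q, recon = quantize_block(block, qmatrix)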

  19. Steganographic optical image encryption system based on reversible data hiding and double random phase encoding

    NASA Astrophysics Data System (ADS)

    Chuang, Cheng-Hung; Chen, Yen-Lin

    2013-02-01

    This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. The steganographic optical image encryption system based on the DRPE technique has been investigated to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data-hiding methods used in DRPE systems can distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process. Thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data-embedding capacity and lossless reconstruction of the original images.

  20. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high-performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
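
    A rough sketch of the bit-plane encoding stage: emitting magnitude planes most-significant first yields an embedded stream that can be cut at exactly the desired rate. Real coders entropy-code these planes rather than emitting them raw; names and the plane count here are assumptions:

      import numpy as np

      def bitplane_encode(coeffs, num_planes=8):
          # sign bits first, then magnitude planes from MSB down to LSB
          mags = np.abs(coeffs).astype(np.uint32)   # fractions truncated (sketch)
          stream = [np.signbit(coeffs).astype(np.uint8).ravel()]
          for p in range(num_planes - 1, -1, -1):
              stream.append(((mags >> p) & 1).astype(np.uint8).ravel())
          return np.concatenate(stream)

    Truncating the returned array after any plane gives a coarser but still decodable approximation, which is what makes the bit string embedded.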

  1. An image compression survey and algorithm switching based on scene activity

    NASA Technical Reports Server (NTRS)

    Hart, M. M.

    1985-01-01

    Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.

  2. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) to millimeter (mm) range. When used in interactive telemedicine applications, these raw images require very large storage and therefore a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit them efficiently. In addition, recent studies have raised concerns about the risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  3. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  4. Volume and tissue composition preserving deformation of breast CT images to simulate breast compression in mammographic imaging

    NASA Astrophysics Data System (ADS)

    Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2009-02-01

    Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio, then deformed again using distortion ratios varying with the breast thickness, and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the mediolateral direction; the data were then re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.

  5. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). In order to assess the reliability of the NNCTC, a comparison is performed of the compression results obtained from digital astronomical images by the NNCTC and by the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, which is based on the H-transform.

  6. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing and morphological erosion and dilation operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  7. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
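
    As a concrete taste of the lossless pipeline described above (DPCM differencing followed by variable-length coding), a small Huffman-table builder; this is a generic sketch, not the article's benchmarked implementation:

      import heapq
      from collections import Counter
      import numpy as np

      def huffman_code(symbols):
          # one heap entry per symbol: [weight, tiebreak, {symbol: bitstring}]
          heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
          heapq.heapify(heap)
          tie = len(heap)
          while len(heap) > 1:
              w1, _, c1 = heapq.heappop(heap)   # merge the two rarest subtrees
              w2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + code for s, code in c1.items()}
              merged.update({s: "1" + code for s, code in c2.items()})
              heapq.heappush(heap, [w1 + w2, tie, merged])
              tie += 1
          return heap[0][2]

      # DPCM first: difference the pixel stream, then Huffman-code the residuals
      pixels = np.random.randint(0, 256, size=10000)
      residuals = np.diff(pixels).tolist()
      table = huffman_code(residuals)

    The differencing step concentrates the histogram around zero, so the Huffman codes for the common small residuals become very short; this is why the article finds every lossless coder improves after the DPCM transformation.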

  8. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  9. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because the application involves voluminous medical image data and image streams generated at interactive frame rates, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, the compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
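
    The 30 dB threshold above refers to peak signal-to-noise ratio; for reference, a minimal PSNR routine (illustrative, assuming an 8-bit peak value):

      import numpy as np

      def psnr(original, decoded, peak=255.0):
          # peak signal-to-noise ratio in dB between an image and its lossy copy
          mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)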

  10. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
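
    A hedged sketch of correlation-based column sorting: a greedy variant that keeps highly correlated columns adjacent so that JPEG2000 or H.264/AVC intraframe coding can exploit the redundancy (the authors' exact ordering rule may differ):

      import numpy as np

      def sort_columns_by_correlation(m):
          # greedy: start from column 0, then repeatedly append the unused column
          # most correlated with the one just placed
          n = m.shape[1]
          order = [0]
          remaining = set(range(1, n))
          while remaining:
              last = m[:, order[-1]]
              best = max(remaining,
                         key=lambda j: abs(np.corrcoef(last, m[:, j])[0, 1]))
              order.append(best)
              remaining.remove(best)
          return m[:, order], order

    The permutation (order) must be transmitted as side information so the decoder can restore the original column arrangement.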

  11. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  12. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time-consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  13. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
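
    For the basic idea only (not the paper's spatially varying method, which compresses separately at each fully sampled location and then aligns the virtual coils), a minimal SVD coil-compression sketch:

      import numpy as np

      def compress_coils(kspace, n_virtual):
          # kspace: (samples x channels) complex data; project onto the dominant
          # right-singular subspace to form fewer virtual channels
          U, s, Vt = np.linalg.svd(kspace, full_matrices=False)
          return kspace @ Vt.conj().T[:, :n_virtual]   # virtual-coil data

    The paper's contribution is precisely to go beyond this single global projection by computing one such compression per location along the fully sampled dimension.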

  14. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
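
    A minimal sketch of the factored representation the method relies on: a truncated PCA of the reshaped datacube, after which analyses operate on the k retained factors (the names and the plain-SVD route are assumptions, not the patent's exact procedure):

      import numpy as np

      def spectral_factors(cube, k):
          # cube: (rows, cols, channels) multivariate image
          X = cube.reshape(-1, cube.shape[2]).astype(np.float64)
          mu = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
          scores = U[:, :k] * s[:k]        # spatial factors, one image per component
          loadings = Vt[:k]                # spectral factors (principal components)
          return scores, loadings, mu      # X is approximately scores @ loadings + mu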

  15. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure in order to provide a suitable data organization, whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, yielded good percent root-mean-square difference (PRD) versus compression factor figures, for low and high compression factors, respectively. Regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved competitive for high compression factors; and JPEG2000 combined with PDS provided good performance allied to low computational complexity, all in terms of PRD versus compression factor. The proposed schemes are effective; in particular, the modified MMP algorithm is an interesting alternative to traditional SEMG encoders for isometric signals, while the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems already provide such encoders in the underlying hardware/software architecture.

  16. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.

  17. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  18. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity; at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, require minimal changes to its basic architecture, and retain its essential simplicity.
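
    For flavor, the well-known median edge detector predictor at the heart of the JPEG-LS baseline, in a standard formulation (the paper's inter-band extensions build modeling on top of predictors like this):

      def med_predict(a, b, c):
          # JPEG-LS median edge detector: a = left, b = above, c = upper-left
          if c >= max(a, b):
              return min(a, b)   # horizontal edge suspected: predict from the left
          if c <= min(a, b):
              return max(a, b)   # vertical edge suspected: predict from above
          return a + b - c       # smooth region: planar prediction

    The prediction residuals, like the DPCM residuals discussed elsewhere in these records, cluster tightly around zero and are then entropy-coded.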

  19. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  20. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.

  1. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.

  2. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  3. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.

  4. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we demonstrate, both theoretically and experimentally, a method that combines the advantages of adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and the measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
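
    For context, the conventional correlation estimate that computational ghost imaging starts from, as a toy NumPy sketch (the paper's contribution, adaptive wavelet-tree compressed sensing, replaces this brute-force average):

      import numpy as np

      def ghost_image(patterns, bucket):
          # patterns: (M, H, W) illumination speckle fields
          # bucket:   (M,) single-pixel (bucket) detector readings
          patterns = patterns.astype(np.float64)
          bucket = bucket.astype(np.float64)
          # G(x, y) = <I(x, y) * B> - <I(x, y)> * <B>
          return (patterns * bucket[:, None, None]).mean(axis=0) \
                 - patterns.mean(axis=0) * bucket.mean()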

  5. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range, and moving velocity. Compared with conventional pulsed GI lidar systems, pulse-compression GI lidar can easily achieve high single-pulse energy through the use of a long pulse without decreasing the range resolution, and the mechanism of coherent detection can eliminate the influence of stray light, which helps to improve the detection sensitivity and detection range.

  6. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  7. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs further compresses the already compressed image by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.

  8. Compressive sensing imaging through a drywall barrier at sub-THz and THz frequencies in transmission and reflection modes

    NASA Astrophysics Data System (ADS)

    Takan, Taylan; Özkan, Vedat A.; Idikut, Fırat; Yildirim, Ihsan Ozan; Şahin, Asaf B.; Altan, Hakan

    2014-10-01

    In this work, sub-terahertz imaging using compressive sensing (CS) techniques for targets placed behind a visibly opaque barrier is demonstrated both experimentally and theoretically. Using a multiplied Schottky-diode-based millimeter wave source working at 118 GHz, metal cutout targets were illuminated in both reflection and transmission configurations, with and without barriers made of drywall. In both modes the image is spatially discretized using laser-machined, 10 x 10 pixel metal apertures to demonstrate the technique of compressive sensing. The images were collected by modulating the source and measuring the transmitted flux through the apertures using a Golay cell. Experimental results were compared to simulations of the expected transmission through the metal apertures. Image quality decreases as expected when going from the non-obscured transmission case to the obscured transmission case and finally to the obscured reflection case. However, in all instances the image is recovered from measurements taken below the Nyquist rate, which demonstrates that this technique is a viable option for Through the Wall Reflection Imaging (TWRI) applications.

  9. About a method for compressing x-ray computed microtomography data

    NASA Astrophysics Data System (ADS)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats that allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); the study uses images acquired from various types of samples. This study covers parallel beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  10. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated, and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
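
    A minimal sketch of the idea (ours, with an assumed bright-background threshold and hypothetical file names, not the paper's segmentation method) separates the tissue from the empty glass like this:

    ```python
    import numpy as np
    from PIL import Image

    def compress_roi(slide_path, roi_png="tissue_roi.png", bg_jpg="background.jpg"):
        """Keep the tissue region lossless, compress the rest lossily.
        Assumes the slide has a bright, near-white background and some tissue."""
        img = Image.open(slide_path).convert("RGB")
        arr = np.asarray(img)

        mask = arr.mean(axis=2) < 220          # crude tissue mask
        rows, cols = np.where(mask)
        box = (cols.min(), rows.min(), cols.max() + 1, rows.max() + 1)

        img.crop(box).save(roi_png, optimize=True)   # lossless PNG for tissue
        img.save(bg_jpg, quality=30)                 # heavy lossy JPEG elsewhere
        return box                                   # needed to reassemble
    ```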

  11. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.
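
    For readers who want to experiment, a bare-bones wavelet coder in the same spirit (our sketch with PyWavelets, not the codec evaluated in the paper; the wavelet, decomposition depth, and kept fraction are arbitrary choices) can be written as:

    ```python
    import numpy as np
    import pywt

    def wavelet_compress(img, wavelet="bior4.4", level=4, keep=0.05):
        """Keep only the largest `keep` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)

        thresh = np.quantile(np.abs(arr), 1.0 - keep)    # global threshold
        arr[np.abs(arr) < thresh] = 0.0                  # drop small details

        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)
    ```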

  12. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  13. Compression and information recovery in ptychography

    NASA Astrophysics Data System (ADS)

    Loetgering, L.; Treffer, D.; Wilhein, T.

    2018-04-01

    Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.

  14. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  15. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. Then the receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; the amount of data transmitted is greatly reduced by the combination of compressive sensing and the FFT; and the security level of ghost imaging is improved, as assessed through ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack analyses. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  16. Alaska SAR Facility (ASF5) SAR Communications (SARCOM) Data Compression System

    NASA Technical Reports Server (NTRS)

    Mango, Stephen A.

    1989-01-01

    Described are the real-time operational requirements for translating SARCOM into a high-speed image data handler and processor that achieves the desired compression ratios, together with the selection of a suitable image data compression technique that incurs the lowest possible fidelity (information) losses and can be implemented in an algorithm placing a relatively low arithmetic load on the system.

  17. Damage assessment and residual compression strength of thick composite plates with through-the-thickness reinforcements

    NASA Technical Reports Server (NTRS)

    Smith, Barry T.

    1990-01-01

    Damage in composite materials was studied with through-the-thickness reinforcements. As a first step it was necessary to develop new ultrasonic imaging technology to better assess internal damage of the composite. A useful ultrasonic imaging technique was successfully developed to assess the internal damage of composite panels. The ultrasonic technique accurately determines the size of the internal damage. It was found that the ultrasonic imaging technique was better able to assess the damage in a composite panel with through-the-thickness reinforcements than by destructively sectioning the specimen and visual inspection under a microscope. Five composite compression-after-impact panels were tested. The compression-after-impact strength of the panels with the through-the-thickness reinforcements was almost twice that of the comparable panel without through-the-thickness reinforcement.

  18. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
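
    One common spectral decorrelation option of the kind compared here, a Karhunen-Loeve (PCA) transform along the spectral axis, can be sketched as follows (our illustration; the paper's exact transform chain and the wavelet coder that follows are not reproduced):

    ```python
    import numpy as np

    def spectral_decorrelate(cube):
        """KLT/PCA across the bands of a (bands, rows, cols) cube.
        Returns decorrelated eigen-bands plus the data needed to invert."""
        b, r, c = cube.shape
        X = cube.reshape(b, -1).astype(float)
        mean = X.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(X - mean, full_matrices=False)
        Y = U.T @ (X - mean)                # energy packs into the first bands
        return Y.reshape(b, r, c), U, mean  # invert with U @ Y + mean

    # Each eigen-band can then be handed to a 2D wavelet image coder.
    ```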

  19. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.

  20. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply singular value decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors it into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal here is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
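
    A minimal sketch of the rank-k truncation described above (ours; the choice of k controls the rate-distortion trade-off) is:

    ```python
    import numpy as np

    def svd_compress(img, k):
        """Rank-k approximation of a grayscale image via truncated SVD.
        Storage drops from m*n values to k*(m + n + 1)."""
        U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
        approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
        mse = np.mean((img - approx) ** 2)
        return approx, mse
    ```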

  1. An image registration-based technique for noninvasive vascular elastography

    NASA Astrophysics Data System (ADS)

    Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza

    2018-02-01

    Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross correlation-based methods are the most prevalent approaches for measuring the strain exerted on the vessel wall by the blood pressure. In the case of low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high displacement-measurement error. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square errors (RMSE) of the proposed method and the adaptive cross correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.

  2. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated using an iterative phase retrieval technique with the QR code. We compare this technique with two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on the classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
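
    For orientation, classical DRPE itself amounts to two random phase masks and two Fourier transforms. The sketch below is a textbook-style illustration of that baseline (not the paper's lensless, phase-retrieval variant with QR codes):

    ```python
    import numpy as np

    def _masks(shape, seed):
        rng = np.random.default_rng(seed)
        p1 = np.exp(2j * np.pi * rng.random(shape))   # input-plane mask
        p2 = np.exp(2j * np.pi * rng.random(shape))   # Fourier-plane mask
        return p1, p2

    def drpe_encrypt(f, seed=1234):
        p1, p2 = _masks(f.shape, seed)
        return np.fft.ifft2(np.fft.fft2(f * p1) * p2)

    def drpe_decrypt(g, seed=1234):
        p1, p2 = _masks(g.shape, seed)
        return np.abs(np.fft.ifft2(np.fft.fft2(g) / p2) / p1)
    ```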

  3. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  4. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  5. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

    The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. The visually lossless threshold for compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
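
    Of the two quantitative metrics, PSNR is straightforward to reproduce; a minimal sketch (ours, assuming 12-bit CT data so the peak value is 4095; HDR-VDP is a full perceptual model and is not reproduced here) is:

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=4095.0):
        """Peak signal-to-noise ratio in dB; peak=4095 for 12-bit CT data."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```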

  6. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multiple user systems raises security concerns, motivating medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to magnetic resonance (MR) and computed tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data-embedding rate while keeping distortion relatively low.

  7. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
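
    Step (1) of the pipeline, the blockwise DCT, can be sketched as follows (our illustration; the block size is an assumption, the image is taken to be a multiple of the block size, and the high-frequency minimization, look-up table, and arithmetic coding of steps (2)-(5) are not reproduced):

    ```python
    import numpy as np
    from scipy.fft import dctn

    def block_dct(img, bs=8):
        """Blockwise 2D DCT; assumes image dimensions are multiples of bs."""
        h, w = img.shape
        out = np.empty((h, w), dtype=float)
        for r in range(0, h, bs):
            for c in range(0, w, bs):
                out[r:r + bs, c:c + bs] = dctn(img[r:r + bs, c:c + bs],
                                               norm="ortho")
        return out   # DC terms sit at block origins, AC coefficients elsewhere
    ```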

  8. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in the remote-sensing and security areas.

  9. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable, so the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods can efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need to design other intelligent and efficient algorithms that are computationally less complex and provide better compression rates than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of the Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But if the number of objects in the change frames rises above a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three techniques. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
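
    The select-the-smallest-bitstream idea can be sketched for two of the three branches as follows (our toy version; zlib stands in for a real bi-level coder and the ROI branch is omitted):

    ```python
    import zlib
    import numpy as np

    def bvc_encode(prev_frame, frame):
        """Choose the cheaper of image coding and change coding for a
        bi-level frame (boolean numpy array)."""
        image_code = zlib.compress(np.packbits(frame).tobytes())

        change = np.logical_xor(prev_frame, frame)    # XOR change frame
        change_code = zlib.compress(np.packbits(change).tobytes())

        if len(change_code) < len(image_code):
            return "change", change_code
        return "image", image_code
    ```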

  10. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  11. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
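
    The third approach, a direct 3D transform, can be sketched with PyWavelets (our illustration; simple thresholding stands in for the quantize-and-entropy-code stage, and the wavelet and parameters are arbitrary):

    ```python
    import numpy as np
    import pywt

    def cube_compress(cube, wavelet="db4", level=2, keep=0.02):
        """3D wavelet transform over (band, row, col), keeping only the
        largest `keep` fraction of coefficients."""
        coeffs = pywt.wavedecn(cube, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr[np.abs(arr) < np.quantile(np.abs(arr), 1.0 - keep)] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
        return pywt.waverecn(coeffs, wavelet)
    ```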

  12. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is essential in business data processing because of the cost savings it offers and the large volumes of data manipulated in many business applications. Compression is a method or system for transmitting a digital image (i.e., an array of pixels) or other data from a digital source to a digital receiver: the smaller the data, the higher the transmission speed and the greater the time saved, and in communication the goal is to transmit data efficiently and noise-free. This paper provides several techniques for lossless compression of text-type data and comparative results for multiple versus single compression, which will help to identify the better compression output and to develop compression algorithms.
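
    A minimal experiment in this vein (ours; Python's standard lossless codecs stand in for the surveyed techniques) compares single against double compression:

    ```python
    import bz2
    import lzma
    import zlib

    def compare(data: bytes):
        """Single vs. double lossless compression of text-type data."""
        for name, fn in (("zlib", zlib.compress), ("bz2", bz2.compress),
                         ("lzma", lzma.compress)):
            once = fn(data)
            twice = fn(once)      # a second pass rarely helps
            print(f"{name}: raw={len(data)} x1={len(once)} x2={len(twice)}")

    compare(b"the quick brown fox jumps over the lazy dog " * 200)
    ```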

  13. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  14. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

    Background: Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective: To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods: Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results: The original frames passed through different compression stages: selecting the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions: Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of their efficient data compression, which maintains its attractiveness as a main criterion for distant diagnosis. PMID:11720963

  15. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescent lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescent lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state of the art fluorescent lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescent lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x,y,t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescent lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescent lifetime imaging of cells and microspheres.

  16. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.

  17. An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ting, Samuel T.

    The research presented in this work seeks to develop, validate, and deploy practical techniques for improving diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of the year 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide, within a single exam, a high-quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance, with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, or the capturing of movies of a single slice or volume within the heart or great vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution and the image quality of the reconstructed cine images. This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant to other dynamic and static imaging techniques based on cardiac magnetic resonance. Conventional segmented techniques for cardiac cine imaging require breath-holding as well as regular cardiac rhythm, and can be time-consuming to acquire. Inadequate breath-holding or irregular cardiac rhythm can result in completely non-diagnostic images, limiting the utility of these techniques in a significant patient population. Real-time single-shot cardiac cine imaging enables free-breathing acquisition with significantly shortened imaging time and promises to significantly improve the utility of cine imaging for diagnosis and evaluation of cardiovascular disease. However, the utility of real-time cine images depends heavily on the successful reconstruction of final cine images from undersampled data. Successful reconstruction of images from more highly undersampled data results directly in images exhibiting finer spatial and temporal resolution, provided that image quality is sufficient. This work focuses primarily on the development, validation, and deployment of practical techniques for enabling the reconstruction of real-time cardiac cine images at the spatial and temporal resolutions and image quality needed for diagnostic utility. Particular emphasis is placed on the development of reconstruction approaches with short computation times that can be used in the clinical environment.
Specifically, the use of compressed sensing signal recovery techniques is considered; such techniques show great promise in allowing successful reconstruction of highly undersampled data. The scope of this work concerns two primary topics related to signal recovery using compressed sensing: (1) the long reconstruction times of these techniques, and (2) improved sparsity models for signal recovery from more highly undersampled data. Both of these aspects are relevant to the practical application of compressed sensing techniques in the context of improving image reconstruction of real-time cardiac cine images. First, algorithmic and implementational approaches are proposed for reducing the computational time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient deployment directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. In this work, it is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques from both a theoretical and a practical view is provided; both may be of interest to the reader in terms of future work.
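
    As background to the first contribution, the FISTA iteration for the generic l1-regularized least-squares problem can be sketched as follows (our illustration of the underlying algorithm, not the dissertation's GPU-based CS-PI reconstruction):

    ```python
    import numpy as np

    def fista(A, y, lam=0.01, n_iter=200):
        """FISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        x = z = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(n_iter):
            g = z - (A.T @ (A @ z - y)) / L                    # gradient step
            x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # momentum
            x, t = x_new, t_new
        return x
    ```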

  18. Secure Oblivious Hiding, Authentication, Tamper Proofing, and Verification Techniques

    DTIC Science & Technology

    2002-08-01

    compressing the bit-planes. The algorithm always starts with inspecting the 5th LSB plane. For color images, all three color channels are compressed... use classical encryption engines, such as IDEA or DES. These algorithms have a fixed encryption block size, and, depending on the image dimensions, we... information can be stored either in a separate file, in the image header, or embedded in the image itself utilizing the modern concepts of steganography

  19. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and for two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
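
    Double delta coding amounts to taking first differences twice, so that smooth scan lines map to values clustered near zero; a small sketch (ours, using zero-initialized boundaries) shows the encode/decode pair:

    ```python
    import numpy as np

    def double_delta_encode(line):
        """Second-order differences; small values suit short source codes."""
        return np.diff(np.diff(line, prepend=0), prepend=0)

    def double_delta_decode(dd):
        return np.cumsum(np.cumsum(dd))

    line = np.array([100, 102, 105, 109, 114])      # smooth image row
    assert np.array_equal(double_delta_decode(double_delta_encode(line)), line)
    ```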

  20. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as of the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes and 2D context-dependent predictors, and the hope for sub-bit-per-pixel compression that maintains spatial resolution (information preserving), are discussed.

  1. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  2. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  3. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
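
    The quantize-inside-the-prediction-loop structure described above can be sketched for a 1D signal (our illustration with a trivial previous-sample predictor; the actual technique uses a multispectral predictor followed by an entropy coder):

    ```python
    import numpy as np

    def near_lossless_encode(samples, delta):
        """Quantize prediction residuals so each reconstructed sample stays
        within +/-delta of the original (for integer samples). The predictor
        uses reconstructed values, exactly as the decoder will."""
        step = 2 * delta + 1
        recon = np.empty(len(samples), dtype=float)
        q_residuals = np.empty(len(samples), dtype=int)
        prev = 0.0
        for i, s in enumerate(samples):
            q = int(np.round((s - prev) / step))   # quantized residual
            q_residuals[i] = q                     # this is what gets encoded
            recon[i] = prev + q * step             # decoder-identical value
            prev = recon[i]
        return q_residuals, recon                  # delta = 0 -> lossless
    ```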

  4. Study on the key technology of optical encryption based on compressive ghost imaging with double random-phase encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei

    2015-12-01

    An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE, and the DRPE-coded image is then encrypted by the computational ghost imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the quantity of information by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI at the same number of sampling times. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  5. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, their transmission, and their reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is examined, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstructing images from the compressed holograms are performed, together with a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients, from which optimum compression parameters can be estimated. The size of the holographic data was decreased by up to 190 times.

  6. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission. First, the edge direction in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, selected according to the MSE incurred by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it well suited to real-time image compression applications.
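    For reference, the AMBTC baseline against which the adaptive bit plane is measured encodes each block as a bit plane plus two reconstruction levels, along these lines. This is a minimal sketch; the paper's Sobel-guided quadtree partitioning and adaptive bit-plane selection sit on top of this primitive.

```python
import numpy as np

def ambtc_block(block):
    """AMBTC for one image block: a bit plane plus two reconstruction levels."""
    m = block.mean()
    bitmap = block >= m
    # Guard against flat blocks where one group is empty.
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, lo, hi

def ambtc_reconstruct(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

img = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
out = np.empty_like(img)
for i in range(0, 8, 4):           # 4x4 blocks
    for j in range(0, 8, 4):
        bm, lo, hi = ambtc_block(img[i:i+4, j:j+4])
        out[i:i+4, j:j+4] = ambtc_reconstruct(bm, lo, hi)
```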

  7. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the quality factor (QF) of JPEG-compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000, a relatively new image compression standard, offers higher compression rates and better image quality than JPEG. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. Analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be estimated with the help of a divergence factor, which measures the deviation between the observed probabilities and Benford's Law. Over the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, lower than that of DCT coefficients at 0.0034, whereas the mean divergence for JPEG2000 images compressed at a rate of 0.1 is 0.0108, much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in an image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9; the initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
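    The underlying first-digit statistic is easy to reproduce. The sketch below computes the empirical first-digit distribution of a coefficient array and a chi-square-style divergence from the Benford distribution P(d) = log10(1 + 1/d); the paper's exact divergence factor may be defined differently.

```python
import numpy as np

def first_digit_distribution(coeffs):
    """Empirical distribution of the first significant digit (1-9)."""
    c = np.abs(np.asarray(coeffs, dtype=float).ravel())
    c = c[c > 0]
    # Shift each magnitude into [1, 10) and truncate to get its first digit.
    digits = (c / 10.0 ** np.floor(np.log10(c))).astype(int)
    counts = np.bincount(digits, minlength=10)
    return counts[1:10] / counts[1:10].sum()

benford = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(d) = log10(1 + 1/d)
observed = first_digit_distribution(
    np.random.default_rng(0).standard_normal(10**5))
divergence = np.sum((observed - benford) ** 2 / benford)  # chi-square-style factor
```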

  8. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

    New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles using image compression techniques. HD EMG signals were placed in image rows according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB; for a similar FSR, higher contraction forces corresponded to higher SNR. In conclusion, the computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.

  9. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

    Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles using image compression techniques. Methods: HD EMG signals were placed in image rows according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB; for a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  10. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
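    The PRD distortion measure quoted above is computed as follows. This is the common definition; some EMG studies subtract the signal mean in the denominator, and it is not stated which variant is used here.

```python
import numpy as np

def prd_percent(original, reconstructed):
    """Percentage root-mean-square difference between a signal and its
    reconstruction, the distortion measure quoted for these EMG results."""
    original = np.asarray(original, dtype=float)
    err = original - np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(original ** 2))
```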

  11. Wavelet Compression of Satellite-Transmitted Digital Mammograms

    NASA Technical Reports Server (NTRS)

    Zheng, Yuan F.

    2001-01-01

    Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screen mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, and powerful computer-aided diagnosis. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has the other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnostic facilities are not available. The NASA Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of a T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge, so it takes a long time to transmit a single mammogram. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit, and the four images of a typical screening exam would take more than 16 minutes. This is too long a time period for convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practical. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by providing advanced compression technology using the wavelet transform. OSU developed a time-efficient software package with various wavelets to compress a series of mammographic images. This document reports the results of the compression activities.

  12. Detection of rebars in concrete using advanced ultrasonic pulse compression techniques.

    PubMed

    Laureti, S; Ricci, M; Mohamed, M N I B; Senni, L; Davis, L A J; Hutchins, D A

    2018-04-01

    A pulse compression technique has been developed for the non-destructive testing of concrete samples. Scattering of signals from aggregate has historically been a problem in such measurements. Here, it is shown that a combination of piezocomposite transducers, pulse compression and post processing can lead to good images of a reinforcement bar at a cover depth of 55 mm. This has been achieved using a combination of wide bandwidth operation over the 150-450 kHz range, and processing based on measuring the cumulative energy scattered back to the receiver. Results are presented in the form of images of a 20 mm rebar embedded within a sample containing 10 mm aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.
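    The essence of pulse compression is cross-correlating the received trace with the transmitted wide-band sweep, so that a long, low-amplitude chirp collapses into a sharp peak at the echo delay. Below is a minimal SciPy sketch over the paper's 150-450 kHz band; all other numbers (sampling rate, sweep duration, echo amplitude) are illustrative.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 2_000_000                                 # 2 MHz sampling rate (illustrative)
t = np.arange(0, 1e-3, 1 / fs)                 # 1 ms excitation
tx = chirp(t, f0=150e3, t1=t[-1], f1=450e3)    # 150-450 kHz sweep, as in the paper

# Toy received trace: weak echo of the chirp at 0.4 ms plus noise.
rng = np.random.default_rng(0)
rx = np.zeros(2 * len(t))
delay = int(0.4e-3 * fs)
rx[delay:delay + len(t)] += 0.05 * tx
rx += 0.02 * rng.standard_normal(rx.size)

# Pulse compression: the echo collapses to a sharp correlation peak.
compressed = correlate(rx, tx, mode="valid")
print(np.argmax(np.abs(compressed)) / fs)      # ~0.0004 s delay estimate
```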

  13. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Research on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in compressed domains, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, three secret bits are embedded in each mean value to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves as low a bit rate as the original BTC algorithm.
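    The LSB-substitution primitive that the dynamic-programming search optimizes simply rewrites the low bits of each quantization level; the search for the optimal bijective mapping itself is not shown here. A minimal sketch for 3-bit embedding in an 8-bit mean value:

```python
def embed_3lsb(mean_value, secret_bits):
    """Replace the three least significant bits of an 8-bit mean value."""
    return (mean_value & ~0b111) | (secret_bits & 0b111)

def extract_3lsb(mean_value):
    return mean_value & 0b111

stego = embed_3lsb(0b10110101, 0b010)
assert extract_3lsb(stego) == 0b010
assert abs(stego - 0b10110101) <= 7   # distortion bounded by the 3 LSBs
```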

  14. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    1. INTRODUCTION. The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique.

  15. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme; we felt that the unique frame-ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We also continued our work in the vector quantization area and developed a new type of vector quantizer, which we call scan predictive vector quantization. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques; a paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, whether by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw: when the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than the user requested. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  16. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements over our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.

  17. Adaptive coding of MSS imagery. [Multi Spectral band Scanners

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.

    1977-01-01

    A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.

  18. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    PubMed

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative to understanding cellular responses to mechanical stimuli. However, photobleaching is a constant obstacle to accurate and reliable live-cell fluorescent imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior to conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on the morphology of in situ articular cartilage chondrocytes across repeated laser exposures. Additionally, we compared the effectiveness of three commonly used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction in subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeated scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different apparent volume changes; therefore, different conclusions on cell volume changes may be drawn during tissue compression depending on the image thresholding method used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes fewer photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and the compression-induced fluorescence increase means that interpretations of cell responses to mechanical stimuli depend on the microscopic approach and the thresholding method used, and may be contradictory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  20. Image processing using Gallium Arsenide (GaAs) technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.

    1989-01-01

    The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency, and a generally recognized approach to increasing the efficiency of channel usage is data compression. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed, based on an 8-bit-slice general processor. The reasons for choosing this compression technique for the Multi-spectral Linear Array (MLA) instrument are described, along with the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
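    A 1D sketch of DPCM with a non-uniform quantizer follows. The reconstruction levels here are illustrative only (finer near zero, where small residuals are common, and coarser for large residuals); they are not the flight quantizer tables, which are not given in this record.

```python
import numpy as np

# Illustrative non-uniform reconstruction levels: dense near zero,
# sparse for large residuals.
LEVELS = np.array([-48, -24, -12, -5, -1, 1, 5, 12, 24, 48])

def dpcm_encode(line):
    """DPCM along one scan line: quantize each prediction residual to the
    nearest non-uniform level and track the decoder's reconstruction."""
    recon = 128                   # predictor state, mirrored by the decoder
    codes = []
    for s in line:
        residual = int(s) - recon
        k = int(np.argmin(np.abs(LEVELS - residual)))   # nearest level index
        codes.append(k)
        recon = int(np.clip(recon + LEVELS[k], 0, 255))
    return codes
```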

  1. Ultrasound Elastography: Review of Techniques and Clinical Applications

    PubMed Central

    Sigrist, Rosa M.S.; Liau, Joy; Kaffas, Ahmed El; Chammas, Maria Cristina; Willmann, Juergen K.

    2017-01-01

    Elastography-based imaging techniques have received substantial attention in recent years for non-invasive assessment of tissue mechanical properties. These techniques take advantage of changed soft tissue elasticity in various pathologies to yield qualitative and quantitative information that can be used for diagnostic purposes. Measurements are acquired in specialized imaging modes that can detect tissue stiffness in response to an applied mechanical force (compression or shear wave). Ultrasound-based methods are of particular interest due to their many inherent advantages, such as wide availability, including at the bedside, and relatively low cost. Several ultrasound elastography techniques using different excitation methods have been developed. In general, these can be classified into strain imaging methods, which use internal or external compression stimuli, and shear wave imaging, which uses ultrasound-generated traveling shear wave stimuli. While ultrasound elastography has shown promising results for non-invasive assessment of liver fibrosis, new applications in breast, thyroid, prostate, kidney and lymph node imaging are emerging. Here, we review the basic principles, foundation physics, and limitations of ultrasound elastography and summarize its current clinical use and ongoing developments in various clinical applications. PMID:28435467

  2. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    NASA Astrophysics Data System (ADS)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral systems and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
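    The matched filter referred to above is the standard spectral detector: each pixel spectrum is projected onto a background-whitened target signature. A hedged NumPy sketch of that standard formulation follows; the actual CS-MUSI pipeline and its preprocessing are not described in this record.

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Spectral matched filter scores for every pixel.

    cube: (num_pixels, num_bands) array of spectra.
    target: (num_bands,) target signature.
    """
    mu = cube.mean(axis=0)                     # background mean spectrum
    x = cube - mu
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularized
    w = np.linalg.solve(cov, target - mu)      # whitened target direction
    w /= (target - mu) @ w                     # normalize so the target scores 1
    return x @ w
```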

  3. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as the transfer of an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm rests on the difficulty of calculating discrete logarithms in a large prime modulus. ElGamal belongs to the class of asymmetric-key algorithms and results in an enlargement of the file size; therefore, data compression is required. Elias delta code is one of the compression algorithms that uses a delta code table. The image was first compressed using the Elias delta code algorithm, and the result of the compression was then encrypted using the ElGamal algorithm. Primality testing was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of the data, with MSE and PSNR values of 0 and infinity, respectively. The Elias delta code method achieved an average compression ratio of 62.49% and average space savings of 37.51%.
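    Elias delta coding itself is compact enough to sketch directly: a positive integer n is sent as a unary-prefixed code of its bit length followed by the bits of n without the leading 1. A minimal encoder (the paper's table-driven implementation and the ElGamal stage are not reproduced here):

```python
def elias_delta_encode(n):
    """Elias delta codeword for a positive integer, as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]            # binary of n, starts with '1'
    length_bits = bin(len(binary))[2:]   # binary of the length, starts with '1'
    # Unary prefix of zeros, then the length, then n without its leading 1.
    return "0" * (len(length_bits) - 1) + length_bits + binary[1:]

assert elias_delta_encode(1) == "1"
assert elias_delta_encode(2) == "0100"
assert elias_delta_encode(10) == "00100010"
```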

  4. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
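    A minimal sketch of TLLC follows, with zlib standing in for "an appropriate lossless compression technique" (the paper does not prescribe a specific coder). Dropping k least significant bits bounds the radiometric error but, unlike a well-designed lossy coder, discards that resolution unconditionally.

```python
import zlib
import numpy as np

def tllc(image, drop_bits):
    """Truncation followed by lossless compression (TLLC): drop the least
    significant bits of an integer image, then compress losslessly."""
    truncated = (image >> drop_bits).astype(image.dtype)
    return zlib.compress(truncated.tobytes(), level=9)

def tllc_restore(payload, shape, dtype, drop_bits):
    data = np.frombuffer(zlib.decompress(payload), dtype=dtype).reshape(shape)
    return data << drop_bits       # the dropped radiometric bits are gone for good

img = np.random.default_rng(0).integers(0, 4096, (64, 64), dtype=np.uint16)
blob = tllc(img, drop_bits=2)
back = tllc_restore(blob, img.shape, img.dtype, drop_bits=2)
assert np.all(img.astype(int) - back.astype(int) < 4)   # error bounded by 2 LSBs
```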

  5. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan times owing to the limitations of the k-space sampling scheme and the image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, show that the proposed method can improve temporal resolution, shorten the scan time, and provide high-quality reconstructed images.
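    One way to realize a golden-ratio Cartesian ordering is to step through phase-encode lines at the golden fraction of the row range, so that any contiguous window of acquired lines covers k-space quasi-uniformly and can be binned into a frame for compressed sensing reconstruction. The sketch below is written under that assumption; the paper's exact trajectory may differ.

```python
import numpy as np

def golden_ratio_lines(n_lines, n_rows):
    """Phase-encode line indices in golden-ratio Cartesian order: each new
    sample subdivides the remaining gaps, so any contiguous run of acquired
    lines is a quasi-uniform undersampling of the n_rows k-space rows."""
    golden = (np.sqrt(5.0) - 1.0) / 2.0        # ~0.618...
    positions = (np.arange(n_lines) * golden) % 1.0
    return np.floor(positions * n_rows).astype(int)

order = golden_ratio_lines(n_lines=48, n_rows=256)  # e.g. 48 lines per frame
```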

  6. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra-dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies may improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment; accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.

  7. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  8. Techniques for Field Application of Lingual Ultrasound Imaging

    ERIC Educational Resources Information Center

    Gick, Bryan; Bird, Sonya; Wilson, Ian

    2005-01-01

    Techniques are discussed for using ultrasound for lingual imaging in field-related applications. The greatest challenges we have faced that distinguish the field setting from the laboratory setting are the lack of controlled head/transducer movement and the related issue of tissue compression. Two experiments are reported. First, a pilot study…

  9. Echocardiographic image of an active human heart

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Echocardiographic images provide quick, safe images of the heart as it beats. While a state-of-the-art echocardiograph unit is part of the Human Research Facility on the International Space Station, quick transmission of images and data to Earth is a challenge. NASA is developing techniques to improve the echocardiography available to diagnose sick astronauts as well as to study the long-term effects of space travel on their health. Echocardiography uses ultrasound, generated in a sensor head placed against the patient's chest, to produce images of the structure of the heart walls and valves. However, ultrasonic imaging creates an enormous volume of data, up to 220 million bits per second, which can challenge ISS communications as well as Earth-based providers. Compressing data for rapid transmission back to Earth can degrade the quality of the images. Researchers at the Cleveland Clinic Foundation are working with NASA to develop compression techniques that meet the imaging standards now used on the Internet and by the medical community, and that ensure that physicians receive quality diagnostic images.

  10. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  11. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, along with how they are combined to form a new strategy for performing dynamic on-line lossy compression. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  12. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
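    The core of such data-dependent transform coding can be sketched with an SVD supplying the orthogonal basis for each clip; this is a stand-in for the paper's learned bases, not its actual algorithm. The top-k coefficients are quantized for entropy coding, and the basis is kept as side information.

```python
import numpy as np

def transform_code_clip(clip, k, step=0.02):
    """Transform-code one mocap clip given as a (frames, channels) matrix.

    The SVD provides a data-dependent orthogonal basis; only the k
    strongest components are kept and uniformly quantized.
    """
    u, s, vt = np.linalg.svd(clip, full_matrices=False)
    coeffs = u[:, :k] * s[:k]                      # transform coefficients
    q = np.round(coeffs / step).astype(np.int32)   # quantized, ready for entropy coding
    return q, vt[:k], step                         # basis travels as side information

def transform_decode_clip(q, basis, step):
    return (q.astype(float) * step) @ basis

clip = np.random.default_rng(0).standard_normal((120, 60))  # 120 frames, 60 channels
q, basis, step = transform_code_clip(clip, k=12)
approx = transform_decode_clip(q, basis, step)               # low-rank reconstruction
```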

  13. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which strains the transmission, processing, and storage resources of both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  14. Research on the principle and experimentation of optical compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2013-12-01

    The optical compressive spectral imaging method is a novel spectral imaging technique that draws on compressed sensing, offering advantages such as a reduced volume of acquired data, snapshot imaging, and an increased signal-to-noise ratio. Because sampling quality influences the ultimate imaging quality, previously reported imaging systems match the sampling interval to the modulation interval, but the reduced sampling rate sacrifices the original spectral resolution. To overcome this defect, the requirement that the sampling interval match the modulation interval is dropped, and the number of spectral channels of the designed experimental device increases more than threefold compared with the previous method. An imaging experiment is carried out with this setup, and the spectral data cube of the target is reconstructed from the acquired compressed image using two-step iterative shrinkage/thresholding algorithms. The experimental results indicate that the number of spectral channels increases effectively and the reconstructed data remains high-fidelity; the images and spectral curves accurately reflect the spatial and spectral character of the target.

  15. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging addresses relatively static targets or a fixed imaging system, since it is limited by the number of measurements received through the single detector. In this paper, we propose a novel dynamic compressive imaging method to solve the imaging problem for the infrared (IR) rosette scanning system, in which the imaging system itself moves. The relationships between adjacent target images and the scene are analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of IR images and enhance the contrast between the target and the background in the presence of system movement.

  16. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    PubMed Central

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  17. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  18. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace and can also be used in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, and supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at a rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed that provides an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  19. Study of on-board compression of earth resources data

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1975-01-01

    The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.

  20. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
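    The spectral KLT stage amounts to an eigen-decomposition of the inter-band covariance; each decorrelated component image would then be handed to the EZW coder. A minimal NumPy sketch of that stage (the EZW coder itself is not reproduced here):

```python
import numpy as np

def spectral_klt(cube):
    """Karhunen-Loeve transform across the spectral axis of a data cube.

    cube: (bands, rows, cols). Returns the decorrelated component images
    (to be wavelet/EZW coded one by one) plus the basis and mean needed
    to invert the transform at the decoder.
    """
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1)
    mu = x.mean(axis=1, keepdims=True)
    cov = np.cov(x - mu)                        # (bands, bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # strongest components first
    basis = eigvecs[:, order]
    components = basis.T @ (x - mu)
    return components.reshape(bands, rows, cols), basis, mu

def spectral_klt_inverse(components, basis, mu):
    bands, rows, cols = components.shape
    x = basis @ components.reshape(bands, -1) + mu
    return x.reshape(bands, rows, cols)
```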

  1. Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme

    PubMed Central

    Priya, R. Lakshmi; Sadasivam, V.

    2015-01-01

    Providing authentication and integrity for medical images is a challenge, and this work proposes a new blind, fragile, region-based, lossless, reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least-significant-bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of a compressed region of interest (ROI) for ROI recovery, as reported in the literature. Experiments were carried out to prove the performance of the scheme, and the assessment reveals that the ROI is extracted intact; the PSNR values obtained show that the presented scheme offers strong protection for health imagery. PMID:26649328

  2. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced: a polyphase down-sampled version of the input image in which the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled, pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; and 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with the unique strength of recovering fine details and sharp edges at low bit-rates.
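    The encoder's measurement step, as described, might be sketched as follows: a random binary kernel replaces the anti-alias low-pass filter ahead of polyphase down-sampling. The kernel size and down-sampling factor here are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(7)
img = rng.random((256, 256))                       # stand-in input image

# Local random binary convolution kernel in place of the usual low-pass filter.
kernel = rng.integers(0, 2, (4, 4)).astype(float)
kernel /= max(kernel.sum(), 1.0)                   # keep measurements in range

filtered = convolve2d(img, kernel, mode="same", boundary="symm")
measurements = filtered[::4, ::4]   # polyphase down-sampling; still a plain image,
                                    # so any standard codec can compress it further
```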

  3. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and stored in 16-bit unsigned integer format. For lossless compression, JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared; JPEG2000 outperforms the other methods with the best CR. For lossy compression, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index; the biochemical parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC is preserved and the morphological and biochemical parameters remain within the range of reported values.

  4. Three dimensional range geometry and texture data compression with space-filling curves.

    PubMed

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
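    The key trick is that a Hilbert curve turns one 16-bit scalar into two 8-bit channel values while preserving locality, so small lossy-compression errors in the channels map back to small phase errors. Below is a sketch using the standard Hilbert index-to-coordinate conversion; the paper's exact normalization and channel assignment may differ.

```python
def d2xy(n, d):
    """Convert Hilbert-curve index d into (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def phase_to_channels(p):
    """Map a normalized phase value p in [0, 1) to two 8-bit channels by
    walking a Hilbert curve through the 256 x 256 channel space, so that
    nearby phase values land on nearby (ch1, ch2) pairs."""
    d = int(p * (256 * 256 - 1))
    return d2xy(256, d)

ch1, ch2 = phase_to_channels(0.5)   # two bytes, third channel left for texture
```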

  5. An ultra-low-power image compressor for capsule endoscope.

    PubMed

    Lin, Meng-Chun; Dung, Lan-Rong; Weng, Ping-Kuo

    2006-02-25

    Gastrointestinal (GI) endoscopy has been widely applied for the diagnosis of diseases of the alimentary canal, including Crohn's disease, celiac disease and other malabsorption disorders, benign and malignant tumors of the small intestine, vascular disorders, and medication-related small bowel injury. The wireless capsule endoscope has been successfully utilized to diagnose diseases of the small intestine and alleviate the discomfort and pain of patients. However, the resolution of the demosaicked image is still low, and interesting spots may be unintentionally missed; in particular, images become severely distorted when physicians zoom in for detailed diagnosis. Increasing the resolution would cause significant power consumption in the RF transmitter, so image compression is necessary to save the power dissipated by the RF transmitter. To overcome this drawback, we have been developing a new capsule endoscope, called GICam, with an ultra-low-power image compression processor for capsule endoscopes or swallowable imaging capsules. In capsule endoscopy, it is imperative to consider battery life/performance trade-offs. Applying state-of-the-art video compression techniques may significantly reduce the image bit rate through high compression ratios, but they all require intensive computation and consume much battery power. There are many fast compression algorithms for reducing the computation load; however, they may distort the original image, which is unacceptable in medical care. Thus, this paper first simplifies traditional video compression algorithms and proposes a scalable compression architecture. The resulting video compressor costs only 31 K gates at 2 frames per second, consumes 14.92 mW, and reduces the video size by at least 75%.

  6. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…
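
    For reference, histogram equalization itself is a one-line lookup once the cumulative histogram is known; the sketch below is a generic 8-bit implementation, not the adaptive variant mentioned in the record.

      import numpy as np

      def equalize(img):
          # Remap gray levels through the normalized cumulative histogram so
          # that the output levels are (approximately) uniformly populated.
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum().astype(np.float64)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
          lut = np.round(cdf * 255).astype(np.uint8)
          return lut[img]

      rng = np.random.default_rng(0)
      dark = (rng.random((64, 64)) * 80).astype(np.uint8)   # low-contrast test image
      print(dark.max(), equalize(dark).max())               # contrast is stretched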

  7. Digital Compositing Techniques for Coronal Imaging (Invited review)

    NASA Astrophysics Data System (ADS)

    Espenak, F.

    2000-04-01

    The solar corona exhibits a huge range in brightness that cannot be captured in any single photographic exposure. Short exposures show the bright inner corona and prominences, while long exposures reveal faint details in equatorial streamers and polar brushes. For many years, radial gradient filters and other analog techniques have been used to compress the corona's dynamic range in order to study its morphology. Such techniques demand perfect pointing and tracking during the eclipse and can be difficult to calibrate. In the past decade, the speed, memory, and hard disk capacity of personal computers have rapidly increased as prices continue to drop, and it is now possible to perform sophisticated image processing of eclipse photographs on commercially available CPUs. Software programs such as Adobe Photoshop permit combining multiple eclipse photographs into a composite image that compresses the corona's dynamic range and can reveal subtle features and structures. Algorithms and digital techniques used for processing 1998 eclipse photographs are discussed; they are equally applicable to the recent eclipse of 1999 August 11.

  8. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account the correlations between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to control color precisely. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  9. Maximizing Science Return from Future Mars Missions with Onboard Image Analyses

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Bandari, E. B.; Roush, T. L.

    2000-01-01

    We have developed two new techniques to enhance science return and to decrease returned data volume for near-term Mars missions: 1) multi-spectral image compression and 2) autonomous identification and fusion of in-focus regions in an image series.

  10. Fast and memory efficient text image compression with JBIG2.

    PubMed

    Ye, Yan; Cosman, Pamela

    2003-01-01

    In this paper, we investigate ways to reduce encoding time, memory consumption, and substitution errors for text image compression with JBIG2. We first look at page striping, where the encoder splits the input image into horizontal stripes and processes one stripe at a time, and propose dynamic dictionary updating procedures that reduce the bit rate penalty striping incurs. Experiments show that splitting the image into two stripes saves about 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%; using more stripes brings further savings in time and memory, but the returns diminish. We also propose an adaptive way to update the dictionary only when it has become out of date. The adaptive updating scheme resolves the time-versus-bit-rate and memory-versus-bit-rate tradeoffs well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2; combined, these techniques save up to 75% of the total encoding time with at most a 1.7% bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression, proposing enhanced prescreening and feature-monitored shape unifying to significantly reduce substitution errors in the reconstructed images.

  11. Assessment of low-contrast detectability for compressed digital chest images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.

    1994-04-01

    The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast-detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with those for CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer, and from these data simulated lesions were introduced into 100 normal CR chest films for forced-choice observer performance studies. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, in which the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for the SF and uncompressed CR systems were statistically equivalent; the slope of each was -1.0, as predicted by the Rose model. There was a significant degradation in detection for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs; detectability was reduced for all compressed CR radiographs.

  12. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
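
    As a reminder of how these resampling kernels differ, nearest neighbor simply rounds the source coordinates to the nearest pixel, while bilinear interpolation blends the four surrounding pixels; a minimal sketch of the bilinear case follows (bicubic additionally fits a cubic through a 4x4 neighborhood and is omitted here).

      import numpy as np

      def bilinear_sample(img, x, y):
          # Sample img at fractional coordinates (x, y) = (column, row).
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          x1 = min(x0 + 1, img.shape[1] - 1)
          y1 = min(y0 + 1, img.shape[0] - 1)
          fx, fy = x - x0, y - y0
          top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
          bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
          return (1 - fy) * top + fy * bottom

      img = np.arange(16, dtype=np.float64).reshape(4, 4)
      print(bilinear_sample(img, 1.5, 2.5))   # 11.5, the blend of pixels (2,1),(2,2),(3,1),(3,2)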

  13. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  14. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and the images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of the compression techniques used for image storage and a critical analysis of them from the point of view of their use in clinical settings.

  15. Optical Measurement Technique for Space Column Characterization

    NASA Technical Reports Server (NTRS)

    Barrows, Danny A.; Watson, Judith J.; Burner, Alpheus W.; Phelps, James E.

    2004-01-01

    A simple optical technique for the structural characterization of lightweight space columns is presented. The technique is useful for determining the coefficient of thermal expansion during cool down as well as the induced strain during tension and compression testing. The technique is based upon object-to-image plane scaling and does not require any photogrammetric calibrations or computations. Examples of the measurement of the coefficient of thermal expansion are presented for several lightweight space columns. Examples of strain measured during tension and compression testing are presented along with comparisons to results obtained with Linear Variable Differential Transformer (LVDT) position transducers.

  16. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.

  17. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection has traditionally been used to reconstruct images from projections; iterative algorithms such as the algebraic reconstruction technique (ART) were developed later, and compressed sensing based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D DBT imaging system using the C++ programming language. The simulator can apply different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including ART and total variation regularized reconstruction (ART+TV), are presented. The reconstructions are compared both visually and quantitatively using mean structural similarity (MSSIM) values.
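
    For context, one sweep of the ART algorithm mentioned above is just the Kaczmarz update applied row by row to the linear system Ax = b formed by the projections; a hedged sketch with an illustrative relaxation factor:

      import numpy as np

      def art(A, b, n_sweeps=10, relax=0.5):
          # Algebraic reconstruction technique: sweep over the rows (rays),
          # nudging the estimate toward consistency with each measurement.
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  ai = A[i]
                  denom = ai @ ai
                  if denom > 0:
                      x += relax * (b[i] - ai @ x) / denom * ai
          return x

      A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # toy projection matrix
      b = A @ np.array([2.0, 3.0])                          # consistent measurements
      print(art(A, b, n_sweeps=50))                         # converges toward [2, 3]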

  18. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection has traditionally been used to reconstruct images from projections; iterative algorithms such as the algebraic reconstruction technique (ART) were developed later, and compressed sensing based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D DBT imaging system using the C++ programming language. The simulator can apply different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including ART and total variation regularized reconstruction (ART+TV), are presented. The reconstructions are compared both visually and quantitatively using mean structural similarity (MSSIM) values. PMID:24371468

  19. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step, three-dimensional spatial-spectral coding of the input data cube, which provides higher flexibility in the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer-pitch thin films, referred to as pixelated filters, at three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  20. High Spatial and Temporal Resolution Dynamic Contrast-Enhanced Magnetic Resonance Angiography (CE-MRA) using Compressed Sensing with Magnitude Image Subtraction

    PubMed Central

    Rapacchi, Stanislas; Han, Fei; Natsuaki, Yutaka; Kroeker, Randall; Plotnik, Adam; Lehman, Evan; Sayre, James; Laub, Gerhard; Finn, J Paul; Hu, Peng

    2014-01-01

    Purpose We propose a compressed-sensing (CS) technique based on magnitude image subtraction for high spatial and temporal resolution dynamic contrast-enhanced MR angiography (CE-MRA). Methods Our technique integrates the magnitude difference image into the CS reconstruction to promote subtraction sparsity. Fully sampled Cartesian 3D CE-MRA datasets from 6 volunteers were retrospectively under-sampled and three reconstruction strategies were evaluated: k-space subtraction CS, independent CS, and magnitude subtraction CS. The techniques were compared in image quality (vessel delineation, image artifacts, and noise) and image reconstruction error. Our CS technique was further tested on 7 volunteers using a prospectively under-sampled CE-MRA sequence. Results Compared with k-space subtraction and independent CS, our magnitude subtraction CS provides significantly better vessel delineation and less noise at 4X acceleration, and significantly less reconstruction error at 4X and 8X (p<0.05 for all). On a 1–4 point image quality scale in vessel delineation, our technique scored 3.8±0.4 at 4X, 2.8±0.4 at 8X and 2.3±0.6 at 12X acceleration. Using our CS sequence at 12X acceleration, we were able to acquire dynamic CE-MRA with higher spatial and temporal resolution than current clinical TWIST protocol while maintaining comparable image quality (2.8±0.5 vs. 3.0±0.4, p=NS). Conclusion Our technique is promising for dynamic CE-MRA. PMID:23801456

  1. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three algorithms considered are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial-spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the abundance estimates based on the endmembers derived by the different methods is also assessed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada, and the results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.

  2. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than previously believed possible. The theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI), where long acquisition times have been problematic. This is especially true for dynamic MRI applications, where high spatio-temporal resolution is needed; for example, in cardiac cine MRI it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high-resolution image sequences from such a limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e., sparsity pattern), considering the problem of recursive reconstruction of time sequences of sparse signals. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant changes in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.

  3. Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.

    PubMed

    Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran

    2017-05-25

    A well-known diagnostic imaging modality, ultrasound tomography, has been developed for the detection of very small tumors whose sizes are smaller than the wavelength of the incident pressure wave, without the ionizing radiation of the current gold standard, X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for high-quality reconstruction is obtaining many measurements from the available transmitters and receivers. Given that biomedical images are often sparse, the compressed sensing (CS) technique can be applied effectively to ultrasound tomography, reducing the number of transmitters and receivers while maintaining high image reconstruction quality. Several existing CS works place the measurement locations randomly, but this random configuration is relatively difficult to implement in practice; instead, we adopt a methodology that determines the locations of the measurement devices deterministically and develop a novel DCS-DBIM algorithm that is highly applicable in practice. The algorithm exploits the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction implemented using l1 regularization. Simulation results demonstrate its high performance: with the normalized error reduced by approximately 90% compared to the conventional approach, the new approach can halve the number of measurements and needs only two iterations. The universal image quality index is also evaluated in order to prove the efficiency of the proposed approach. Numerical simulation results indicate that the CS and DCS techniques offer equivalent image reconstruction quality, with DCS simpler to implement in practice. This would be a very promising approach in practical applications of modern biomedical imaging technology.
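
    The record does not spell out the l1 solver; one standard choice for l1-regularized reconstruction is iterative shrinkage-thresholding (ISTA), sketched below on a generic underdetermined system rather than the DBIM operator itself.

      import numpy as np

      def ista(A, y, lam=0.05, n_iter=1000):
          # Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = A.T @ (A @ x - y)              # gradient of the data term
              z = x - g / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))         # toy measurement matrix
      x_true = np.zeros(100)
      x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]     # sparse ground truth
      x_hat = ista(A, A @ x_true)
      print(np.flatnonzero(np.abs(x_hat) > 0.5)) # should recover the support {5, 37, 80}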

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceglio, N.M.; George, E.V.; Brooks, K.M.

    The first successful demonstration of high-resolution, tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10^-2 sr. X-ray images made using ZPCI were compared with those taken using a grazing-incidence reflection x-ray microscope, and the agreement was excellent. In addition, the zone plate camera produced tomographic images with a nominal tomographic resolution of approximately 75 μm, allowing three-dimensional viewing of target emission from a single shot in planar "slices". Beyond its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged-particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha-particle emission from laser-compressed microballoon targets are discussed.

  5. Mammogram registration: a phantom-based evaluation of compressed breast thickness variation effects.

    PubMed

    Richard, Frédéric J P; Bakić, Predrag R; Maidment, Andrew D A

    2006-02-01

    The temporal comparison of mammograms is complex; a wide variety of factors can cause changes in image appearance. Mammogram registration is proposed as a method to reduce the effects of these changes and potentially to emphasize genuine alterations in breast tissue. Evaluation of such registration techniques is difficult since ground truth regarding breast deformations is not available in clinical mammograms. In this paper, we propose a systematic approach to evaluate sensitivity of registration methods to various types of changes in mammograms using synthetic breast images with known deformations. As a first step, images of the same simulated breasts with various amounts of simulated physical compression have been used to evaluate a previously described nonrigid mammogram registration technique. Registration performance is measured by calculating the average displacement error over a set of evaluation points identified in mammogram pairs. Applying appropriate thickness compensation and using a preferred order of the registered images, we obtained an average displacement error of 1.6 mm for mammograms with compression differences of 1-3 cm. The proposed methodology is applicable to analysis of other sources of mammogram differences and can be extended to the registration of multimodality breast data.

  6. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key shared with the authorized user, Bob. The information is first encoded by Alice as a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. Experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly; meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
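
    A toy numerical version of the ghost-imaging step helps fix ideas: the object is never imaged directly but is recovered from the correlation between the random patterns and the bucket-detector values. The sketch below uses random intensity patterns rather than the phase screens of the actual optical system, and omits the QR and CS stages.

      import numpy as np

      rng = np.random.default_rng(1)
      obj = np.zeros((32, 32))
      obj[8:24, 8:24] = 1.0                         # hypothetical object to image
      M = 4000                                      # number of random patterns
      patterns = rng.random((M, 32, 32))            # simplified intensity patterns
      bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # bucket values
      # Correlation reconstruction: G = <I*B> - <I><B>
      recon = np.tensordot(bucket, patterns, axes=(0, 0)) / M \
              - bucket.mean() * patterns.mean(axis=0)
      print(np.corrcoef(recon.ravel(), obj.ravel())[0, 1])  # close to 1 for large M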

  7. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  8. Low bit-rate image compression via adaptive down-sampling and constrained least squares upconversion.

    PubMed

    Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan

    2009-03-01

    Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space, made adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled, prefiltered image remains a conventional square sample grid and can thus be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low-bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.

  9. Temporary morphological changes in plus disease induced during contact digital imaging

    PubMed Central

    Zepeda-Romero, L C; Martinez-Perez, M E; Ruiz-Velasco, S; Ramirez-Ortiz, M A; Gutierrez-Padilla, J A

    2011-01-01

    Objective To compare and quantify the retinal vascular changes induced by unintentional pressure contact of a digital handheld camera during retinopathy of prematurity (ROP) imaging, by means of a computer-based image analysis system, Retinal Image multiScale Analysis. Methods A set of 10 wide-angle retinal pairs of photographs per patient, from patients who underwent routine ROP examinations, was measured. Vascular trees were matched between 'compression artifact' (absence of the vascular column at the optic nerve) and 'no compression artifact' conditions. Parameters were analyzed using a two-level linear model for each individual parameter, for arterioles and venules separately: integrated curvature (IC), diameter (d), and tortuosity index (TI). Results Images affected by the compression artifact showed significant vascular diameter (P<0.01) changes in both arteries and veins, as well as in arterial IC (P<0.05). Vascular TI remained unchanged in both groups. Conclusions Inadvertent corneal pressure from the RetCam lens can compress retinal vessels, decreasing intra-arterial diameter or even collapsing them. Careful attention to technique is essential to avoid absence of the arterial blood column at the optic nerve head, which is indicative of increased pressure during imaging. PMID:21760627

  10. NMR imaging of density distributions in tablets.

    PubMed

    Djemai, A; Sinka, I C

    2006-08-17

    This paper describes the use of (1)H nuclear magnetic resonance (NMR) for 3D mapping of the relative density distribution in pharmaceutical tablets manufactured under controlled conditions. The tablets are impregnated with a compatible liquid, and the technique images the presence of the liquid occupying the open pore space. The method does not require special calibration, as the signal is directly proportional to the porosity under the imaging conditions used. The NMR imaging method is validated using uniform-density flat-faced tablets and by direct comparison with X-ray computed tomography. The results illustrate (1) the effect of die wall friction on density distribution, by compressing round, curved-faced tablets using clean and pre-lubricated tooling; (2) the evolution of density distribution during compaction for both clean and pre-lubricated die wall conditions, by imaging tablets compressed to different compaction forces; and (3) the effect of tablet shape on density distribution, by compressing two complex-shaped tablets in identical dies to the same average density using punches with different geometries.

  11. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, the side information is created from a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image independent and plays the role of an auxiliary source. Because the side information is partially available at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions in which side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on this concept that demonstrates superior performance in the very low bit rate regime.

  12. Complex-Difference Constrained Compressed Sensing Reconstruction for Accelerated PRF Thermometry with Application to MRI Induced RF Heating

    PubMed Central

    Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.

    2014-01-01

    Purpose To introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for evaluation of MRI-induced radiofrequency (RF) heating. Methods A compressed sensing approach that exploits the sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and on retrospectively undersampled data acquired in ex vivo and in vivo studies, comparing performance with previously proposed techniques. Results The proposed complex-difference constrained compressed sensing reconstruction improved the reconstruction of smooth and localized PRF temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion Complex-difference-based compressed sensing with utilization of a fully sampled baseline image improves the reconstruction accuracy of accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099

  13. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    Imaging radar uses high-frequency electromagnetic waves reflected from objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize the SNR, and obtain better resolution; it is usually achieved with a matched filter. Side-lobe levels in imaging radar can be reduced by processing with a special weighting function, and many well-known weighting functions are widely used in signal processing applications: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc. Field Programmable Gate Arrays (FPGAs) offer great benefits such as rapid implementation, dynamic reconfiguration, and field programmability, which make them a better solution than custom-made integrated circuits. This work demonstrates a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing an FPGA and the aforementioned software, we propose a pulse compression design that uses classical and novel windowing techniques to reduce the side-lobe levels, which increases the ability to detect small or closely spaced targets in imaging radar. The FPGA's capacity for real-time parallelism makes the proposed algorithms realizable. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: the signal is linear FM (chirp); the frequency deviation DF is 9.375 MHz; the pulse width T is 3.2 μs; the matched filter has 800 taps; and the sampling frequency is 253.125 MHz. Side-lobe levels were reduced in real time, permitting better resolution of small targets.
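
    The windowing idea is easy to reproduce numerically: weight the matched filter with, for example, a Hamming window, and the side-lobes of the compressed pulse drop at the cost of a slightly wider main lobe. The sketch below reuses the abstract's chirp parameters but assumes an illustrative sampling rate, and its side-lobe measurement is deliberately crude.

      import numpy as np

      fs = 50e6                     # illustrative sampling rate (not the paper's)
      T = 3.2e-6                    # pulse width from the abstract
      df = 9.375e6                  # frequency deviation from the abstract
      t = np.arange(0, T, 1 / fs)
      chirp = np.exp(1j * np.pi * (df / T) * t ** 2)   # linear FM pulse
      mf = np.conj(chirp[::-1])                        # matched filter taps
      plain = np.abs(np.convolve(chirp, mf))
      windowed = np.abs(np.convolve(chirp, mf * np.hamming(mf.size)))

      def psl_db(y):
          # Peak side-lobe level in dB, crudely excluding the main lobe.
          peak = int(y.argmax())
          side = np.r_[y[:peak - 10], y[peak + 11:]]
          return 20 * np.log10(side.max() / y.max())

      print(psl_db(plain), psl_db(windowed))   # windowing lowers the side-lobe level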

  14. Simultaneous usage of pinhole and penumbral apertures for imaging small scale neutron sources from inertial confinement fusion experiments.

    PubMed

    Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H

    2012-10-01

    Inertial confinement fusion experiments at the National Ignition Facility are designed to elucidate the basic principles of creating self-sustaining fusion reactions by laser-driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two energy bands, for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression inferred. These experiments provide small sources with a high-yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques, allowing an evaluation of the different aperture designs and reconstruction algorithms.

  15. Complementary compressive imaging for the telescopic system

    PubMed Central

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie

    2014-01-01

    Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device; the light reflected in the other direction is not collected. In fact, the sampling in these two reflection orientations is correlated, in view of which we propose the concept of complementary compressive imaging, for the first time to our knowledge. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise halved. The influence of the sampling rate and of the integration time of the photomultiplier tubes on image quality is also investigated experimentally. This technique offers a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and needs fewer measurements in total than any single-arm sampling; it can thus improve the performance of all compressive imaging schemes and opens up possibilities for new applications in remote sensing. PMID:25060569

  16. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low-bit-rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher-quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell that yields the reconstructed image best fitting a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem that can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model, and in the transform domain the resulting coefficient reconstruction points are projected back into the quantization partition cells defined by the compressed image. Experimental results are shown for images compressed using scalar quantization of block DCT coefficients and vector quantization of subband wavelet transform coefficients. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
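
    The key constraint in this estimation, projecting the updated transform coefficients back into their quantization partition cells, is simple to state in code. The sketch below assumes a uniform scalar quantizer with step Q; vector quantization cells would need a nearest-cell search instead.

      import numpy as np

      def project_to_cells(coeffs, indices, Q):
          # Each transmitted index q pins its coefficient to the cell
          # [(q - 0.5) * Q, (q + 0.5) * Q]; clip the estimate back into it.
          return np.clip(coeffs, (indices - 0.5) * Q, (indices + 0.5) * Q)

      Q = 16.0
      indices = np.array([0, 1, -2])           # decoded quantization indices
      updated = np.array([3.0, 40.0, -20.0])   # coefficients after the MRF update
      print(project_to_cells(updated, indices, Q))   # -> [  3.  24. -24.]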

  17. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers.

    PubMed

    López, Yuri Álvarez; Lorenzo, José Ángel Martínez

    2017-01-15

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure under inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the number of measurements needed, thus achieving faster scanning without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in the spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  18. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    PubMed Central

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel

    2017-01-01

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure under inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the number of measurements needed, thus achieving faster scanning without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in the spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841

  19. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
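
    The spectral angle used above as the quality assessment function is simply the angle between the original and reconstructed spectra of a pixel; a minimal implementation:

      import numpy as np

      def spectral_angle(x, y):
          # Angle (radians) between an original and a reconstructed pixel spectrum;
          # 0 means the spectral shape is preserved exactly.
          cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
          return float(np.arccos(np.clip(cos, -1.0, 1.0)))

      a = np.array([0.2, 0.4, 0.6])
      print(spectral_angle(a, 2.0 * a))    # 0.0: uniform scaling preserves shape
      print(spectral_angle(a, a[::-1]))    # > 0: the spectral shape changed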

  20. Compression of facsimile graphics for transmission over digital mobile satellite circuits

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Frank L.

    A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.

  1. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow, requiring 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested at acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. To obtain a clinically viable reconstruction time, a Split Bregman (SB) reconstruction method with a 3D total variation (TV) constraint was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error, and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall using the TV-constrained SB method. This corresponds to halving the scan time compared to currently used GRAPPA methods. Reconstruction of 3D LGE images with the SB method was over 20 times faster than standard gradient descent methods.
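
    The record's Split Bregman solver is beyond a short sketch, but the effect of the TV constraint can be illustrated in 2D with a plain gradient descent on a smoothed total variation penalty; this is a simplification under stated assumptions, not the paper's reconstruction.

      import numpy as np

      def tv_denoise(y, lam=0.1, step=0.2, n_iter=200, eps=1e-6):
          # Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where
          # TV_eps is the isotropic total variation smoothed by eps.
          x = y.copy()
          for _ in range(n_iter):
              dx = np.diff(x, axis=1, append=x[:, -1:])    # forward differences
              dy = np.diff(x, axis=0, append=x[-1:, :])
              mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
              px, py = dx / mag, dy / mag
              divx = np.empty_like(px)                     # backward differences
              divx[:, 0] = px[:, 0]
              divx[:, 1:] = px[:, 1:] - px[:, :-1]
              divy = np.empty_like(py)
              divy[0, :] = py[0, :]
              divy[1:, :] = py[1:, :] - py[:-1, :]
              x -= step * ((x - y) - lam * (divx + divy))
          return x

      rng = np.random.default_rng(0)
      clean = np.zeros((32, 32))
      clean[8:24, 8:24] = 1.0                              # piecewise-constant target
      noisy = clean + 0.2 * rng.standard_normal(clean.shape)
      # The mean absolute error should drop after TV denoising.
      print(np.abs(noisy - clean).mean(), np.abs(tv_denoise(noisy) - clean).mean())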

  2. Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview

    NASA Astrophysics Data System (ADS)

    Markman, A.; Javidi, B.

    2016-06-01

    Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. A QR code can be scanned with a QR code reader, such as those built into smartphones, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanned, owing to the error correction built into the QR code design. Integral imaging is an imaging technique that generates a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of the scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI; this information is then stored in a QR code. An alternative scheme performs photon-counting on the EI prior to compression. Photon-counting is a non-linear transformation of the data that creates redundant information, thus improving image compression. The compressed data is encrypted using the DRPE. Once the information is stored in the QR codes, it is scanned using a smartphone. The scanned information is decompressed and decrypted, and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
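
    The double-random-phase encryption step is compact enough to sketch directly: multiply by one random phase mask in the image plane and another in the Fourier plane, then invert both steps with conjugate keys to decrypt. The sketch below omits the compression and photon-counting stages, and the image is a random stand-in.

      import numpy as np

      rng = np.random.default_rng(7)
      img = rng.random((64, 64))                          # stand-in for an elemental image
      phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane random phase key
      phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane random phase key

      # Encryption: phase mask, Fourier transform, second mask, inverse transform.
      cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

      # Decryption inverts each step with the conjugate keys.
      decrypted = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phi2)) * np.conj(phi1))
      assert np.allclose(decrypted, img)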

  3. Performance analysis of algorithms for retrieval of magnetic resonance images for interactive teleradiology

    NASA Astrophysics Data System (ADS)

    Atkins, M. Stella; Hwang, Robert; Tang, Simon

    2001-05-01

    We have implemented a prototype system consisting of a Java- based image viewer and a web server extension component for transmitting Magnetic Resonance Images (MRI) to an image viewer, to test the performance of different image retrieval techniques. We used full-resolution images, and images compressed/decompressed using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm. We examined the SPIHT decompression algorithm using both non- progressive and progressive transmission, focusing on the running times of the algorithm, client memory usage and garbage collection. We also compared the Java implementation with a native C++ implementation of the non- progressive SPIHT decompression variant. Our performance measurements showed that for uncompressed image retrieval using a 10Mbps Ethernet, a film of 16 MR images can be retrieved and displayed almost within interactive times. The native C++ code implementation of the client-side decoder is twice as fast as the Java decoder. If the network bandwidth is low, the high communication time for retrieving uncompressed images may be reduced by use of SPIHT-compressed images, although the image quality is then degraded. To provide diagnostic quality images, we also investigated the retrieval of up to 3 images on a MR film at full-resolution, using progressive SPIHT decompression. The Java-based implementation of progressive decompression performed badly, mainly due to the memory requirements for maintaining the image states, and the high cost of execution of the Java garbage collector. Hence, in systems where the bandwidth is high, such as found in a hospital intranet, SPIHT image compression does not provide advantages for image retrieval performance.

  4. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  5. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

    This paper proposes a new stereo sequence transmission technique that uses digital watermarking for compatibility with conventional 2D digital TV. Stereo image sequences are generally compressed and transmitted by exploiting the temporal and spatial redundancy between the stereo images, and it is difficult for users with a conventional digital TV to watch the transmitted 3D sequence because the various 3D compression methods differ. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of one stereo image within the three color channels of the reference image. The main goal of the presented technique is to let people with a conventional DTV watch stereo movies at the same time. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide right images within left images effectively, bit changes in the three color channels and disparity estimation are performed according to the estimated disparity values. The proposed method assigns the displacement information of the right image to the YCbCr channels in the DCT domain: the LSB of each channel is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
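
    The channel embedding described above amounts to LSB replacement. A generic sketch on a single 8-bit channel with a hypothetical bit payload follows; the paper's disparity-driven selection of which coefficients to modify is not reproduced.

      import numpy as np

      def embed_lsb(channel, bits):
          # Overwrite the least significant bit of the first len(bits) pixels.
          flat = channel.ravel().copy()
          flat[:bits.size] = (flat[:bits.size] & np.uint8(0xFE)) | bits.astype(np.uint8)
          return flat.reshape(channel.shape)

      def extract_lsb(channel, n):
          return channel.ravel()[:n] & np.uint8(1)

      rng = np.random.default_rng(0)
      Y = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # luma block
      payload = rng.integers(0, 2, size=16, dtype=np.uint8)    # hypothetical disparity bits
      stego = embed_lsb(Y, payload)
      assert np.array_equal(extract_lsb(stego, 16), payload)
      print(int(np.abs(stego.astype(int) - Y.astype(int)).max()))  # per-pixel change <= 1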

  6. Schlieren image velocimetry measurements in a rocket engine exhaust plume

    NASA Astrophysics Data System (ADS)

    Morales, Rudy; Peguero, Julio; Hargather, Michael

    2017-11-01

    Schlieren image velocimetry (SIV) measures velocity fields by tracking the motion of naturally-occurring turbulent flow features in a compressible flow. Here the technique is applied to measuring the exhaust velocity profile of a liquid rocket engine. The SIV measurements presented include discussion of visibility of structures, image pre-processing for structure visibility, and ability to process resulting images using commercial particle image velocimetry (PIV) codes. The small-scale liquid bipropellant rocket engine operates on nitrous oxide and ethanol as propellants. Predictions of the exhaust velocity are obtained through NASA CEA calculations and simple compressible flow relationships, which are compared against the measured SIV profiles. Analysis of shear layer turbulence along the exhaust plume edge is also presented.

  7. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gaussian matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
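
    Orthogonal matching pursuit itself is compact enough to sketch; here A and y are random stand-ins for the forward projection matrix and the exposure measurements, and k is the assumed sparsity.

      import numpy as np

      def omp(A, y, k):
          """Greedily recover a k-sparse x with A @ x ~ y."""
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))      # best-matching column
              support.append(j)
              sub = A[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit on support
              residual = y - sub @ coef
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200))
      x_true = np.zeros(200)
      x_true[[5, 17, 130]] = [1.0, -2.0, 0.5]
      x_hat = omp(A, A @ x_true, k=3)                         # recovers the spikes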

  8. High-speed railway signal trackside equipment patrol inspection system

    NASA Astrophysics Data System (ADS)

    Wu, Nan

    2018-03-01

    The high-speed railway signal trackside equipment patrol inspection system comprehensively applies TDI (time delay integration), a high-speed, highly responsive CMOS architecture, low-illumination photosensitive techniques, image data compression and machine vision. Installed on a high-speed railway inspection train, it collects, manages and analyzes images of signal trackside equipment appearance while the train is running. The system automatically filters the signal trackside equipment images out of a large volume of background images and identifies equipment changes by comparison with the original image data. Combining ledger data and train location information, the system accurately locates the trackside equipment, providing concrete guidance for maintenance.

  9. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high-resolution instruments for remote sensing missions will lead to higher data rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement multiplied the data rate by 8 from SPOT4 to SPOT5 [1] and by 28 for PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene, in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows substantial compression gains by detecting and then differently compressing the regions of interest (ROI) and of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most CNES high-resolution images are cloudy [1], significant mass-memory and transmission gains could be achieved by just detecting and suppressing (or compressing heavily) the areas covered by clouds. Since 2007, CNES has been working on a cloud detection module [3] as a simplification, for on-board implementation, of an already existing module used on the ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS tools [5] is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].
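
    The per-pixel "computation of the SVM output" step reduces to evaluating a kernel expansion over the trained support vectors. The sketch below uses a generic RBF kernel with made-up support vectors, weights and bias, not the trained PLEIADES-HR classifier.

      import numpy as np

      def svm_output(features, sv, alpha, b, gamma=0.5):
          """RBF-kernel SVM decision value; > 0 means 'cloud' here."""
          k = np.exp(-gamma * np.sum((sv - features) ** 2, axis=1))
          return alpha @ k + b

      rng = np.random.default_rng(1)
      support_vectors = rng.random((10, 4))  # 4 multispectral criteria per pixel
      weights, bias = rng.standard_normal(10), -0.1
      pixel_features = rng.random(4)         # reflectance-derived characteristics
      is_cloud = svm_output(pixel_features, support_vectors, weights, bias) > 0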

  10. A novel shape similarity based elastography system for prostate cancer assessment

    NASA Astrophysics Data System (ADS)

    Wang, Haisu; Mousavi, Seyed Reza; Samani, Abbas

    2012-03-01

    Prostate cancer is the second most common cancer among men worldwide and remains the second leading cancer-related cause of death in mature men. The disease can be cured if it is detected at an early stage, so early detection is critical for a desirable treatment outcome. Conventional techniques of prostate cancer screening and detection, such as Digital Rectal Examination (DRE), Prostate-Specific Antigen (PSA) testing and Trans-Rectal Ultra-Sonography (TRUS), are known to have low sensitivity and specificity. Elastography is an imaging technique that uses tissue stiffness as its contrast mechanism. As the association between the degree of prostate tissue stiffness alteration and its pathology is well established, elastography can potentially detect prostate cancer with a high degree of sensitivity and specificity. In this paper, we present a novel elastography technique which, unlike other elastography techniques, does not require a displacement data acquisition system. This technique requires the prostate's pre-compression and post-compression transrectal ultrasound images. The conceptual foundation of reconstructing the elastic moduli of the prostate's normal and pathological tissues is to determine these moduli such that the similarity between calculated and observed shape features of the post-compression prostate image is maximized. Results indicate that this technique is highly accurate and robust.

  11. Data compression strategies for ptychographic diffraction imaging

    NASA Astrophysics Data System (ADS)

    Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas

    2017-12-01

    Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.

  12. A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Woodell, Glenn A.; Jobson, Daniel J.

    1997-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
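
    The single-channel multiscale retinex at the heart of the MSRCR can be sketched as an average of log-ratios between the image and Gaussian-blurred surrounds. The scales below are common choices rather than the paper's values, and the color restoration step is omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(img, sigmas=(15, 80, 250)):
          """Average log(image) - log(Gaussian surround) over several scales."""
          img = img.astype(float) + 1.0                  # avoid log(0)
          out = np.zeros_like(img)
          for s in sigmas:
              out += np.log(img) - np.log(gaussian_filter(img, s))
          return out / len(sigmas)

      gray = np.random.rand(128, 128) * 255              # stand-in image channel
      enhanced = multiscale_retinex(gray)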

  13. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528

  14. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
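
    The baseline L+S decomposition that LASSI extends can be sketched with alternating singular-value thresholding and soft thresholding on a fully sampled toy pixels-by-frames matrix. Thresholds and data are illustrative, and the adaptive spatiotemporal dictionary of LASSI is not modeled.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def l_plus_s(M, lam_l=1.0, lam_s=0.05, iters=50):
          """Split a (pixels x frames) matrix into low-rank L plus sparse S."""
          L, S = np.zeros_like(M), np.zeros_like(M)
          for _ in range(iters):
              U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
              L = (U * soft(sv, lam_l)) @ Vt    # singular-value thresholding
              S = soft(M - L, lam_s)            # sparse (identity transform here)
          return L, S

      frames = np.outer(np.ones(64), np.sin(np.linspace(0, 3, 20)))  # background
      frames[10, 5:8] += 2.0                    # transient "dynamic" feature
      L, S = l_plus_s(frames)                   # S captures the transient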

  15. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.

  16. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution for iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate [Formula: see text]. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
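
    The accelerated first-order structure is easy to sketch. Below, an l1 proximal step stands in for the paper's TV regularizer, and a random matrix stands in for the Fourier-weighted CT system model; backtracking line search is omitted by using a fixed Lipschitz constant.

      import numpy as np

      def fista(A, y, lam=0.1, iters=100):
          """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with Nesterov momentum."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
          x = z = np.zeros(A.shape[1])
          t = 1.0
          for _ in range(iters):
              g = z - (A.T @ (A @ z - y)) / L    # gradient step
              x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # prox
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((80, 120))
      x_true = np.zeros(120)
      x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
      x_rec = fista(A, A @ x_true)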

  17. Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data

    NASA Technical Reports Server (NTRS)

    Bose, Tamal

    2000-01-01

    A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.

  18. Digital map databases in support of avionic display systems

    NASA Astrophysics Data System (ADS)

    Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.

    1991-08-01

    The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CDROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays; current research efforts include the development of an alternate image compression technique and the generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CDROMs.

  19. Improved JPEG anti-forensics with better image visual quality and forensic undetectability.

    PubMed

    Singh, Gurinder; Singh, Kulbir

    2017-08-01

    There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed denoising operation. Two types of denoising algorithms are proposed: one based on a constrained minimization of the total-variation energy, and the other on a normalized weighted function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but with a high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in "Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments" (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in "Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System" (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASICs (application specific integrated circuits) and can be integrated as an intellectual property (IP) core in, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx Virtex IV LX25 device and ported to a Xilinx prototype board. The current implementation has a critical path of 29.5 ns, which dictated a clock speed of 33 MHz. The critical path delay is an end-to-end measurement between the uncompressed input data and the output compressed data stream. The implementation compresses one sample every clock cycle, which results in a speed of 33 Msamples/s. The implementation has rather low device utilization of the Xilinx Virtex IV LX25, giving a total power consumption of about 1.27 W.
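
    The core of the FL approach, adaptive linear prediction with a sign-algorithm weight update, can be sketched in one dimension. Predictor order and step size are illustrative rather than the flight parameters, and the entropy coding of the residuals is omitted.

      import numpy as np

      def sign_lms_residuals(samples, order=3, mu=0.01):
          """Emit prediction residuals; weights adapt via the sign algorithm."""
          w = np.zeros(order)
          residuals = np.zeros(samples.size)
          for n in range(order, samples.size):
              context = samples[n - order:n][::-1]  # most recent samples first
              e = samples[n] - w @ context          # prediction error
              residuals[n] = e                      # small values code cheaply
              w += mu * np.sign(e) * context        # sign-algorithm update
          return residuals

      rng = np.random.default_rng(3)
      line = np.cumsum(rng.integers(-2, 3, 500)).astype(float)  # smooth scan line
      res = sign_lms_residuals(line)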

  1. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use a block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.

  2. Virtual pathology of cervical radiculopathy based on 3D MR/CT fusion images: impingement, flattening or twisted condition of the compressed nerve root in three cases.

    PubMed

    Kamogawa, Junji; Kato, Osamu; Morizane, Tatsunori; Hato, Taizo

    2015-01-01

    There have been several imaging studies of cervical radiculopathy, but no three-dimensional (3D) images have shown the path, position, and pathological changes of the cervical nerve roots and spinal root ganglion relative to the cervical bony structure. The objective of this study was to introduce a technique that enables the virtual pathology of the nerve root to be assessed using 3D magnetic resonance (MR)/computed tomography (CT) fusion images that show the compression of the proximal portion of the cervical nerve root by both the herniated disc and the preforaminal or foraminal bony spur in patients with cervical radiculopathy. MR and CT images were obtained from three patients with cervical radiculopathy. 3D MR images were placed onto 3D CT images using a computer workstation. The entire nerve root could be visualized in 3D with or without the vertebrae. The most important characteristic evident on the images was flattening of the nerve root by a bony spur. The affected root was constricted at a pre-ganglion site. In cases of severe deformity, the flattened portion of the root seemed to change the angle of its path, resulting in a twisted condition. The 3D MR/CT fusion imaging technique enhances visualization of the pathoanatomy in the hidden cervical area composed of the nerve root and intervertebral foramen. This technique provides two distinct advantages for the diagnosis of cervical radiculopathy. First, the isolation of individual vertebrae clarifies the deformities of the whole root groove, including both the uncinate process and the superior articular process in the cervical spine. Second, the tortuous or twisted condition of a compressed root can be visualized. The surgeon can identify the narrowest face of the root by viewing the MR/CT fusion image from the posterolateral-inferior direction. Surgeons can use MR/CT fusion images as a pre-operative map and for intraoperative navigation. The MR/CT fusion images can also be used as educational materials for all hospital staff and for patients and patients' families who provide informed consent for treatments.

  3. A novel pulse compression algorithm for frequency modulated active thermography using band-pass filter

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Roy, Deboshree; Tuli, Suneet

    2017-05-01

    This paper proposes a novel pulse compression algorithm, in the context of frequency modulated thermal wave imaging. The compression filter is derived from a predefined reference pixel in a recorded video, which contains direct measurement of the excitation signal alongside the thermal image of a test piece. The filter causes all the phases of the constituent frequencies to be adjusted to nearly zero value, so that on reconstruction a pulse is obtained. Further, due to band-limited nature of the excitation, signal-to-noise ratio is improved by suppressing out-of-band noise. The result is similar to that of a pulsed thermography experiment, although the peak power is drastically reduced. The algorithm is successfully demonstrated on mild steel and carbon fibre reference samples. Objective comparisons of the proposed pulse compression algorithm with the existing techniques are presented.
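
    The essence of compressing against a recorded reference is a matched-filter correlation. The synthetic sketch below recovers the delay of a weak frequency-modulated echo; all signal parameters are arbitrary.

      import numpy as np

      fs, T = 1000.0, 10.0                     # sample rate (Hz), duration (s)
      t = np.arange(0, T, 1 / fs)
      ref = np.sin(2 * np.pi * (0.1 + 0.05 * t) * t)   # linear FM "excitation"
      rng = np.random.default_rng(4)
      pixel = 0.3 * np.roll(ref, 150) + 0.05 * rng.standard_normal(t.size)

      # Correlating with the reference compresses the FM sweep into a pulse.
      compressed = np.correlate(pixel, ref, mode='full')
      delay = np.argmax(np.abs(compressed)) - (t.size - 1)   # ~150 samples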

  4. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  5. Shear wave pulse compression for dynamic elastography using phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nguyen, Thu-Mai; Song, Shaozhen; Arnal, Bastien; Wong, Emily Y.; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew

    2014-01-01

    Assessing the biomechanical properties of soft tissue provides clinically valuable information to supplement conventional structural imaging. In previous studies, we introduced a dynamic elastography technique based on phase-sensitive optical coherence tomography (PhS-OCT) to characterize submillimetric structures such as skin layers or ocular tissues. Here, we propose to implement a pulse compression technique for shear wave elastography. We performed shear wave pulse compression in tissue-mimicking phantoms. Using a mechanical actuator to generate broadband frequency-modulated vibrations (1 to 5 kHz), induced displacements were detected at an equivalent frame rate of 47 kHz using PhS-OCT. The recorded signal was digitally compressed to a broadband pulse. Stiffness maps were then reconstructed from spatially localized estimates of the local shear wave speed. We demonstrate that a simple pulse compression scheme can increase the shear wave detection signal-to-noise ratio (>12 dB gain) and reduce artifacts in reconstructed stiffness maps of heterogeneous media.

  6. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.

  7. Remote driving with reduced bandwidth communication

    NASA Technical Reports Server (NTRS)

    Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.

    1993-01-01

    Oak Ridge National Laboratory has developed a real-time video transmission system for low-bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation of the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for the black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted by using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration for the system used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
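
    A Laplacian pyramid of the kind used for the subband transmission can be built in a few lines; each level stores the detail lost by blur-and-downsample, and the filter width and level count here are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def laplacian_pyramid(img, levels=3):
          pyr, current = [], img.astype(float)
          for _ in range(levels):
              low = gaussian_filter(current, 1.0)
              down = low[::2, ::2]                      # decimate
              up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
              pyr.append(current - up)                  # detail subband
              current = down
          pyr.append(current)                           # coarse residual
          return pyr

      frame = np.random.rand(128, 128)                  # stand-in video frame
      bands = laplacian_pyramid(frame)     # transmit selected subbands per frame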

  8. Adaptive single-pixel imaging with aggregated sampling and continuous differential measurements

    NASA Astrophysics Data System (ADS)

    Huo, Yaoran; He, Hongjie; Chen, Fan; Tai, Heng-Ming

    2018-06-01

    This paper proposes an adaptive compressive imaging technique with a single-pixel detector and a single arm. The aggregated sampling (AS) method enables reconstruction at reduced resolutions, with the aim of reducing time and space consumption. A target image with a resolution up to 1024 × 1024 can be reconstructed successfully at a 20% sampling rate. The continuous differential measurement (CDM) method, combined with a ratio factor of significant coefficient (RFSC), improves the imaging quality. Moreover, RFSC reduces human intervention in parameter setting. This technique enhances the practicability of single-pixel imaging through reduced time and space consumption, better imaging quality and less human intervention.

  9. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  10. Full-field measurement of micromotion around a cementless femoral stem using micro-CT imaging and radiopaque markers.

    PubMed

    Malfroy Camine, V; Rüdiger, H A; Pioletti, D P; Terrier, A

    2016-12-08

    A good primary stability of cementless femoral stems is essential for the long-term success of total hip arthroplasty. Experimental measurement of implant micromotion with linear variable differential transformers (LVDTs) is commonly used to assess implant primary stability in pre-clinical testing, but these measurements are often limited to a few distinct points at the interface. New techniques based on micro-computed tomography (micro-CT) have recently been introduced, such as Digital Volume Correlation (DVC) or marker-based approaches. DVC is, however, limited to measurements around non-metallic implants due to metal-induced imaging artifacts, and marker-based techniques are confined to a small portion of the implant. In this paper, we present a technique based on micro-CT imaging and radiopaque markers that provides the first full-field micromotion measurement at the entire bone-implant interface of a cementless femoral stem implanted in a cadaveric femur. Micromotion was measured during compression and torsion. Over 300 simultaneous measurement points were obtained. Micromotion amplitude ranged from 0 to 24 µm in compression and from 0 to 49 µm in torsion. Peak micromotion was distal in compression and proximal in torsion. The technique's bias was 5.1 µm and its repeatability standard deviation was 4 µm. The method was thus highly reliable and compared well with LVDT results reported in the literature. These results indicate that this micro-CT based technique is well suited to observing local variations in primary stability around metallic implants. Possible applications include pre-clinical testing of implants and validation of patient-specific models for pre-operative planning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
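
    A bare-bones vector quantizer, a k-means codebook trained on 4x4 image blocks, illustrates the basic coder that the thesis modifies; the distributed blocks, weighted distortion functions and neural-network designs are not modeled here.

      import numpy as np

      def train_codebook(vectors, k=16, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), k, replace=False)]
          for _ in range(iters):
              d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
              labels = d.argmin(1)                  # nearest-codeword indices
              for j in range(k):                    # move codewords to centroids
                  if np.any(labels == j):
                      codebook[j] = vectors[labels == j].mean(0)
          return codebook, labels

      img = np.random.rand(64, 64)
      blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)  # 4x4
      codebook, indices = train_codebook(blocks)    # store indices + codebook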

  12. Characterization of particle deformation during compression measured by confocal laser scanning microscopy.

    PubMed

    Guo, H X; Heinämäki, J; Yliruusi, J

    1999-09-20

    Direct compression of riboflavin sodium phosphate tablets was studied by confocal laser scanning microscopy (CLSM). The technique is non-invasive and generates three-dimensional (3D) images. Tablets of 1% riboflavin sodium phosphate with two grades of microcrystalline cellulose (MCC) were individually compressed at compression forces of 1.0 and 26.8 kN. The behaviour and deformation of drug particles on the upper and lower surfaces of the tablets were studied under compression forces. Even at the lower compression force, distinct recrystallized areas in the riboflavin sodium phosphate particles were observed in both Avicel PH-101 and Avicel PH-102 tablets. At the higher compression force, the recrystallization of riboflavin sodium phosphate was more extensive on the upper surface of the Avicel PH-102 tablet than the Avicel PH-101 tablet. The plastic deformation properties of both MCC grades reduced the fragmentation of riboflavin sodium phosphate particles. When compressed with MCC, riboflavin sodium phosphate behaved as a plastic material. The riboflavin sodium phosphate particles were more tightly bound on the upper surface of the tablet than on the lower surface, and this could also be clearly distinguished by CLSM. Drug deformation could not be visualized by other techniques. Confocal laser scanning microscopy provides valuable information on the internal mechanisms of direct compression of tablets.

  13. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple-watermarking technique based on image interlacing is proposed to solve this problem. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of the technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  14. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.

  15. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    NASA Astrophysics Data System (ADS)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In a single-pixel imaging scheme, N = r · c samples should be taken for an r × c pixel image. CS is a popular technique in which sparse signals can be reconstructed from fewer samples than the Nyquist rate requires; it is therefore a good candidate to solve the slow data acquisition problem in terahertz (THz) single-pixel imaging. However, changing the mask for each measurement is challenging, since there are no commercial spatial light modulators (SLMs) for the THz band yet; circular masks are therefore suggested, so that shifting by one or two columns is enough to change the mask for each measurement. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, while CS reduces acquisition time and energy, since it allows the image to be reconstructed from fewer samples.
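
    The mechanical advantage of circular masks is that the whole sensing matrix is circulant, generated by a single strip. The sketch below uses a pseudorandom 0/1 strip as a stand-in for the cyclic-S sequence.

      import numpy as np

      n = 63                                    # pixels per (flattened) mask
      rng = np.random.default_rng(5)
      strip = rng.integers(0, 2, n)             # one physical strip of the SLM
      masks = np.stack([np.roll(strip, k) for k in range(n)])  # circulant matrix

      m = n // 2                                # ~50% compression: half the rows
      x = np.random.rand(n)                     # flattened scene
      y = masks[:m] @ x                         # single-pixel measurements
      # Reconstruction solves y = masks[:m] @ x with a sparsity prior
      # (e.g. a TV-minimization solver such as TVAL3, as in the paper).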

  16. Compressibility of porous TiO2 nanoparticle coating on paperboard

    PubMed Central

    2013-01-01

    Compressibility of liquid flame spray-deposited porous TiO2 nanoparticle coating was studied on paperboard samples using a traditional calendering technique in which the paperboard is compressed between a metal and a polymer roll. Surface superhydrophobicity is lost due to a smoothing effect as the number of successive calendering cycles is increased. Field emission scanning electron microscope surface and cross-sectional images support the atomic force microscope roughness analysis, which shows a significant compressibility of the deposited TiO2 nanoparticle coating, with a decrease in surface roughness and nanoscale porosity under external pressure. PACS 61.46.-w; 68.08.Bc; 81.07.-b PMID:24160373

  17. 3D single point imaging with compressed sensing provides high temporal resolution R2* mapping for in vivo preclinical applications.

    PubMed

    Rioux, James A; Beyea, Steven D; Bowen, Chris V

    2017-02-01

    Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.

  18. Recent advances in lossless coding techniques

    NASA Astrophysics Data System (ADS)

    Yovanof, Gregory S.

    Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
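
    Of the statistical techniques surveyed, Huffman coding is the simplest to sketch; the symbol frequencies below are arbitrary.

      import heapq

      def huffman_codes(freqs):
          """Map symbols to prefix-free bit strings from a frequency dict."""
          heap = [[f, i, {s: ''}] for i, (s, f) in enumerate(freqs.items())]
          heapq.heapify(heap)
          while len(heap) > 1:
              lo = heapq.heappop(heap)
              hi = heapq.heappop(heap)
              lo[2] = {s: '0' + c for s, c in lo[2].items()}  # left branch
              hi[2] = {s: '1' + c for s, c in hi[2].items()}  # right branch
              heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
          return heap[0][2]

      codes = huffman_codes({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
      # The most frequent symbol gets the shortest code; all codes are prefix-free.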

  19. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed with JPEG and Wavelet techniques to five different image sizes. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, Wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for Wavelet compression before fine detail was lost or image quality was too poor to make a reliable diagnosis.

  20. [Usefulness of curved coronal MPR imaging for the diagnosis of cervical radiculopathy].

    PubMed

    Inukai, Chikage; Inukai, Takashi; Matsuo, Naoki; Shimizu, Ikuo; Goto, Hisaharu; Takagi, Teruhide; Takayasu, Masakazu

    2010-03-01

    In the surgical treatment of cervical radiculopathy, localization of the responsible lesions by various imaging modalities is essential. Among them, MRI is non-invasive and plays a primary role in the assessment of spinal radicular symptoms. However, demonstration of nerve root compression is sometimes difficult with conventional MRI methods, such as T1-weighted (T1W) and T2-weighted (T2W) sagittal or axial images. We have applied a new technique of curved coronal multiplanar reconstruction (MPR) imaging to the diagnosis of cervical radiculopathy. Ten patients (4 male, 6 female), aged between 31 and 79 years, who had a clinical diagnosis of cervical radiculopathy, were included in this study. Seven patients underwent anterior key-hole foraminotomy to decompress the nerve root, with successful results. All the patients had 3D MRI studies, such as true fast imaging with steady-state precession (FISP), 3D T2W sampling perfection with application-optimized contrasts using different flip angle evolution (SPACE), and 3D multi-echo data image combination (MEDIC) imaging, in addition to the routine MRI (1.5 T Avanto, Siemens, Germany) with a phased-array coil. The curved coronal MPR images were produced from these MRI data using a workstation. The nerve root compression was diagnosed by curved coronal MPR images in all the patients. The compression sites were compatible with the operative findings in the 7 patients who underwent surgical treatment. The MEDIC images demonstrated the nerve root best, and the 3D SPACE images were next. Curved coronal MPR imaging is useful for accurate localization of the compressing lesions in patients with cervical radiculopathy.

  1. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared, backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
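
    Structurally, the two-layer idea can be sketched with a global gamma curve standing in for the paper's adaptive tone mapping operator, and raw arrays standing in for the AVC/HEVC-coded streams.

      import numpy as np

      def tone_map(hdr, gamma=2.2):
          ldr = (hdr / hdr.max()) ** (1 / gamma) * 255
          return np.clip(ldr, 0, 255).astype(np.uint8)   # 8-bit base layer

      def inverse_tone_map(ldr, peak, gamma=2.2):
          return (ldr.astype(float) / 255) ** gamma * peak

      hdr_frame = np.random.rand(32, 32) ** 4 * 4000.0   # synthetic HDR frame
      base = tone_map(hdr_frame)                 # decodable on legacy equipment
      predicted = inverse_tone_map(base, hdr_frame.max())
      enhancement = hdr_frame - predicted        # residual enhancement layer
      reconstructed = predicted + enhancement    # HDR-capable decoder output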

  2. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part, techniques of band exclusion (BE) and local optimization (LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  3. A Compressive Sensing Approach for Glioma Margin Delineation Using Mass Spectrometry

    PubMed Central

    Gholami, Behnood; Agar, Nathalie Y. R.; Jolesz, Ferenc A.; Haddad, Wassim M.; Tannenbaum, Allen R.

    2013-01-01

    Surgery, and specifically, tumor resection, is the primary treatment for most patients suffering from brain tumors. Medical imaging techniques, and in particular, magnetic resonance imaging are currently used in diagnosis as well as image-guided surgery procedures. However, studies show that computed tomography and magnetic resonance imaging fail to accurately identify the full extent of malignant brain tumors and their microscopic infiltration. Mass spectrometry is a well-known analytical technique used to identify molecules in a given sample based on their mass. In a recent study, it is proposed to use mass spectrometry as an intraoperative tool for discriminating tumor and non-tumor tissue. Integration of mass spectrometry with the resection module allows for tumor resection and immediate molecular analysis. In this paper, we propose a framework for tumor margin delineation using compressive sensing. Specifically, we show that the spatial distribution of tumor cell concentration can be efficiently reconstructed and updated using mass spectrometry information from the resected tissue. In addition, our proposed framework is model-free, and hence, requires no prior information of spatial distribution of the tumor cell concentration. PMID:22255629

  4. The Simultaneous Combination of Phase Contrast Imaging with In Situ X-ray diffraction from Shock Compressed Matter

    NASA Astrophysics Data System (ADS)

    McBride, Emma Elizabeth; Seiboth, Frank; Cooper, Leora; Frost, Mungo; Goede, Sebastian; Harmand, Marion; Levitan, Abe; McGonegle, David; Miyanishi, Kohei; Ozaki, Norimasa; Roedel, Melanie; Sun, Peihao; Wark, Justin; Hastings, Jerry; Glenzer, Siegfried; Fletcher, Luke

    2017-10-01

    Here, we present the simultaneous combination of phase contrast imaging (PCI) techniques with in situ X-ray diffraction to investigate multiple-wave features in laser-driven shock-compressed germanium. Experiments were conducted at the Matter at Extreme Conditions end station at the LCLS, and measurements were made perpendicular to the shock propagation direction. PCI allows one to take femtosecond snapshots of magnified real-space images of shock waves as they progress through matter. X-ray diffraction perpendicular to the shock propagation direction provides the opportunity to isolate and identify different waves and determine the crystal structure unambiguously. Here, we combine these two powerful techniques simultaneously, using the same Be lens setup to focus the fundamental beam at 8.2 keV to a size of 1.5 mm on target for PCI and the 3rd harmonic at 24.6 keV to a spot size of 2 µm on target for diffraction.

  5. Photogrammetric point cloud compression for tactical networks

    NASA Astrophysics Data System (ADS)

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.

    2017-05-01

    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness to Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  6. Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada

    2009-08-01

    Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.

  7. Efficacy of gradual pressure-decline compressing stockings in Asian patients with lower leg varicose veins: analysis by general measurements and magnetic resonance image.

    PubMed

    Leung, T K; Lin, J M; Chu, C L; Wu, Y S; Chao, Y J

    2012-12-01

    Most applications of gradual pressure-decline compressing stockings (GPDCS) are in the United States and Western European countries, with over a decade of clinical experience. Up to now, there is no established standard of GPDCS for Asian patients with venous insufficiency and varicose vein formation. We collected data on volunteer candidates with varicose veins for general measurements and assessments and for magnetic resonance imaging (MRI) using non-contrast-enhanced MRV techniques, followed by post-processing data analysis. Clinical use of GPDCS provided a mild to moderate improvement in the varicose vein conditions of patients with deep venous insufficiency by improving their deep vein circulation, as assessed by general measurements; recording of major symptoms and complaints; comfort and stretching/flexibility reported by the candidates after using GPDCS; and area changes, flow velocity changes, and available hemoglobin changes in deep veins monitored by MRI. The benefits and data collected here may help in developing compression stocking standards for Taiwan and other Asian countries, and in establishing criteria for product sizes, compression levels, and related parameters.

  8. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm performs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the stored or transmitted bit stream. It was applied to the compression of multichannel ECG data, and a specific procedure based on the modified algorithm is presented for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results in the compression of multichannel ECG data. Furthermore, to compress one signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
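
    For intuition about why wavelet coders work so well on ECG, here is a minimal sketch of the energy compaction that SPIHT-type coders exploit, written with PyWavelets. It simply keeps the largest coefficients rather than implementing the ESPIHT bit-plane coder itself, so the wavelet, level, and keep fraction are illustrative assumptions.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_compress(signal, wavelet="db4", level=4, keep=0.10):
        """Zero all but the largest `keep` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0
        kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec")
        return pywt.waverec(kept, wavelet)[: len(signal)]

    # Toy multichannel "ECG": compress each channel independently
    ecg = np.cumsum(np.random.randn(3, 1024), axis=1)
    recon = np.stack([wavelet_compress(ch) for ch in ecg])
    prd = np.linalg.norm(ecg - recon) / np.linalg.norm(ecg)
    print(f"PRD: {100 * prd:.2f}%")  # percent RMS difference at 10:1 coefficient reduction
    ```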

  9. Holographic techniques for cellular fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Myung K.

    2017-04-01

    We have constructed a prototype instrument for holographic fluorescence microscopy (HFM) based on self-interference incoherent digital holography (SIDH) and demonstrated novel imaging capabilities such as differential 3D fluorescence microscopy and optical sectioning by compressive sensing.

  10. Micropillar Compression Technique Applied to Micron-Scale Mudstone Elasto-Plastic Deformation

    NASA Astrophysics Data System (ADS)

    Dewers, T. A.; Boyce, B.; Buchheit, T.; Heath, J. E.; Chidsey, T.; Michael, J.

    2010-12-01

    Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. 2004, is applied here to elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precision-manufactured using an ion-milling method. Characterization of samples is carried out using: dual focused ion/scanning electron beam imaging of nano-scaled pores and the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized using a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing are carried out using 2.54 cm plugs. Discussion of results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate. This work is funded by the US Department of Energy, Office of Basic Energy Sciences. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jesse S.; Sinogeikin, Stanislav V.; Lin, Chuanlong

    Complementary advances in high pressure research apparatus and techniques make it possible to carry out time-resolved high pressure research using what would customarily be considered static high pressure apparatus. This work specifically explores time-resolved high pressure x-ray diffraction with rapid compression and/or decompression of a sample in a diamond anvil cell. Key aspects of the synchrotron beamline and ancillary equipment are presented, including source considerations, rapid (de)compression apparatus, high frequency imaging detectors, and software suitable for processing large volumes of data. A number of examples are presented, including fast equation of state measurements, compression rate dependent synthesis of metastable states in silicon and germanium, and ultrahigh compression rates using a piezoelectric driven diamond anvil cell.

  12. Tissue Acoustoelectric Effect Modeling From Solid Mechanics Theory.

    PubMed

    Song, Xizi; Qin, Yexian; Xu, Yanbin; Ingram, Pier; Witte, Russell S; Dong, Feng

    2017-10-01

    The acoustoelectric (AE) effect is a basic physical phenomenon in which the application of focused ultrasound changes the conductivity of a medium. Recently, several biomedical imaging techniques based on the AE effect have been widely studied, such as ultrasound-modulated electrical impedance tomography and ultrasound current source density imaging. To further investigate the mechanism of the AE effect in tissue and to provide guidance for such techniques, we have modeled the tissue AE effect using the theory of solid mechanics. Both bulk compression and thermal expansion of tissue are considered and discussed. Computational simulation shows that the muscle AE effect, expressed as a conductivity change rate, is 3.26×10⁻³ at 4.3 MPa peak pressure, in agreement with the theoretical value. Bulk compression plays the main role in the muscle AE effect, while thermal expansion contributes almost nothing to it. In addition, the AE signals of porcine muscle were measured at different focal positions. With the same order of magnitude and the same trend, the experimental results confirm that the simulation is effective. Both simulation and experimental results validate that modeling the tissue AE effect with solid mechanics theory is feasible, which is of significance for the further development of related biomedical imaging techniques.

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
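
    The predict-then-encode pipeline can be sketched digitally in a few lines; on the chip, prediction happens in the analog domain and the Golomb-Rice coder shares circuitry with the single-slope ADC, so the sketch below (with its fixed Rice parameter k and mid-gray start, both choices of ours) only mirrors the logical structure.

    ```python
    import numpy as np

    def rice_encode(v, k):
        """Golomb-Rice code: unary-coded quotient, then k-bit binary remainder."""
        q, r = v >> k, v & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

    def zigzag(e):
        """Map signed residuals to non-negative codes: 0,-1,1,-2,2 -> 0,1,2,3,4."""
        return 2 * e if e >= 0 else -2 * e - 1

    def encode_row(row, k=2):
        """Previous-pixel prediction plus Golomb-Rice coding of the residuals."""
        prev, bits = 128, []                  # assume a mid-gray starting prediction
        for pix in row.astype(int):
            bits.append(rice_encode(zigzag(pix - prev), k))
            prev = pix
        return "".join(bits)

    row = np.array([128, 129, 131, 130, 130, 133], dtype=np.uint8)
    stream = encode_row(row)
    print(len(stream), "bits vs", 8 * len(row), "bits raw")
    ```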

  14. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

    Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and the scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan©504 phantom, the proposed method reduced the error of the CT number to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high frequency image data and concurrent detection of the patient-specific low frequency scatter data at the edges of the FOV, is proposed and validated in this work. Relative to blocker-based techniques, rather than obstructing the central portion of the FOV, which degrades and limits the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
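
    The recovery step can be miniaturized as follows, under our own assumptions: an ISTA solver, an orthonormal DCT, and data fidelity enforced only on the blocked boundary columns. The paper's actual optimization and its hybrid interpolation/convolution prior are more elaborate; this only shows the "L1 norm of the DCT" mechanics.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def recover_scatter(meas, mask, lam=0.1, lr=1.0, iters=200):
        """ISTA: gradient step on the edge-only data term, then soft
        thresholding in the DCT domain (scatter is low frequency)."""
        s = np.zeros_like(meas)
        for _ in range(iters):
            grad = np.where(mask, s - meas, 0.0)
            c = dctn(s - lr * grad, norm="ortho")
            c = np.sign(c) * np.maximum(np.abs(c) - lr * lam, 0.0)
            s = idctn(c, norm="ortho")
        return s

    # Toy example: a smooth scatter field observed only at the FOV edges
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    true = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.2)
    mask = np.zeros_like(true, dtype=bool)
    mask[:, :6] = True
    mask[:, -6:] = True
    est = recover_scatter(np.where(mask, true, 0.0), mask)
    print("mean absolute error:", np.abs(est - true).mean())
    ```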

  15. Mismatch and resolution in compressive imaging

    NASA Astrophysics Data System (ADS)

    Fannjiang, Albert; Liao, Wenjing

    2011-09-01

    Highly coherent sensing matrices arise in the discretization of continuum problems, such as radar and medical imaging, when the grid spacing is below the Rayleigh threshold, as well as in the use of highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on the techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.

  16. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that benefits from compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI is transformed into a solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  17. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
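
    To make the "decoupling" concrete, here is a hedged sketch of split Bregman applied to anisotropic TV denoising, the archetypal Goldstein-Osher example rather than the sonar pipeline above. Under an assumed periodic boundary the quadratic u-subproblem diagonalizes in the Fourier domain, and the l1 d-subproblem is plain soft thresholding, which is why each iteration is cheap.

    ```python
    import numpy as np

    def grad(u):                       # forward differences, periodic boundary
        return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

    def div(px, py):                   # negative adjoint of grad
        return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

    def shrink(x, t):                  # soft thresholding: the l1 (d) subproblem
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def tv_denoise(f, mu=10.0, lam=5.0, iters=60):
        """min_u mu/2 ||u - f||^2 + |grad u|_1 via split Bregman."""
        n1, n2 = f.shape
        wx = 2 - 2 * np.cos(2 * np.pi * np.arange(n1) / n1)
        wy = 2 - 2 * np.cos(2 * np.pi * np.arange(n2) / n2)
        denom = mu + lam * (wx[:, None] + wy[None, :])  # Fourier symbol of mu + lam*Laplacian
        u = f.copy()
        dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
        for _ in range(iters):
            rhs = mu * f - lam * div(dx - bx, dy - by)  # quadratic u-subproblem
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
            ux, uy = grad(u)
            dx, dy = shrink(ux + bx, 1 / lam), shrink(uy + by, 1 / lam)
            bx, by = bx + ux - dx, by + uy - dy         # Bregman updates
        return u

    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * np.random.randn(64, 64)
    print(np.abs(tv_denoise(noisy) - clean).mean(), "vs", np.abs(noisy - clean).mean())
    ```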

  18. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  19. Biochemical Imaging of Gliomas Using MR Spectroscopic Imaging for Radiotherapy Treatment Planning

    NASA Astrophysics Data System (ADS)

    Heikal, Amr Ahmed

    This thesis discusses the main obstacles facing wide clinical implementation of magnetic resonance spectroscopic imaging (MRSI) as a tumor delineation tool for radiotherapy treatment planning, particularly for gliomas. These main obstacles are identified as (1) observer bias and poor interpretational reproducibility of the results of MRSI scans, and (2) the long scan times required to conduct MRSI scans. An examination of an existing user-independent MRSI tumor delineation technique known as the choline-to-NAA index (CNI) is conducted to assess its utility in providing a tool for reproducible interpretation of MRSI results. While working with spatial resolutions typically twice those on which the CNI model was originally designed, a region of statistical uncertainty was discovered between the tumor and normal tissue populations, and as such a modification to the CNI model was introduced to clearly identify that region. To address the issue of long scan times, a series of studies was conducted to adapt a scan acceleration technique, compressed sensing (CS), to work with MRSI and to quantify the effects of this novel technique on the modulation transfer function (MTF), an important quantitative imaging metric. The studies included the development of the first phantom-based method of measuring the MTF for MRSI data, a study of the correlation between the k-space sampling patterns used for compressed sensing and the resulting MTFs, and the introduction of a technique circumventing some of the side effects of compressed sensing by exploiting the conjugate symmetry property of k-space. The work in this thesis provides two essential steps towards wide clinical implementation of MRSI-based tumor delineation. The proposed modifications to the CNI method coupled with the application of CS to MRSI address the two main obstacles outlined. However, there continues to be room for improvement and questions that need to be answered by future research.

  20. R&D 100, 2016: Ultrafast X-ray Imager

    ScienceCinema

    Porter, John; Claus, Liam; Sanchez, Marcos; Robertson, Gideon; Riley, Nathan; Rochau, Greg

    2018-06-13

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  1. R&D 100, 2016: Ultrafast X-ray Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, John; Claus, Liam; Sanchez, Marcos

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  2. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  3. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
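
    A hedged sketch of the three-step scenario for one method (JPEG) and one metric (SSIM), using Pillow and scikit-image; the cubic polynomial is an arbitrary stand-in for the paper's regression models, and the input filename is hypothetical.

    ```python
    import io
    import numpy as np
    from PIL import Image
    from skimage.metrics import structural_similarity

    def jpeg_ssim_curve(img, qualities=range(10, 96, 5)):
        """Step 1: compress at several settings and record the resulting IQ."""
        ref = np.asarray(img.convert("L"))
        pts = []
        for q in qualities:
            buf = io.BytesIO()
            img.save(buf, "JPEG", quality=q)
            dec = np.asarray(Image.open(buf).convert("L"))
            pts.append((q, structural_similarity(ref, dec, data_range=255)))
        return pts

    def quality_for_ssim(pts, target):
        """Steps 2-3: regress SSIM against quality, then invert the model."""
        q, s = np.array(pts).T
        coeff = np.polyfit(q, s, 3)            # cubic regression model
        grid = np.linspace(q.min(), q.max(), 500)
        ok = grid[np.polyval(coeff, grid) >= target]
        return int(ok.min()) if ok.size else int(q.max())

    img = Image.open("thermal_frame.png").convert("L")  # hypothetical LWIR frame
    print("quality for SSIM 0.8:", quality_for_ssim(jpeg_ssim_curve(img), 0.8))
    ```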

  4. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
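
    The gathering/computing/optimizing split can be miniaturized to a single DCT frequency band with a scalar quantization value. The sketch below builds the distortion and rate arrays over candidate values and picks the Lagrangian optimum; this is our simplification of the patent's dynamic-programming design of a full table, and the Lagrange multiplier is an arbitrary choice.

    ```python
    import numpy as np

    def rd_arrays(coeffs, qvals):
        """Distortion (MSE) and rate (empirical entropy, bits) per candidate q."""
        D, R = [], []
        for q in qvals:
            idx = np.round(coeffs / q)
            D.append(np.mean((coeffs - idx * q) ** 2))
            _, counts = np.unique(idx, return_counts=True)
            p = counts / counts.sum()
            R.append(-(p * np.log2(p)).sum())
        return np.array(D), np.array(R)

    def best_q(coeffs, qvals, lam=50.0):
        """Pick the value minimizing the Lagrangian D + lam * R."""
        D, R = rd_arrays(coeffs, qvals)
        return qvals[int(np.argmin(D + lam * R))]

    coeffs = np.random.laplace(scale=8.0, size=10_000)  # AC coefficients are Laplacian-like
    print("chosen q:", best_q(coeffs, np.arange(1, 64)))
    ```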

  5. Comparative performance evaluation of transform coding in image pre-processing

    NASA Astrophysics Data System (ADS)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development and dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing demand for faster and easier encoding, storage, and transmission of visual information. In this paper, the researchers intend to throw light on techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare several contemporary image pre-processing frameworks, namely Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform, on performance. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.

  6. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3 dimensions that include sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  7. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal tissue. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression or strain imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction, and the shear wave velocity is proportional to the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and contrast-to-noise ratio were calculated for benign and malignant cases. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.

  8. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.

  9. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
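
    Per band, the onboard resampling step amounts to interpolating the image at the grid-supplied subpixel offsets. A minimal sketch with SciPy, using a constant offset and bilinear interpolation where the real system evaluates a full registration grid on the fly with fully quantized arithmetic:

    ```python
    import numpy as np
    from scipy.ndimage import shift

    def register_band(band, dx, dy):
        """Resample one spectral band by a subpixel offset (dx, dy); order=1
        (bilinear) keeps the arithmetic simple enough for onboard use."""
        return shift(band, (dy, dx), order=1, mode="nearest")

    band = np.random.rand(256, 256)
    aligned = register_band(band, dx=0.37, dy=-0.21)
    ```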

  10. Accelerated echo-planar J-resolved spectroscopic imaging in the human brain using compressed sensing: a pilot validation in obstructive sleep apnea.

    PubMed

    Sarma, M K; Nagarajan, R; Macey, P M; Kumar, R; Villablanca, J P; Furuyama, J; Thomas, M A

    2014-06-01

    Echo-planar J-resolved spectroscopic imaging is a fast spectroscopic technique for recording the biochemical information in multiple regions of the brain, but for clinical applications time is still a constraint. Investigations of neural injury in obstructive sleep apnea have revealed structural changes in the brain, but determining the neurochemical changes requires more detailed measurements across multiple brain regions, demonstrating a need for faster echo-planar J-resolved spectroscopic imaging. Hence, we have extended the compressed sensing reconstruction of prospectively undersampled 4D echo-planar J-resolved spectroscopic imaging to investigate metabolic changes in multiple brain locations of patients with obstructive sleep apnea and healthy controls. Nonuniform undersampling was imposed along 1 spatial and 1 spectral dimension of 4D echo-planar J-resolved spectroscopic imaging, and the test-retest reliability of the compressed sensing reconstruction of the nonuniform undersampling data was assessed by using a brain phantom. In addition, 9 patients with obstructive sleep apnea and 11 healthy controls were investigated by using a 3T MR imaging/MR spectroscopy scanner. Significantly reduced metabolite ratios were observed in patients with obstructive sleep apnea relative to healthy controls in multiple brain regions: NAA/Cr in the left hippocampus; total Cho/Cr and Glx/Cr in the right hippocampus; total NAA/Cr, taurine/Cr, scyllo-inositol/Cr, phosphocholine/Cr, and total Cho/Cr in the occipital gray matter; total NAA/Cr and NAA/Cr in the medial frontal white matter; and taurine/Cr and total Cho/Cr in the left frontal white matter regions. The 4D echo-planar J-resolved spectroscopic imaging technique, using nonuniform undersampling-based acquisition and compressed sensing reconstruction, is feasible in a clinically suitable time in patients with obstructive sleep apnea and in the healthy brain. In addition to the brain metabolite changes previously reported by 1D MR spectroscopy, our results show changes in additional metabolites in patients with obstructive sleep apnea compared with healthy controls. © 2014 by American Journal of Neuroradiology.

  11. Wavelet-based higher-order neural networks for mine detection in thermal IR imagery

    NASA Astrophysics Data System (ADS)

    Baertlein, Brian A.; Liao, Wen-Jiao

    2000-08-01

    An image processing technique is described for the detection of mines in IR imagery. The proposed technique is based on a third-order neural network, which processes the output of a wavelet packet transform. The technique is inherently invariant to changes in signature position, rotation and scaling. The well-known memory limitations that arise with higher-order neural networks are addressed by (1) the data compression capabilities of wavelet packets, (2) projections of the image data into a space of similar triangles, and (3) quantization of that 'triangle space'. Using these techniques, image chips of size 28 by 28, which would require O(10^9) neural net weights, are processed by a network having O(10^2) weights. ROC curves are presented for mine detection in real and simulated imagery.

  12. GPU-accelerated compressed-sensing (CS) image reconstruction in chest digital tomosynthesis (CDT) using CUDA programming

    NASA Astrophysics Data System (ADS)

    Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung

    2017-03-01

    A compressed-sensing (CS) technique has been rapidly applied in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Due to the iterative nature of the time-consuming process of solving a cost function, we took advantage of parallel computing on graphics processing units (GPU) with compute unified device architecture (CUDA) programming to accelerate our algorithm. To compare algorithmic performance, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that CS produced better contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) by factors of 3.91 and 1.93 compared with FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules of different diameters, also showed better visual appearance in the CS images. Our proposed GPU-accelerated CS reconstruction scheme produced volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix from 41 projection views was 216.74 seconds with our GPU implementation, which matches the clinically feasible time (about 3 minutes). Consequently, our results demonstrated that the proposed CS method has the potential for additional dose reduction in digital tomosynthesis with reasonable image quality in a fast time.

  13. Embedded importance watermarking for image verification in radiology

    NASA Astrophysics Data System (ADS)

    Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek

    2004-03-01

    Digital medical images used in radiology are quite different from everyday continuous tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be of large size and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and the incidental degradation occurring during transmission raise potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images, in particular detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain the essential diagnostic high-spatial-frequency information.

  14. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.

  15. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
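
    The same tiled-image convention is scriptable from Python through astropy, which wraps the CFITSIO routines fpack relies on. A hedged sketch follows; note that the tile_shape keyword is from recent astropy releases (older ones call it tile_size), so check your version.

    ```python
    import numpy as np
    from astropy.io import fits

    img = (1000 * np.random.rand(512, 512)).astype(np.int32)
    # Row-by-row tiling with Rice compression, matching fpack's usual defaults;
    # the header stays readable without decompressing the pixel data.
    hdu = fits.CompImageHDU(data=img, compression_type="RICE_1",
                            tile_shape=(1, 512))
    hdu.writeto("compressed.fits", overwrite=True)
    assert np.array_equal(fits.getdata("compressed.fits"), img)  # lossless on integers
    ```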

  16. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  17. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  18. Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy

    PubMed Central

    Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.

    2017-01-01

    Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168

  19. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distributions. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.

  20. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average in which the centroids move after every vector is encoded. When computed over a fixed set of vectors, the recursive centroid is identical to the batch centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques, as sketched below. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes definition, or state, after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
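
    The recursive moving-average update is compact enough to sketch. The codebook size and the running-count form of the average are illustrative choices of ours, and the codebook-update side information the abstract describes is not modeled here.

    ```python
    import numpy as np

    def adaptive_vq(vectors, codebook):
        """Encode each vector with its nearest codeword, then move that
        centroid by a recursive moving average of the vectors assigned to it."""
        codebook = codebook.astype(float).copy()
        counts = np.ones(len(codebook))      # one phantom sample per codeword
        indices = []
        for v in vectors:
            i = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))
            indices.append(i)
            counts[i] += 1
            codebook[i] += (v - codebook[i]) / counts[i]  # running mean update
        return indices, codebook

    data = np.random.randn(2000, 4)
    idx, book = adaptive_vq(data, np.random.randn(16, 4))
    ```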

  1. Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.

    PubMed

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan

    2004-06-05

    Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. The results show that for spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
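
    The spatial-domain variant is straightforward to make concrete: interleave the already encrypted and compressed patient bytes into the pixel LSBs, which changes any pixel's brightness by at most 1 part in 256. A sketch, with a plain byte string standing in for the DPCM-coded bio-signal:

    ```python
    import numpy as np

    def interleave_lsb(image, payload):
        """Hide payload bits in the least significant bit of each pixel."""
        flat = image.flatten()
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        assert bits.size <= flat.size, "payload too large for cover image"
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract_lsb(image, nbytes):
        bits = (image.flatten()[: 8 * nbytes] & 1).astype(np.uint8)
        return np.packbits(bits).tobytes()

    cover = (np.random.rand(64, 64) * 255).astype(np.uint8)
    secret = b"patient-id:12345"              # stand-in for encrypted DPCM data
    stego = interleave_lsb(cover, secret)
    assert extract_lsb(stego, len(secret)) == secret
    ```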

  2. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problem of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
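
    The moment-preservation system has a classical closed-form solution (Tsai's construction): the two representative values are the roots of a quadratic whose coefficients come from the first three sample moments. A sketch for one 1-D block follows; quantizing the RSS of z1 and z2 per audio block, the paper's embedding step, is not shown.

    ```python
    import numpy as np

    def moment_preserving_binarize(x):
        """Representative values z1 < z2 and threshold preserving the first
        three moments of x (Tsai's moment-preserving thresholding)."""
        m1, m2, m3 = (np.mean(x ** k) for k in (1, 2, 3))
        # z1, z2 solve z^2 + c1*z + c0 = 0, with c0, c1 from the moment system
        c0, c1 = np.linalg.solve([[1.0, m1], [m1, m2]], [-m2, -m3])
        z1, z2 = np.sort(np.roots([1.0, c1, c0]).real)  # real for nondegenerate data
        p = (z2 - m1) / (z2 - z1)                       # fraction mapped to z1
        thresh = np.quantile(x, p)
        return np.where(x <= thresh, z1, z2), (z1, z2, thresh)

    block = np.random.randn(4096)
    y, (z1, z2, t) = moment_preserving_binarize(block)
    print(np.mean(y), np.mean(block))  # first moments agree closely
    ```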

  3. Compressibility-aware media retargeting with structure preserving.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-03-01

    A number of algorithms have been proposed for intelligent image/video retargeting with image content retained as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Different from previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside each block to deform uniformly in either the x or y direction. However, the flexibility for retargeting differs considerably from image to image. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. The resized media is thus produced to preserve the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than previous methods.
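
    One plausible reading of the compressibility assessment is sketched below: histogram the gradient magnitudes and orientations and sum their entropies, so that low entropy flags homogeneous content that retargeting can squeeze. The bin counts and the additive combination are assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def compressibility_score(gray):
        """Entropy of gradient-magnitude plus gradient-orientation
        distributions as a rough measure of how compressible an image is."""
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        ori = np.arctan2(gy, gx)

        def entropy(values, bins):
            hist, _ = np.histogram(values, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                        # drop empty bins
            return -np.sum(p * np.log2(p))

        return entropy(mag, 64) + entropy(ori, 64)
    ```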

  4. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in the underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses the spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  5. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical-optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position, and we demonstrate this technique's capability to generate arbitrary point spread functions.
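
    A minimal sketch of the CACTI-style measurement model described above, assuming a translating binary mask and a detector that integrates T masked frames into one coded snapshot; the per-frame one-pixel shift is an illustrative stand-in for the physical mask translation.

    ```python
    import numpy as np

    def cacti_forward(video, mask):
        """CACTI-style snapshot: the coded aperture (mask) is translated
        across T frames and the detector sums the masked frames, embedding
        the temporal dimension into a single (H, W) measurement."""
        T, H, W = video.shape
        snapshot = np.zeros((H, W))
        for t in range(T):
            shifted = np.roll(mask, shift=t, axis=0)  # stand-in for translation
            snapshot += shifted * video[t]
        return snapshot

    rng = np.random.default_rng(0)
    video = rng.random((8, 64, 64))                  # toy (T, H, W) volume
    mask = (rng.random((64, 64)) > 0.5).astype(float)
    y = cacti_forward(video, mask)                   # one coded exposure
    ```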

  6. Dissemination of compressed satellite imagery within the Navy SPAWAR Central Site Product Display environment

    NASA Technical Reports Server (NTRS)

    Kiselyov, Oleg; Fisher, Paul

    1995-01-01

    This paper presents a case study of the integration of compression techniques within the satellite image communication component of an actual tactical weather information dissemination system. The paper describes the history and requirements of the project, and discusses the information flow, request/reply protocols, error handling, and, especially, system integration issues: specification of compression parameters and the place and time for compressor/decompressor plug-ins. A case for non-uniform compression of satellite imagery is presented, and its implementation in the current system is demonstrated. The paper gives special attention to the challenges of moving the system towards the use of standard, non-proprietary protocols (SMTP and HTTP) and new technologies (OpenDoc), and reports on the ongoing work in this direction.

  7. Band-Moment Compression of AVIRIS Hyperspectral Data and its Use in the Detection of Vegetation Stress

    NASA Technical Reports Server (NTRS)

    Estep, L.; Davis, B.

    2001-01-01

    A remote sensing campaign was conducted over a U.S. Department of Agriculture test farm at Shelton, Nebraska. An experimental field was laid out in plots that were differentially treated with anhydrous ammonia. Four replicates of 0-kg/ha to 200-kg/ha plots, in 50-kg/ha increments, were set out in a random block design. Low-altitude (GSD of 3 m) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data were collected over the site in 224 bands. Simultaneously, ground data were collected to support the airborne imagery. In an effort to reduce data load while maintaining or enhancing algorithm performance for vegetation stress detection, band-moment compression and analysis were applied to the AVIRIS image cube. The results indicated that band-moment techniques compress the AVIRIS dataset significantly while retaining the capability of detecting environmentally induced vegetation stress.
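
    One way to read "band-moment compression" is to collapse each pixel's 224-band spectrum into its first few statistical moments; the sketch below does exactly that. The choice of standardized central moments is an assumption made for illustration, not the paper's exact definition.

    ```python
    import numpy as np

    def band_moments(cube, n_moments=4):
        """Compress a hyperspectral cube (H, W, bands) by replacing each
        pixel's spectrum with its first few moments: a drastic reduction
        from 224 bands that can still separate stressed vegetation."""
        mean = cube.mean(axis=2, keepdims=True)
        std = cube.std(axis=2, keepdims=True) + 1e-12
        z = (cube - mean) / std
        moments = [mean[..., 0], std[..., 0]]
        for k in range(3, n_moments + 1):
            moments.append((z ** k).mean(axis=2))   # skewness, kurtosis, ...
        return np.stack(moments, axis=2)            # (H, W, n_moments)
    ```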

  8. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  9. Pulse compression of harmonic chirp signals using the fractional fourier transform.

    PubMed

    Arif, M; Cowell, D M J; Freear, S

    2010-06-01

    In ultrasound harmonic imaging with chirp-coded excitation, a harmonic matched filter (HMF) is typically used on the received signal to perform pulse compression of the second harmonic component (SHC) to recover signal axial resolution. Designing the HMF for the compression of the SHC is a problematic issue because it requires optimal window selection. In the compressed second harmonic signal, the sidelobe level may increase and the mainlobe width (MLW) widen under a mismatched condition, resulting in loss of axial resolution. We propose the use of the fractional Fourier transform (FrFT) as an alternative tool to perform compression of the chirp-coded SHC generated as a result of the nonlinear propagation of an ultrasound signal. Two methods are used to experimentally assess the performance benefits of the FrFT technique over the HMF techniques. The first method uses chirp excitation with central frequency of 2.25 MHz and bandwidth of 1 MHz. The second method uses chirp excitation with pulse inversion to increase the bandwidth to 2 MHz. In this study, experiments were performed in a water tank with a single-element transducer mounted coaxially with a hydrophone in a pitch-catch configuration. Results are presented that indicate that the FrFT can perform pulse compression of the second harmonic chirp component, with a 14% reduction in the MLW of the compressed signal when compared with the HMF. Also, the FrFT provides at least 23% reduction in the MLW of the compressed signal when compared with the harmonic mismatched filter (HMMF). The FrFT maintains comparable peak and integrated sidelobe levels when compared with the HMF and HMMF techniques. Copyright 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
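
    The FrFT machinery itself is beyond a short sketch, but the baseline it is compared against, matched-filter pulse compression of a windowed chirp, fits in a few lines; the sampling rate, chirp span, and window below are illustrative values consistent with the 2.25 MHz / 1 MHz figures above.

    ```python
    import numpy as np
    from scipy.signal import chirp, correlate

    fs = 50e6                                        # sampling rate (assumed)
    t = np.arange(0, 10e-6, 1 / fs)
    # Linear chirp: 2.25 MHz centre frequency, 1 MHz bandwidth.
    tx = chirp(t, f0=1.75e6, t1=t[-1], f1=2.75e6)
    # Toy received signal: delayed echo plus noise.
    rx = np.concatenate([np.zeros(200), tx]) + 0.05 * np.random.randn(len(t) + 200)

    # Pulse compression: correlate the received signal with the windowed
    # transmitted chirp; the peak marks the echo and restores axial resolution.
    compressed = correlate(rx, tx * np.hanning(len(tx)), mode="valid")
    echo_index = np.abs(compressed).argmax()         # ~200 samples of delay
    ```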

  10. Three-dimensional kinematic stress magnetic resonance image analysis shows promise for detecting altered anatomical relationships of tissues in the cervical spine associated with painful radiculopathy.

    PubMed

    Jaumard, N V; Udupa, J K; Siegler, S; Schuster, J M; Hilibrand, A S; Hirsch, B E; Borthakur, A; Winkelstein, B A

    2013-10-01

    For some patients with radiculopathy, a source of nerve root compression cannot be identified despite positive electromyography (EMG) evidence. This discrepancy hampers the effective clinical management for these individuals. Although it has been well established that tissues in the cervical spine move in a three-dimensional (3D) manner, the 3D motions of the neural elements and their relationship to the bones surrounding them are largely unknown even for asymptomatic normal subjects. We hypothesize that abnormal mechanical loading of cervical nerve roots during pain-provoking head positioning may be responsible for radicular pain in those cases in which there is no evidence of nerve root compression on conventional cervical magnetic resonance imaging (MRI) with the neck in the neutral position. This biomechanical imaging proof-of-concept study focused on quantitatively defining the architectural relationships between the neural and bony structures in the cervical spine using measurements derived from 3D MR images acquired in neutral and pain-provoking neck positions for subjects: (1) with radicular symptoms and evidence of root compression by conventional MRI and positive EMG, (2) with radicular symptoms and no evidence of root compression by MRI but positive EMG, and (3) asymptomatic age-matched controls. Function and pain scores were measured, along with neck range of motion, for all subjects. MR imaging was performed in both a neutral position and a pain-provoking position. Anatomical architectural data derived from analysis of the 3D MR images were compared between symptomatic and asymptomatic groups, and between the symptomatic groups with and without imaging evidence of root compression. Several differences in the architectural relationships between the bone and neural tissues were identified between the asymptomatic and symptomatic groups. In addition, changes in architectural relationships were also detected between the symptomatic groups with and without imaging evidence of nerve root compression. As demonstrated in the data and a case study, the 3D stress MR imaging approach can identify biomechanical relationships between hard and soft tissues that are otherwise undetected by standard clinical imaging methods. This technique offers a promising approach for detecting the source of radiculopathy to inform clinical management of this pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM]

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  12. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, the desired information must be calculated quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  13. Ultrafast Imaging using Spectral Resonance Modulation

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2016-04-01

    CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
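
    The single-pixel idea above reduces to solving an underdetermined linear system under a sparsity prior. Below is a generic compressive-sensing sketch with random patterns and ISTA; it is not the authors' etalon-array system, and sparsity directly in the pixel basis is an assumption made for brevity.

    ```python
    import numpy as np

    def ista_recover(y, A, lam=0.05, n_iter=300):
        """Recover x from y = A @ x with far fewer measurements (patterns)
        than pixels by solving the l1-regularized least-squares problem."""
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
        for _ in range(n_iter):
            x = x - (A.T @ (A @ x - y)) / L      # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n, m = 1024, 300                             # pixels, modulation patterns
    x_true = np.zeros(n)
    x_true[rng.choice(n, 20, replace=False)] = 1.0
    A = rng.standard_normal((m, n)) / np.sqrt(m) # stand-in for speckle patterns
    x_hat = ista_recover(A @ x_true, A)
    ```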

  14. Noise characterization of broadband fiber Cherenkov radiation as a visible-wavelength source for optical coherence tomography and two-photon fluorescence microscopy.

    PubMed

    Tu, Haohua; Zhao, Youbo; Liu, Yuan; Liu, Yuan-Zhi; Boppart, Stephen

    2014-08-25

    Optical sources in the visible region immediately adjacent to the near-infrared biological optical window are preferred in imaging techniques such as spectroscopic optical coherence tomography of endogenous absorptive molecules and two-photon fluorescence microscopy of intrinsic fluorophores. However, existing sources based on fiber supercontinuum generation are known to have high relative intensity noise and low spectral coherence, which may degrade imaging performance. Here we compare the optical noise and pulse compressibility of three high-power fiber Cherenkov radiation sources developed recently, and evaluate their potential to replace the existing supercontinuum sources in these imaging techniques.

  15. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced in many ways, and one is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position, the aim being to compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography. The study used an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression, and was judged to be better than in the prone position.

  16. The mobile image quality survey game

    NASA Astrophysics Data System (ADS)

    Rasmussen, D. René

    2012-01-01

    In this paper we discuss human assessment of the quality of photographic still images that are degraded in various manners relative to an original, for example due to compression or noise. In particular, we examine and present results from a technique where observers view images on a mobile device, perform pairwise comparisons, identify defects in the images, and interact with the display to indicate the location of the defects. The technique measures the response time and accuracy of the responses. By posing the survey in a form similar to a game and providing performance feedback to the observer, the technique attempts to increase the engagement of the observers and to avoid exhausting them, a factor that is often a problem in subjective surveys. The results are compared with the known physical magnitudes of the defects and with results from similar web-based surveys. The strengths and weaknesses of the technique are discussed, as are possible extensions of the technique to video quality assessment.

  17. Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption

    PubMed Central

    Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-01-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they can be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
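
    Most of the methods this review covers can be summarized by one unconstrained objective; the notation below is a standard textbook form, not a formula taken from the review itself.

    ```latex
    \hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert F_u x - y \rVert_2^2
                 \;+\; \lambda\,\lVert \Psi x \rVert_1
    ```

    Here $F_u$ is the undersampled Fourier encoding operator, $y$ the acquired k-space data, $\Psi$ a sparsifying transform (e.g., wavelets or temporal differences), and $\lambda$ trades data fidelity against sparsity.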

  18. Non-contact evaluation of milk-based products using air-coupled ultrasound

    NASA Astrophysics Data System (ADS)

    Meyer, S.; Hindle, S. A.; Sandoz, J.-P.; Gan, T. H.; Hutchins, D. A.

    2006-07-01

    An air-coupled ultrasonic technique has been developed and used to detect physicochemical changes in liquid beverages within a glass container. It makes use of two wide-bandwidth capacitive transducers, combined with pulse-compression techniques. The use of a glass container to house samples enabled visual inspection, helping to verify the results of some of the ultrasonic measurements. The non-contact pulse-compression system was used to evaluate agglomeration processes in milk-based products. It is shown that the amplitude of the signal varied with time after the samples had been treated with lactic acid, which promotes sample destabilization. Non-contact imaging was also performed to follow destabilization of samples by scanning in various directions across the container, and the resulting ultrasonic images were compared to those from a digital camera. Coagulation of skim milk with glucono-delta-lactone poured into this container could be monitored to within a pH precision of 0.15. This rapid, non-contact, and non-destructive technique has shown itself to be a feasible method for investigating the quality of milk-based beverages, and possibly other food products.

  19. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
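
    The block-matching paper above rests on a simple serial kernel: exhaustive search for the displacement that minimizes the block difference. A minimal serial sketch follows (SAD metric; block and search sizes are illustrative). The paper's contribution is mapping this onto a parallel architecture, which the sketch does not attempt.

    ```python
    import numpy as np

    def block_match(ref, cur, block=16, search=8):
        """For each block of the current frame, find the displacement into
        the reference frame minimizing the sum of absolute differences."""
        ref = ref.astype(np.int32)               # avoid uint8 wraparound
        cur = cur.astype(np.int32)
        H, W = cur.shape
        vectors = {}
        for by in range(0, H - block + 1, block):
            for bx in range(0, W - block + 1, block):
                target = cur[by:by + block, bx:bx + block]
                best, best_sad = (0, 0), np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y0, x0 = by + dy, bx + dx
                        if 0 <= y0 <= H - block and 0 <= x0 <= W - block:
                            sad = np.abs(ref[y0:y0 + block, x0:x0 + block]
                                         - target).sum()
                            if sad < best_sad:
                                best, best_sad = (dy, dx), sad
                vectors[(by, bx)] = best         # motion vector per block
        return vectors
    ```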

  20. Simple motion correction strategy reduces respiratory-induced motion artifacts for k-t accelerated and compressed-sensing cardiovascular magnetic resonance perfusion imaging.

    PubMed

    Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael

    2018-02-01

    Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed; however, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging to selectively correct respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region of interest (ROI) encompassing the heart. A variable-density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists. In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). k-t SLR without motion correction demonstrated improved image quality compared to k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance for k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
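
    The core mechanism, a rigid in-plane translation applied as a linear phase ramp in k-space, follows directly from the Fourier shift theorem. The sketch below is a generic single-coil, single-frame illustration, not the authors' pipeline.

    ```python
    import numpy as np

    def shift_kspace(kspace, dy, dx):
        """Apply a rigid in-plane translation (dy, dx pixels) as a linear
        phase ramp in k-space (Fourier shift theorem): the mechanism used
        above to undo respiratory displacement of the heart."""
        H, W = kspace.shape
        ky = np.fft.fftfreq(H)[:, None]
        kx = np.fft.fftfreq(W)[None, :]
        ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        return kspace * ramp

    # Round trip: shifting k-space then inverse transforming equals np.roll.
    img = np.random.rand(64, 64)
    shifted = np.fft.ifft2(shift_kspace(np.fft.fft2(img), 5, -3)).real
    assert np.allclose(shifted, np.roll(img, (5, -3), axis=(0, 1)))
    ```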

  1. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image, and the spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.

  2. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are that (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.

  3. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  4. Four-dimensional wavelet compression of arbitrarily sized echocardiographic data.

    PubMed

    Zeng, Li; Jansen, Christian P; Marsch, Stephan; Unser, Michael; Hunziker, Patrick R

    2002-09-01

    Wavelet-based methods have become the most popular approach for the compression of two-dimensional medical images and sequences. The standard implementations consider data sizes that are powers of two, and there is a large body of literature treating issues such as the choice of the "optimal" wavelets and the performance comparison of competing algorithms. With the advent of telemedicine, there is a strong incentive to extend these techniques to higher-dimensional data such as dynamic three-dimensional (3-D) echocardiography [four-dimensional (4-D) datasets]. One practical difficulty is that the size of such data is often not a multiple of a power of two, which can lead to increased computational complexity and impaired compression power. Our contribution in this paper is a genuine 4-D extension of the well-known zerotree algorithm for arbitrarily sized data. The key component of our method is a one-dimensional wavelet algorithm that can handle arbitrarily sized input signals. The method uses a pair of symmetric/antisymmetric wavelets (10/6) together with appropriate midpoint-symmetry boundary conditions that reduce border artifacts. The zerotree structure is also adapted so that it can accommodate non-even data splitting. We have applied our method to the compression of real 3-D dynamic sequences from clinical cardiac ultrasound examinations. Our new algorithm compares very favorably with other more ad hoc adaptations (image extension and tiling) of the standard powers-of-two methods, in terms of both compression performance and computational cost, and it is vastly superior to slice-by-slice wavelet encoding. This was seen not only in numerical image quality parameters but also in expert ratings, where significant improvement using the new approach could be documented. Our validation experiments show that one can safely compress 4-D data sets at ratios of 128:1 without compromising the diagnostic value of the images. We also display some more extreme compression results at ratios of 2000:1 where some diagnostically relevant key features are preserved.
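
    The arbitrary-length principle is easy to demonstrate. The paper's 10/6 symmetric/antisymmetric pair and midpoint-symmetry conditions are not available in PyWavelets, so the sketch below uses a biorthogonal wavelet with symmetric extension purely to show that no power-of-two padding is needed.

    ```python
    import numpy as np
    import pywt

    # Odd-length signal: symmetric boundary extension handles it directly.
    x = np.random.randn(487)
    coeffs = pywt.wavedec(x, "bior2.2", mode="symmetric", level=3)
    x_rec = pywt.waverec(coeffs, "bior2.2", mode="symmetric")

    # For odd lengths the reconstruction may carry one extra trailing
    # sample; the first len(x) values reproduce the input exactly.
    assert np.allclose(x, x_rec[: len(x)])
    ```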

  5. Multi-axis dose accumulation of noninvasive image-guided breast brachytherapy through biomechanical modeling of tissue deformation using the finite element method

    PubMed Central

    Ghadyani, Hamid R.; Bastien, Adam D.; Lutz, Nicholas N.; Hepel, Jaroslaw T.

    2015-01-01

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR 192Ir brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Material and methods: The model assumed the breast was under planar stress with values of 30 kPa for Young's modulus and 0.3 for Poisson's ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated, and the results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distance. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform and average dose increased as offset distance increased; this effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results within 2% and clinical observations over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over a range of clinical circumstances. These findings highlight the need for careful target localization and accurate identification of compression thickness and target offset. PMID:25829938

  6. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  7. Time-Reversal Based Range Extension technique for Ultra-wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2010-01-28


  8. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important; efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency, and compression ratio. The transform method appears to be the best choice. At present, the method compresses images at a ratio of 5.3:1 while producing high-fidelity reconstructed images.

  9. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed image is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
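
    The lossy-plus-residual structure guarantees exact recovery regardless of how crude the lossy layer is. The sketch below substitutes coarse quantization for the embedded wavelet coder and zlib for the run-length coder, purely to make the round-trip property concrete.

    ```python
    import numpy as np
    import zlib

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

    q = 8
    lossy = (img // q) * q                    # lossy layer: coarse quantization
    residual = img - lossy                    # small residual, values in [0, q)

    # Pack both layers losslessly; the residual is highly compressible.
    streams = [zlib.compress(a.tobytes()) for a in (lossy, residual)]

    # Decoder side: unpack and add the layers back for exact recovery.
    rec = [np.frombuffer(zlib.decompress(s), dtype=np.uint8).reshape(img.shape)
           for s in streams]
    assert np.array_equal(rec[0] + rec[1], img)   # lossless round trip
    ```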

  10. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximum compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
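
    The matching principle is close to a normalized compression distance: if probe and gallery share structure, their concatenation compresses better than the two images separately. The sketch below uses zlib rather than JPEG, and the similarity formula is an assumption for illustration, not the paper's CCR definition.

    ```python
    import zlib
    import numpy as np

    def compression_similarity(probe, gallery):
        """Compress the probe, the gallery image, and their concatenation;
        shared structure makes the mixed stream relatively smaller, so a
        larger ratio indicates a better match."""
        cp = len(zlib.compress(probe.tobytes()))
        cg = len(zlib.compress(gallery.tobytes()))
        mixed = np.concatenate([probe.ravel(), gallery.ravel()])
        cm = len(zlib.compress(mixed.tobytes()))
        return (cp + cg) / cm

    # Usage: score the probe against every gallery image, keep the argmax.
    ```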

  11. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  12. Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study.

    PubMed

    Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan

    2016-03-01

    Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC: Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, during both data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation of the root mean squared error increased with increasing acceleration, but for in vivo measurements it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (ARC only). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.

  13. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  14. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  15. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  16. Quantitative micro-elastography: imaging of tissue elasticity using compression optical coherence elastography

    PubMed Central

    Kennedy, Kelsey M.; Chin, Lixin; McLaughlin, Robert A.; Latham, Bruce; Saunders, Christobel M.; Sampson, David D.; Kennedy, Brendan F.

    2015-01-01

    Probing the mechanical properties of tissue on the microscale could aid in the identification of diseased tissues that are inadequately detected using palpation or current clinical imaging modalities, with potential to guide medical procedures such as the excision of breast tumours. Compression optical coherence elastography (OCE) maps tissue strain with microscale spatial resolution and can delineate microstructural features within breast tissues. However, without a measure of the locally applied stress, strain provides only a qualitative indication of mechanical properties. To overcome this limitation, we present quantitative micro-elastography, which combines compression OCE with a compliant stress sensor to image tissue elasticity. The sensor consists of a layer of translucent silicone with well-characterized stress-strain behaviour. The measured strain in the sensor is used to estimate the two-dimensional stress distribution applied to the sample surface. Elasticity is determined by dividing the stress by the strain in the sample. We show that quantification of elasticity can improve the ability of compression OCE to distinguish between tissues, thereby extending the potential for inter-sample comparison and longitudinal studies of tissue elasticity. We validate the technique using tissue-mimicking phantoms and demonstrate the ability to map elasticity of freshly excised malignant and benign human breast tissues. PMID:26503225

  17. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system, and the compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once: the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm achieves high security and good compression performance.
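
    The compression step, "spectrum cutting", amounts to keeping only the low-frequency corner of the DCT spectrum. A generic sketch follows; the kept fraction is illustrative, and the hyper-chaotic encryption stage is omitted.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def spectrum_cut(img, keep=0.25):
        """2-D DCT followed by retention of the low-frequency corner
        (a `keep` fraction per axis) - the compression step on which the
        encryption stage then operates."""
        spec = dctn(img.astype(float), norm="ortho")
        h = int(img.shape[0] * keep)
        w = int(img.shape[1] * keep)
        return spec[:h, :w]

    def spectrum_restore(cut_spec, shape):
        """Zero-pad the cut spectrum back to full size and invert the DCT."""
        full = np.zeros(shape)
        full[: cut_spec.shape[0], : cut_spec.shape[1]] = cut_spec
        return idctn(full, norm="ortho")
    ```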

  18. Simultaneous CT-MRI Reconstruction for Constrained Imaging Geometries using Structural Coupling and Compressive Sensing

    PubMed Central

    Xi, Yan; Zhao, Jun; Bennett, James R.; Stacy, Mitchel R.; Sinusas, Albert J.; Wang, Ge

    2016-01-01

    Objective A unified reconstruction framework is presented for simultaneous CT-MRI reconstruction. Significance Combined CT-MRI imaging has the potential for improved results in existing preclinical and clinical applications, as well as opening novel research directions for future applications. Methods In an ideal CT-MRI scanner, CT and MRI acquisitions would occur simultaneously, and hence would be inherently registered in space and time. Alternatively, separately acquired CT and MRI scans can be fused to simulate an instantaneous acquisition. In this study, structural coupling and compressive sensing techniques are combined to unify CT and MRI reconstructions. A bidirectional image estimation method was proposed to connect images from different modalities. Hence, CT and MRI data serve as prior knowledge to each other for better CT and MRI image reconstruction than what could be achieved with separate reconstruction. Results Our integrated reconstruction methodology is demonstrated with numerical phantom and real-dataset based experiments, and has yielded promising results. PMID:26672028

  19. Watermarking and copyright labeling of printed images

    NASA Astrophysics Data System (ADS)

    Hel-Or, Hagit Z.

    2001-07-01

    Digital watermarking is a labeling technique for digital images that embeds a code into the digital data so the data are marked. Previously developed watermarking techniques deal with on-line digital data and have been designed to withstand digital attacks such as image processing, image compression, and geometric transformations. However, one must also consider the readily available attack of printing and scanning, under which the available watermarking techniques are not reliable. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes and bank checks, and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of the date and type of a transaction or the date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and robust under reconstruction errors. The decoding algorithm is automatic, given the watermarked image.

  20. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6-10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
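
    The subtractive-dithering idea is easy to sketch: each pixel is offset by a reproducible pseudo-random dither before being scaled and rounded to an integer level, and the same dither is regenerated and subtracted on restoration. The snippet below is a minimal illustration of that scheme, not the actual fpack implementation; the seed handling and scale choice are assumptions.

      import numpy as np

      def quantize_dither(img, scale, seed=42):
          # Subtractive dithering: offset by uniform noise in [0, 1) before rounding.
          dither = np.random.default_rng(seed).random(img.shape)
          return np.round(img / scale - dither + 0.5).astype(np.int32)

      def restore(levels, scale, seed=42):
          # Regenerate the identical dither from the seed and undo the offset.
          dither = np.random.default_rng(seed).random(levels.shape)
          return (levels + dither - 0.5) * scale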

  1. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a compressive sensing technique which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was achieved via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we realized a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and the number of data points. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
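
    In the single-pixel scheme, each detector reading is the inner product of the scene with one illumination pattern. The sketch below simulates that measurement model with random patterns and recovers the scene by least squares when enough patterns are used; a real compressive setup would use far fewer patterns together with a sparsity-promoting solver. All names and sizes are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 16 * 16                          # 16x16 scene, flattened
      scene = rng.random(n)

      # Each row is one (speckle-like) illumination pattern.
      patterns = rng.random((2 * n, n))
      y = patterns @ scene                 # single-pixel detector readings

      # With an overdetermined system, least squares recovers the scene;
      # compressive sensing would instead use m << n patterns plus a sparse prior.
      recovered, *_ = np.linalg.lstsq(patterns, y, rcond=None)
      print(np.allclose(recovered, scene, atol=1e-8))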

  2. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
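
    A minimal version of hash-in-LSB embedding looks like the following: compute SHA-256 over the image with the target block's LSBs zeroed, then write the 256 hash bits into those LSBs. This is a sketch of the general idea only, not the authors' reversible, JPEG-robust scheme: a 16x16 block (256 pixels, one bit per pixel) is assumed here so the hash fits, and the block location is illustrative.

      import hashlib
      import numpy as np

      def embed_hash(img, row=0, col=0):
          # img: 2-D uint8 array. Zero the LSBs of a 16x16 block, hash the
          # result, then write the 256 hash bits into those LSBs.
          marked = img.copy()
          block = marked[row:row + 16, col:col + 16]   # view into `marked`
          block &= 0xFE
          digest = hashlib.sha256(marked.tobytes()).digest()
          bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
          block |= bits.reshape(16, 16)
          return marked

      def verify(marked, row=0, col=0):
          # Re-derive the hash with the block LSBs cleared and compare.
          bits = (marked[row:row + 16, col:col + 16] & 1).ravel()
          stored = np.packbits(bits).tobytes()
          clean = marked.copy()
          clean[row:row + 16, col:col + 16] &= 0xFE
          return stored == hashlib.sha256(clean.tobytes()).digest()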

  3. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.

  4. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8-bits-per-pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
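
    The byte-plane split at the heart of this approach is a one-liner in each direction, as the sketch below shows; the subsequent per-plane encoding with an 8-bit codec is outside the snippet's scope, and the function names are illustrative.

      import numpy as np

      def split_planes(img16):
          # Map a 16-bit image to two 8-bit images: most and least significant bytes.
          msb = (img16 >> 8).astype(np.uint8)
          lsb = (img16 & 0xFF).astype(np.uint8)
          return msb, lsb

      def merge_planes(msb, lsb):
          return (msb.astype(np.uint16) << 8) | lsb

      img16 = np.random.default_rng(1).integers(0, 2**16, (4, 4), dtype=np.uint16)
      msb, lsb = split_planes(img16)
      assert np.array_equal(merge_planes(msb, lsb), img16)   # lossless round trip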

  5. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression must be performed on the satellite that carries the hyperspectral sensor; hence, it must run on space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the ground, where they are fused using a hyperspectral-multispectral fusion algorithm in order to recover the original remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the more complex fusion process used for reconstruction on the ground. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
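
    The two on-board degradation steps can be sketched in a few lines: spatial degradation as block averaging in every band, and spectral degradation as grouped band averaging, a crude stand-in for a real sensor's spectral response. The function names and averaging choices are illustrative assumptions, not the paper's exact operators.

      import numpy as np

      def spatial_degrade(cube, f):
          # Average f x f spatial blocks in every band: (H, W, B) -> (H//f, W//f, B).
          h, w, b = cube.shape
          trimmed = cube[:h - h % f, :w - w % f]
          return trimmed.reshape(h // f, f, w // f, f, b).mean(axis=(1, 3))

      def spectral_degrade(cube, g):
          # Average groups of g adjacent bands: (H, W, B) -> (H, W, B//g).
          h, w, b = cube.shape
          return cube[:, :, :b - b % g].reshape(h, w, b // g, g).mean(axis=3)

      cube = np.random.default_rng(2).random((64, 64, 120))
      low_res_hsi = spatial_degrade(cube, 4)    # 16 x 16 x 120, sent to the ground
      msi = spectral_degrade(cube, 10)          # 64 x 64 x 12, sent to the ground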

  6. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
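
    A minimal PyWavelets version of this sparsification step is sketched below: decompose, zero the smallest 95 percent of coefficients by magnitude, and reconstruct. 'db3' is assumed to correspond to the third-order Daubechies (DAUB6) filter named in the paper, and the function name is illustrative.

      import numpy as np
      import pywt

      def sparsify_dem(dem, wavelet='db3', drop=0.95):
          coeffs = pywt.wavedec2(dem, wavelet)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), drop)
          arr[np.abs(arr) < thresh] = 0.0        # zero the smallest 95% of coefficients
          coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
          return pywt.waverec2(coeffs, wavelet)[:dem.shape[0], :dem.shape[1]]

      dem = np.random.default_rng(3).random((256, 256))
      recon = sparsify_dem(dem)
      print('max elevation residual:', np.abs(recon - dem).max())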

  7. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  8. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    PubMed

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the involved linear system. The singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs denoised with SVT show reduced sampling errors compared with direct restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems, and the sparsity implicit in MRIs makes it possible to reconstruct them from significantly undersampled k-space. The challenge, however, is that the random undersampling introduces incoherent artifacts, adding noise-like interference to the sparse representation, and the recovery algorithms in the literature cannot fully remove these artifacts. A denoising procedure is therefore needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which further improves the quality of image reconstruction by removing noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm minimizes the nuclear norm of the difference between the sampled image and the recovered image. It is illustrated that this algorithm improves on previous image reconstruction algorithms in removing noise artifacts while significantly improving the quality of MRI recovery.
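
    The core SVT step is compact: soft-threshold the singular values, which shrinks the nuclear norm and yields a lower-rank estimate. The sketch below is the generic singular value thresholding operator, not the paper's full reconstruction pipeline; the threshold tau is an assumed free parameter.

      import numpy as np

      def svt(matrix, tau):
          # Singular value thresholding: the proximal operator of the nuclear norm.
          u, s, vt = np.linalg.svd(matrix, full_matrices=False)
          s_shrunk = np.maximum(s - tau, 0.0)    # soft-threshold the singular values
          rank = int(np.count_nonzero(s_shrunk))
          return (u[:, :rank] * s_shrunk[:rank]) @ vt[:rank], rank

      noisy = np.random.default_rng(4).normal(size=(64, 64))
      denoised, r = svt(noisy, tau=5.0)
      print('retained rank:', r)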

  9. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  10. Nanometer-scale characterization of laser-driven plasmas, compression, shocks and phase transitions, by coherent small angle x-ray scattering

    NASA Astrophysics Data System (ADS)

    Kluge, Thomas

    2015-11-01

    Combining ultra-intense short-pulse and high-energy long-pulse lasers with brilliant coherent hard X-ray FELs, such as the Helmholtz International Beamline for Extreme Fields (HIBEF) under construction at the HED instrument of the European XFEL, or MEC at LCLS, holds the promise of revolutionizing our understanding of many High Energy Density Physics phenomena. Examples include relativistic electron generation, transport, and bulk plasma response; ionization dynamics and heating in relativistic laser-matter interactions; and the dynamics of laser-driven shocks, quasi-isentropic compression, and the kinetics of phase transitions at high pressure. A particularly promising new technique is the use of coherent X-ray diffraction to characterize electron density correlations, and of resonant scattering to characterize the distribution of specific charge-state ions, either on the ultrafast time scale of the laser interaction or associated with hydrodynamic motion. One can also image the slight density changes arising from phase transitions inside shock-compressed high-pressure matter. The feasibility of coherent diffraction techniques in laser-driven matter will be discussed, including recent results from demonstration experiments at MEC. Among other things, very sharp density changes from laser-driven compression are observed, having an effective step width of 10 nm or smaller; this compares to a resolution of several hundred nm achieved previously with phase contrast imaging. This work is presented on behalf of the HIBEF User Consortium for the Helmholtz International Beamline for Extreme Fields at the European XFEL.

  11. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  12. Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.

    2010-01-01

    This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.

  13. Homomorphic filtering textural analysis technique to reduce multiplicative noise in the 11Oba nano-doped liquid crystalline compounds

    NASA Astrophysics Data System (ADS)

    Madhav, B. T. P.; Pardhasaradhi, P.; Manepalli, R. K. N. R.; Pisipati, V. G. K. M.

    2015-07-01

    The compound undecyloxy benzoic acid (11OBA) exhibits nematic and smectic-C phases, while 11OBA nano-doped with ZnO exhibits the same nematic and smectic-C phases with a reduced clearing temperature, as expected. The doping is done with 0.5% and 1% ZnO, and the clearing temperatures are reduced by approximately 4° and 6°, respectively (differential scanning calorimeter data). While collecting images from a polarizing microscope fitted with a hot stage and camera, illumination and reflectance combine multiplicatively, reducing the image quality and making it difficult to identify the exact phase of the compound. A novel technique of homomorphic filtering is used in this manuscript, through which the multiplicative noise components of the image are separated linearly in the frequency domain. This technique provides a frequency-domain procedure to improve the appearance of an image by gray-level range compression and contrast enhancement.
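
    Homomorphic filtering turns the multiplicative illumination-reflectance model into an additive one via a log transform, filters in the frequency domain, and exponentiates back. The sketch below uses a Gaussian high-emphasis filter; the gain and cutoff values are illustrative assumptions, not the paper's settings.

      import numpy as np

      def homomorphic_filter(img, cutoff=30.0, low_gain=0.5, high_gain=2.0):
          # log turns multiplicative illumination * reflectance into a sum.
          log_img = np.log1p(img.astype(float))
          spectrum = np.fft.fftshift(np.fft.fft2(log_img))

          # Gaussian high-emphasis filter: compress low frequencies (illumination),
          # boost high frequencies (reflectance detail).
          h, w = img.shape
          y, x = np.ogrid[:h, :w]
          d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
          filt = low_gain + (high_gain - low_gain) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))

          filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * filt)).real
          return np.expm1(filtered)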

  14. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio regime and that the reconstructed fingerprint images yield proper classification.

  15. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent over the channel. Reconstruction of the image is done by a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation between addresses is exploited, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about one half to one third of its bit rate. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in the conclusion.
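
    The basic VQ encode/decode loop described above fits in a few lines: cluster training blocks into a codebook with K-means, transmit each block's nearest-codeword index, and reconstruct by table lookup. The block size and codebook size below are illustrative; this is generic VQ, not the thesis's address-VQ extension.

      import numpy as np
      from scipy.cluster.vq import kmeans2, vq

      def image_to_blocks(img, b=4):
          # Cut the image into non-overlapping b x b blocks, one vector per block.
          h, w = img.shape
          blocks = img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b)
          return blocks.swapaxes(1, 2).reshape(-1, b * b).astype(float)

      rng = np.random.default_rng(5)
      img = rng.random((128, 128))
      blocks = image_to_blocks(img)

      # Codebook design (generalized Lloyd via K-means), then nearest-match encoding.
      codebook, _ = kmeans2(blocks, k=64, seed=1, minit='++')
      indices, _ = vq(blocks, codebook)        # these indices are what gets transmitted

      decoded_blocks = codebook[indices]       # decoder: simple table lookup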

  16. Multicontrast reconstruction using compressed sensing with low rank and spatially varying edge-preserving constraints for high-resolution MR characterization of myocardial infarction.

    PubMed

    Zhang, Li; Athavale, Prashant; Pop, Mihaela; Wright, Graham A

    2017-08-01

    To enable robust reconstruction for highly accelerated three-dimensional multicontrast late enhancement imaging, providing improved MR characterization of myocardial infarction with isotropic high spatial resolution. A new method using compressed sensing with low rank and spatially varying edge-preserving constraints (CS-LASER) is proposed to improve the reconstruction of fine image details from highly undersampled data. CS-LASER leverages the low-rank structure of the multicontrast volume series in MR relaxation and integrates spatially varying edge preservation into the explicit low-rank-constrained compressed sensing framework using weighted total variation. With an orthogonal temporal basis pre-estimated, a multiscale iterative reconstruction framework is proposed to enable the practice of CS-LASER with spatially varying weights of appropriate accuracy. In in vivo pig studies with both retrospective and prospective undersampling, CS-LASER preserved fine image details better and presented tissue characteristics with a higher degree of consistency with histopathology, particularly in the peri-infarct region, than an alternative technique at different acceleration rates. An isotropic resolution of 1.5 mm was achieved in vivo within a single breath-hold using the proposed techniques. Accelerated three-dimensional multicontrast late enhancement with CS-LASER can achieve improved MR characterization of myocardial infarction with high spatial resolution. Magn Reson Med 78:598-610, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  18. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and in characteristic options influenced by the encoder, quantization, tiling, etc. Differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from the two image compression programs, named Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.

  19. Optical identity authentication technique based on compressive ghost imaging with QR code

    NASA Astrophysics Data System (ADS)

    Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang

    2018-04-01

    With the rapid development of computer technology, information security has attracted more and more attention. It relates not only to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security, and in authentication systems response time and security are the most important factors. An optical authentication technique based on compressive ghost imaging with QR codes is proposed in this paper. The scheme can perform authentication with a small number of samples, so the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.

  20. Vertebral artery pexy for microvascular decompression of the facial nerve in the treatment of hemifacial spasm.

    PubMed

    Ferreira, Manuel; Walcott, Brian P; Nahed, Brian V; Sekhar, Laligam N

    2011-06-01

    Hemifacial spasm (HFS) is caused by arterial or venous compression of cranial nerve VII at its root exit zone. Traditionally, microvascular decompression of the facial nerve has been an effective treatment for posterior inferior and anterior inferior cerebellar artery as well as venous compression. The traditional technique involves Teflon felt or another construct to cushion the offending vessel from the facial nerve, or cautery and division of the offending vein. However, using this technique for severe vertebral artery (VA) compression can be ineffective and fraught with complications. The authors report the use of a new technique of VA pexy to the petrous or clival dura mater in patients with HFS attributed to a severely ectatic and tortuous VA, and detail the results in a series of patients. Six patients with HFS due to VA compression underwent a retrosigmoid craniotomy, combined with a far-lateral approach in some patients. On identification of the site of VA compression, the vessel was mobilized adequately for the decompression. Great care was taken to avoid kinking the perforating vessels arising from the VA. Two 8-0 nylon sutures were passed through the wall of the VA and then through the clival or petrous dura, and then tied to alleviate compression on cranial nerve VII. Patients were followed for at least 1 year postoperatively (mean 2.7 years, range 1-4 years). All 6 patients had complete resolution of their HFS. Facial function was tested postoperatively, and was stable when compared with the preoperative baseline. Two of the 3 patients with preoperative tinnitus had resolution of this symptom after the procedure. Postoperative imaging demonstrated VA decompression of the facial nerve and no evidence of stroke in all patients. One patient suffered from hearing loss, another developed a postoperative transient unilateral vocal cord paralysis, and a third patient developed a pseudomeningocele that resolved with the placement of a lumbar drain. Hemifacial spasm and other neurovascular syndromes are effectively treated by repositioning the compressing artery. Careful study of the preoperative MR images may identify a select group of patients with HFS due to an ectatic VA. Rather than traditional decompression with only pledget placement, these patients may benefit from a VA pexy to provide an effective, safe, and durable resolution of their symptoms while minimizing surgical complications.

  1. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  2. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.

    PubMed

    Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf

    2016-01-01

    Focused plenoptic capturing is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, a focused plenoptic camera captures both the spatial and angular information of a scene, within each microlens image and across microlens images. The capture results in a significant amount of redundant information, and the captured image usually has a large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. Reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is then employed as a prediction reference for coding the full plenoptic image. As an outcome of this representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding (HEVC) intra coding, and over 20 percent compared with an HEVC block copying mode.

  3. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  4. Exploring the interior of cuticles and compressions of fossil plants by FIB-SEM milling and image microscopy.

    PubMed

    Sender, L M; Escapa, I; Benedetti, A; Cúneo, R; Diez, J B

    2018-01-01

    We present the first study of cuticles and compressions of fossil leaves by Focused Ion Beam Scanning Electron Microscopy (FIB-SEM). Cavities preserved inside fossil leaf compressions, corresponding to substomatal chambers, have been observed for the first time, and several new features were identified in the cross-section cuts. These results open a new way of investigating the three-dimensional micro- and nanostructural features of fossil plants. Moreover, the application of the FIB-SEM technique to both fossil and extant plant remains represents a new source of taxonomical, palaeoenvironmental and palaeoclimatic information. © 2017 The Authors. Journal of Microscopy © 2017 Royal Microscopical Society.

  5. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. The first presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, for a given fidelity the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  6. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) presented at four different levels: original, 5:1, 10:1, and 15:1 compression. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm produced significantly better quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images from the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  7. Simultaneous storage of medical images in the spatial and frequency domain: A comparative study

    PubMed Central

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan

    2004-01-01

    Background Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial domain and in Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results The process does not affect the picture quality, which is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. Conclusion With spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensities. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899

  8. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  9. Method and apparatus for optical encoding with compressible imaging

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2006-01-01

    The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.

  10. Compression and accelerated rendering of volume data using DWT

    NASA Astrophysics Data System (ADS)

    Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.

    1998-09-01

    2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain to help surgeons make better diagnoses, and can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan treatment of a patient at a remote location by transmitting the relevant patient data via telephone lines. If this information is to be reconstructed in 3D, then the 2D images must be transmitted; however, 2D datasets occupy a lot of memory, and visualization algorithms are slow. We describe in this paper a scheme which reduces the data transfer time by transmitting only the information that the doctor wants. Compression is achieved by reducing the amount of data transferred, which is made possible by applying the 3D wavelet transform to 3D datasets. Since the wavelet transform is localized in the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (Region Of Interest) is reconstructed in detail, we need to render only the ROI in detail, thus reducing the rendering time.

  11. Compression of regions in the global advanced very high resolution radiometer 1-km data set

    NASA Technical Reports Server (NTRS)

    Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.

    1994-01-01

    The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.

  12. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction from few projections to reduce the radiation dose. Total-variation (TV) minimization in the L1 sense with local information is the prevalent technique in CS, but it can be prone to noise. To address this problem, this work proposes to apply a newer image processing technique, called non-local TV (NLTV), to CS-based CT reconstruction, and to incorporate a reweighted L1-norm for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, by contrast, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In practice, it can be challenging to obtain an optimal solution because of the difficulty of defining the weight function with appropriate parameters. Introducing reweighted L1-minimization, designed to approximate the ideal L0-minimization, reduces the dependence on the definition of the weight function and therefore improves the accuracy of the solution. This work implemented NLTV combined with reweighted L1-minimization using the Split Bregman iterative method. For evaluation, a noisy digital phantom and a pelvic CT image were employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV minimization in the signal-to-noise ratio (SNR) and root-mean-squared error of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm slightly improved SNR while greatly increasing the contrast between tissues, owing to the additional iterative reweighting process. Conclusion: NLTV minimization can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to noise than TV minimization.
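
    For contrast with the non-local variant described above, the sketch below shows plain local TV denoising by gradient descent on a smoothed TV objective. It is a generic illustration, not the paper's Split Bregman NLTV solver; the step size, weight, and simplified (periodic) boundary handling are all assumptions.

      import numpy as np

      def tv_denoise(img, lam=0.1, step=0.05, iters=200, eps=1e-8):
          # Gradient descent on 0.5*||u - img||^2 + lam * TV(u), smoothed TV.
          u = img.astype(float).copy()
          for _ in range(iters):
              dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
              dy = np.diff(u, axis=0, append=u[-1:, :])
              mag = np.sqrt(dx**2 + dy**2 + eps)
              px, py = dx / mag, dy / mag
              # Approximate divergence of the normalized gradient field
              # (periodic boundary via roll, for brevity).
              div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
              u -= step * ((u - img) - lam * div)
          return u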

  13. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract regions of interest (ROIs) and apply different compression ratios to them. A new, highly efficient non-uniform color image compression algorithm is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Experimental results and both quantitative and qualitative analyses show excellent performance compared with traditional color image compression approaches.

  14. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative image quality ratings and to compare these with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). By the subjective index, JPEG 2000 compression greater than 60:1 significantly reduced image quality. By the quantitative index, the JPEG 2000 images had lower quality at all compression ratios than the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. This software-based quantitative method has potential for determining the optimal compression ratio for any image without the use of subjective raters.

  15. A feasibility study for compressed sensing combined phase contrast MR angiography reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo

    2012-02-01

    Phase contrast magnetic resonance angiography (PC MRA) is a technique for measuring flow velocity and visualizing vessels simultaneously. PC MRA requires long scan times because each of the flow-encoding (bipolar) gradients must be acquired to reconstruct the angiographic image, and the acquisition takes even longer on a low-tesla MRI system. In this study, we evaluated the feasibility of compressed sensing (CS) reconstruction for PC MRA data acquired with a low-tesla MRI system. We used a non-linear reconstruction algorithm, Bregman iteration, for the CS image reconstruction and validated the usefulness of the CS-combined PC MRA reconstruction technique. The CS-reconstructed PC MRA images provide a level of image quality similar to that of images reconstructed from fully sampled data. Although our results used half the sampling ratio and did not rely on specialized hardware or techniques for improving the temporal resolution of MR image acquisition, such as parallel imaging with a phased-array coil or non-Cartesian trajectories, we expect the CS-combined PC MRA technique to help increase temporal resolution on low-tesla MRI systems.

  16. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.

  17. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR. PMID:28257451
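
    A minimal version of the real scheme described above: stack the R, G and B channels side by side into one real matrix, take a truncated SVD, and keep the k largest singular triplets. The horizontal stacking and the value of k are illustrative assumptions, not necessarily the paper's construction of C.

      import numpy as np

      def svd_compress(rgb, k):
          # Form one real matrix C = [R | G | B], keep the k largest singular triplets.
          h, w, _ = rgb.shape
          c = np.hstack([rgb[..., ch].astype(float) for ch in range(3)])
          u, s, vt = np.linalg.svd(c, full_matrices=False)
          approx = (u[:, :k] * s[:k]) @ vt[:k]
          return np.stack(np.hsplit(approx, 3), axis=-1)

      rgb = np.random.default_rng(6).random((64, 64, 3))
      rec = svd_compress(rgb, k=16)
      mse = np.mean((rec - rgb) ** 2)
      psnr = 10 * np.log10(1.0 / mse)          # pixel values assumed in [0, 1]
      print(f'PSNR: {psnr:.2f} dB')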

  18. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first-order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.

  19. A new method to assess the deformations of internal organs of the abdomen during impact.

    PubMed

    Helfenstein-Didier, Clémentine; Rongiéras, Frédéric; Gennisson, Jean-Luc; Tanter, Mickaël; Beillas, Philippe

    2016-11-16

    Due to limitations of classic imaging approaches, the internal response of abdominal organs is difficult to observe during an impact. Within the context of impact biomechanics for the protection of the occupant of transports, this could be an issue for human model validation and injury prediction. In the current study, a previously developed technique (ultrafast ultrasound imaging) was used as the basis to develop a protocol to observe the internal response of abdominal organs in situ at high imaging rates. The protocol was applied to 3 postmortem human surrogates to observe the liver and the colon during impacts delivered to the abdomen. The results show the sensitivity of the liver motion to the impact location. Compression of the colon was also quantified and compared to the abdominal compression. These results illustrate the feasibility of the approach. Further tests and comparisons with simulations are under preparation.

  20. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show the promise of the DCT-domain SR synthesis approach.
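
    The patch-transfer step can be sketched compactly. The fragment below shows the search-and-paste in the spatial domain with hypothetical helper names; in the paper the matching is done on truncated DCT coefficients, which is what shrinks the search dimensionality.

```python
# Example-based detail transfer: for each patch of the LR frame, find the
# nearest LR training patch and add its stored HR high-frequency residual.
# Brute-force spatial-domain version, for clarity only.
import numpy as np

def transfer_details(lr_frame, train_lr, train_hf, size=8):
    """train_lr: (N, size*size) LR patches; train_hf: matching HR residuals."""
    out = lr_frame.astype(float).copy()
    for i in range(0, lr_frame.shape[0] - size + 1, size):
        for j in range(0, lr_frame.shape[1] - size + 1, size):
            p = lr_frame[i:i + size, j:j + size].ravel()
            idx = np.argmin(((train_lr - p) ** 2).sum(axis=1))  # nearest neighbor
            out[i:i + size, j:j + size] += train_hf[idx].reshape(size, size)
    return out
```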

  1. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.

  2. Coding for Efficient Image Transmission

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    NASA publication, second in a series on data-coding techniques for noiseless channels. The techniques can be used even in noisy channels, provided the data are further processed with a Reed-Solomon or other error-correcting code. The techniques are discussed in the context of transmission of monochrome imagery from the Voyager II spacecraft, but are applicable to other streams of data. The objective of this type of coding is to "compress" data; that is, to transmit using as few bits as possible by omitting as much as possible of the information repeated in subsequent samples (or picture elements).

  3. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
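
    The sample-then-reconstruct loop is easy to demonstrate on a synthetic sparse signal. The sketch below substitutes a Gaussian sampling matrix and plain orthogonal matching pursuit for the paper's tree wavelets and stagewise OMP, so it only illustrates the general mechanism.

```python
# Compressed-sensing round trip: sample a k-sparse signal with a
# low-coherence matrix, then recover its support greedily (plain OMP
# standing in for the stagewise OMP used in the paper).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                              # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # sampling operator
y = A @ x                                         # in situ measurements

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))       # best-matching column
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                           # residual update
x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```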

  4. Unconventional methods of imaging: computational microscopy and compact implementations

    NASA Astrophysics Data System (ADS)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  5. Abdominal 4D flow MR imaging in a breath hold: combination of spiral sampling and dynamic compressed sensing for highly accelerated acquisition.

    PubMed

    Dyvorne, Hadrien; Knight-Greenfield, Ashley; Jajamovich, Guido; Besa, Cecilia; Cui, Yong; Stalder, Aurélien; Markl, Michael; Taouli, Bachir

    2015-04-01

    To develop a highly accelerated phase-contrast cardiac-gated volume flow measurement (four-dimensional [4D] flow) magnetic resonance (MR) imaging technique based on spiral sampling and dynamic compressed sensing and to compare this technique with established phase-contrast imaging techniques for the quantification of blood flow in abdominal vessels. This single-center prospective study was compliant with HIPAA and approved by the institutional review board. Ten subjects (nine men, one woman; mean age, 51 years; age range, 30-70 years) were enrolled. Seven patients had liver disease. Written informed consent was obtained from all participants. Two 4D flow acquisitions were performed in each subject, one with use of Cartesian sampling with respiratory tracking and the other with use of spiral sampling and a breath hold. Cartesian two-dimensional (2D) cine phase-contrast images were also acquired in the portal vein. Two observers independently assessed vessel conspicuity on phase-contrast three-dimensional angiograms. Quantitative flow parameters were measured by two independent observers in major abdominal vessels. Intertechnique concordance was quantified by using Bland-Altman and logistic regression analyses. There was moderate to substantial agreement in vessel conspicuity between 4D flow acquisitions in arteries and veins (κ = 0.71 and 0.61, respectively, for observer 1; κ = 0.71 and 0.44 for observer 2), whereas more artifacts were observed with spiral 4D flow (κ = 0.30 and 0.20). Quantitative measurements in abdominal vessels showed good equivalence between spiral and Cartesian 4D flow techniques (lower bound of the 95% confidence interval: 63%, 77%, 60%, and 64% for flow, area, average velocity, and peak velocity, respectively). For portal venous flow, spiral 4D flow was in better agreement with 2D cine phase-contrast flow (95% limits of agreement: -8.8 and 9.3 mL/sec) than was Cartesian 4D flow (95% limits of agreement: -10.6 and 14.6 mL/sec). The combination of highly efficient spiral sampling with dynamic compressed sensing results in major acceleration for 4D flow MR imaging, which allows comprehensive assessment of abdominal vessel hemodynamics in a single breath hold.

  6. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

    Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system that combines high throughput with power efficiency. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA) and is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
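
    The two ingredients named here, a predictor and an adaptive Golomb coder, are simple to illustrate. Below is a minimal Rice (power-of-two Golomb) encoder for one prediction residual; the FL algorithm's adaptive choice of the parameter k and its filtering-based predictor are omitted.

```python
# Rice/Golomb coding of a signed prediction residual: zigzag-map it to an
# unsigned integer, then emit a unary quotient and a k-bit remainder.
def rice_encode(residual, k):
    u = 2 * residual if residual >= 0 else -2 * residual - 1   # zigzag map
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# small residuals (the common case after good prediction) cost few bits
bits = "".join(rice_encode(e, k=2) for e in [0, 1, -1, 3, -7])
```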

  7. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
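
    The scramble and XOR stages are standard key-driven operations; a minimal sketch follows, with seeded NumPy generators standing in for the paper's key-groups, and the LWT and ghost-imaging measurement stages left out entirely.

```python
# Key-driven scrambling plus XOR on an 8-bit coefficient plane.
# Seeded generators are a hypothetical stand-in for the key-groups.
import numpy as np

def scramble_xor(plane, perm_key, xor_key):
    """plane: uint8 array (e.g., a quantized sparse LWT subband)."""
    perm = np.random.default_rng(perm_key).permutation(plane.size)
    pad = np.random.default_rng(xor_key).integers(0, 256, plane.size, dtype=np.uint8)
    return plane.reshape(-1)[perm] ^ pad            # scramble, then XOR

def unscramble_xor(cipher, perm_key, xor_key):
    perm = np.random.default_rng(perm_key).permutation(cipher.size)
    pad = np.random.default_rng(xor_key).integers(0, 256, cipher.size, dtype=np.uint8)
    plane = np.empty_like(cipher)
    plane[perm] = cipher ^ pad                      # invert XOR, then unscramble
    return plane                                    # flat; reshape at the caller
```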

  8. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique, which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may suffer from the limited accuracy of mechanical scanning and the unavoidable vibration of the stepper motor's starts and stops. In this paper, we propose a new framework, which replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH reconstructs the hologram of an object through the use of compressive sensing theory, and then restores the image of the object itself. Numerical simulation results confirm that this new type of OSH can produce a reconstructed image with favorable visual quality even at a low sample rate.

  9. High efficient optical remote sensing images acquisition for nano-satellite-framework

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi

    2017-09-01

    Nano-satellite (NanoSat) based optical Earth observation missions are more difficult and challenging to implement than those of conventional satellites because of limitations on volume, weight and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space; it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so the output data of the chip are already compressed. A separate image compression unit is no longer needed, largely saving the power, volume, and weight that a common onboard image compression unit would consume. The advantages of the proposed framework are that image acquisition and image compression are combined into a single step; that it can be easily built in a CMOS architecture; that a quick view can be provided without reconstruction; and that, at a given compression ratio, the reconstructed image quality is much better than that of CS-based methods. The framework holds promise to be widely used in the future.

  10. Hyperspectral characterization of tissue simulating phantoms using a supercontinuum laser in a spatial frequency domain imaging instrument

    NASA Astrophysics Data System (ADS)

    Torabzadeh, Mohammad; Stockton, Patrick; Kennedy, Gordon T.; Saager, Rolf B.; Durkin, Anthony J.; Bartels, Randy A.; Tromberg, Bruce J.

    2018-02-01

    Hyperspectral Imaging (HSI) is a growing field in tissue optics due to its ability to collect continuous spectral features of a sample without a contact probe. Spatial Frequency Domain Imaging (SFDI) is a non-contact wide-field spectral imaging technique used to quantitatively characterize tissue structure and chromophore concentration. In this study, we designed a Hyperspectral SFDI (H-SFDI) instrument that integrates a supercontinuum laser source with a wavelength-tuning optical configuration and an sCMOS camera to extract spatial (field of view: 2 cm × 2 cm) and broadband spectral features (580-950 nm). A preliminary experiment was also performed to integrate the hyperspectral projection unit with a compressive single-pixel camera and the Light Labeling (LiLa) technique.

  11. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

    This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process analysis functionality (a troubleshooting tool). The paper formulates a practical performance design for determining the various encoder parameters. It also introduces image processing techniques that enhance the original CCTV digital image to lessen the burden on operators. Screenshots are shown for the surveillance functionality; for the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, the merger of CCTV surveillance with process data surveillance (the SCADA system), is also explained.

  12. Digital micromirror devices in Raman trace detection of explosives

    NASA Astrophysics Data System (ADS)

    Glimtoft, Martin; Svanqvist, Mattias; Ågren, Matilda; Nordberg, Markus; Östmark, Henric

    2016-05-01

    Imaging Raman spectroscopy based on tunable filters is an established technique for detecting single explosives particles at stand-off distances. However, large light losses are inherent in the design due to sequential imaging at different wavelengths, leading to effective transmission often well below 1%. The use of digital micromirror devices (DMD) and compressive sensing (CS) in imaging Raman explosives trace detection can improve light throughput and add significant flexibility compared to existing systems. DMDs are based on mature microelectronics technology, are compact and scalable, and can be customized for specific tasks, including new functions not available with current technologies. This paper focuses on how a DMD can be used for CS-based imaging Raman spectroscopy in stand-off explosives trace detection, evaluating the performance in terms of light throughput, image reconstruction ability and potential detection limits. This type of setup also offers the possibility of combining imaging Raman with non-spatially resolved fluorescence suppression techniques, such as Kerr gating. The system used consists of a second-harmonic Nd:YAG laser for sample excitation, collection optics, a DMD, a CMOS camera, and a spectrometer with an ICCD camera for signal gating and detection. Initial results for compressive sensing imaging Raman show a stable reconstruction procedure even at low signal levels and in the presence of interfering background signal. The approach is also shown to give increased effective light transmission without sacrificing molecular specificity or area coverage compared to filter-based imaging Raman, while adding flexibility so the setup can be customized for new functionality.

  13. A tone mapping operator based on neural and psychophysical models of visual perception

    NASA Astrophysics Data System (ADS)

    Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier

    2015-03-01

    High dynamic range imaging techniques involve capturing and storing real world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges only up to two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, and this process is called tone mapping. A good tone mapping operator must be able to produce a low dynamic range image that matches the perception of the real world scene as closely as possible. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
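
    Stage one lends itself to a compact sketch: scan a set of gamma values and keep the one whose tone-mapped lightness histogram is flattest. This is only one plausible reading of "best equalizes the lightness histogram", with an arbitrary gamma grid and bin count; the stage-two neural model is not reproduced.

```python
# Pick the gamma whose output histogram is closest to uniform, a simple
# stand-in for the paper's stage-one global range compression.
import numpy as np

def best_gamma(L, gammas=np.linspace(0.2, 1.0, 81), bins=64):
    """L: HDR lightness normalized to [0, 1]."""
    errs = []
    for g in gammas:
        hist, _ = np.histogram(L ** g, bins=bins, range=(0.0, 1.0), density=True)
        errs.append(np.sum((hist - 1.0) ** 2))    # distance from a flat histogram
    return gammas[int(np.argmin(errs))]
```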

  14. Quantitative surface topography assessment of directly compressed and roller compacted tablet cores using photometric stereo image analysis.

    PubMed

    Allesø, Morten; Holm, Per; Carstensen, Jens Michael; Holm, René

    2016-05-25

    Surface topography, in the context of surface smoothness/roughness, was investigated by the use of an image analysis technique, MultiRay™, related to photometric stereo, on different tablet batches manufactured either by direct compression or roller compaction. In the present study, oblique illumination of the tablet (darkfield) was considered, and the area of cracks and pores in the surface was used as a measure of tablet surface topography; the higher the value, the rougher the surface. The investigations demonstrated the high precision of the proposed technique, which was able to rapidly (within milliseconds) and quantitatively measure the surface topography of the produced tablets. Compaction history, in the form of applied roll force and tablet punch pressure, was also reflected in the measured smoothness of the tablet surfaces. Generally, a higher degree of plastic deformation of the microcrystalline cellulose resulted in a smoother tablet surface. Altogether, this demonstrated that the technique provides the pharmaceutical developer with a reliable, quantitative response parameter for the visual appearance of solid dosage forms, which may be used for process and ultimately product optimization.

  15. GPU Lossless Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well suited for parallel hardware implementation. A GPU implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA. The GPU implementation on an NVIDIA GeForce GTX 580 achieves a throughput of 583.08 Mbits/sec (44.85 MSamples/sec), an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single-core Intel Xeon processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will in the future provide a fast and practical real-time solution for airborne and space applications.

  16. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI; it directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding iterative algorithm is a modification of ISTA and minimizes non-convex functions. It is shown that the proposed p-thresholding iterative algorithm can effectively recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-thresholding, soft-thresholding and hard-thresholding techniques at different reduction factors.
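
    A bare-bones version of the iteration is short. The sketch below runs ISTA on undersampled Cartesian k-space with a p-shrinkage rule that reduces to soft thresholding at p = 1; the paper's exact thresholding function, sampling pattern, and sparsifying transform may differ.

```python
# ISTA for undersampled-Fourier MRI recovery with a generalized
# p-shrinkage step (soft thresholding when p = 1).
import numpy as np

def p_threshold(z, lam, p):
    mag = np.abs(z)
    keep = np.maximum(mag - lam * np.maximum(mag, 1e-12) ** (p - 1), 0.0)
    return keep * np.exp(1j * np.angle(z))          # shrink magnitude, keep phase

def ista_mri(y, mask, lam=0.01, p=0.8, iters=200):
    """y: k-space with zeros at unsampled points; mask: boolean sampling mask."""
    x = np.zeros_like(y)
    for _ in range(iters):
        resid = mask * (np.fft.fft2(x, norm="ortho") - y)   # data-consistency gradient
        x = p_threshold(x - np.fft.ifft2(resid, norm="ortho"), lam, p)
    return x
```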

  17. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for diagnosing disease, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with growing medical image databases and the rapid growth of patient case reports, PACS requires image compression to accelerate image transmission and conserve disk space, thereby diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM), and high compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis; the ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios achievable with JPEG and JPEG2000 for 3-D US images, and the results indicate the possible bit rates using JPEG and JPEG2000 for 3-D breast US images.

  18. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  19. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes captured with only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  20. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids and Jupiter's rings. The study enlisted fifteen volunteer subjects representing twelve institutions associated with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression step sizes (q-factors) were used to achieve different compression ratios, and the acceptability of the compressed monochromatic astronomical images was evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization step size. Observers were asked to select the image with the highest overall quality, to support them in carrying out their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented with and without noise to each subject, based upon the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and lower the acceptable compression ratios; and (3) atmospheric images of Jupiter seem to tolerate compression ratios 4 to 5 times those of some clear surface satellite images.

  1. Reliability analysis of the epidural spinal cord compression scale.

    PubMed

    Bilsky, Mark H; Laufer, Ilya; Fourney, Daryl R; Groff, Michael; Schmidt, Meic H; Varga, Peter Paul; Vrionis, Frank D; Yamada, Yoshiya; Gerszten, Peter C; Kuklo, Timothy R

    2010-09-01

    The evolution of imaging techniques, along with highly effective radiation options, has changed the way metastatic epidural tumors are treated. While high-grade epidural spinal cord compression (ESCC) frequently serves as an indication for surgical decompression, no consensus exists in the literature about the precise definition of this term. The advancement of treatment paradigms for patients with metastatic spine tumors requires a clear ESCC grading scheme. The degree of ESCC often serves as a major determinant in the decision to operate or irradiate. The purpose of this study was to determine the reliability and validity of a 6-point, MR imaging-based grading system for ESCC. To determine the reliability of the grading scale, a survey was distributed to 7 spine surgeons who participate in the Spine Oncology Study Group. The MR images of 25 cervical or thoracic spinal tumors were distributed, consisting of 1 sagittal image and 3 axial images at the identical level, including T1-weighted, T2-weighted, and Gd-enhanced T1-weighted images. The survey was administered 3 times at 2-week intervals, and the inter- and intrarater reliability was assessed. The inter- and intrarater reliability ranged from good to excellent when surgeons were asked to rate the degree of spinal cord compression using T2-weighted axial images, and the T2-weighted images were superior indicators of ESCC compared with T1-weighted images with and without Gd. The ESCC scale provides a valid and reliable instrument that may be used to describe the degree of ESCC based on T2-weighted MR images. This scale accounts for recent advances in the treatment of spinal metastases and may be used to provide an ESCC classification scheme for multicenter clinical trials and outcome studies.

  2. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of the project's evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  3. Using x-ray mammograms to assist in microwave breast image interpretation.

    PubMed

    Curtis, Charlotte; Frayne, Richard; Fear, Elise

    2012-01-01

    Current clinical breast imaging modalities include ultrasound, magnetic resonance (MR) imaging, and the ubiquitous X-ray mammography. Microwave imaging, which takes advantage of differing electromagnetic properties to obtain image contrast, shows potential as a complementary imaging technique. As an emerging modality, interpretation of 3D microwave images poses a significant challenge. MR images are often used to assist in this task, and X-ray mammograms are readily available. However, X-ray mammograms provide 2D images of a breast under compression, resulting in significant geometric distortion. This paper presents a method to estimate the 3D shape of the breast and locations of regions of interest from standard clinical mammograms. The technique was developed using MR images as the reference 3D shape with the future intention of using microwave images. Twelve breast shapes were estimated and compared to ground truth MR images, resulting in a skin surface estimation accurate to within an average Euclidean distance of 10 mm. The 3D locations of regions of interest were estimated to be within the same clinical area of the breast as corresponding regions seen on MR imaging. These results encourage investigation into the use of mammography as a source of information to assist with microwave image interpretation as well as validation of microwave imaging techniques.

  4. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

    This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated from Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation and implemented on a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high-quality images. After CS reconstructions were applied to all acquired data, the resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS images reconstructed from prospectively undersampled machine data were indeed comparable to CS images reconstructed from retrospectively undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration of image acquisition with satisfactory image reconstruction quality.

  5. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. Together, these are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
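
    The exemplar-selection step maps directly onto scikit-learn's AffinityPropagation. The sketch below assumes downscaled grayscale frames as feature vectors, which is a simplification of the paper's tiny-video representation.

```python
# Frame sampling via affinity propagation: the returned exemplar frames
# summarize the video, trading compression against recall.
import numpy as np
from sklearn.cluster import AffinityPropagation

def exemplar_frames(frames):
    """frames: (num_frames, h, w) array of downscaled grayscale frames."""
    X = frames.reshape(len(frames), -1).astype(float)
    ap = AffinityPropagation(random_state=0).fit(X)
    return ap.cluster_centers_indices_              # indices of exemplar frames
```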

  6. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this becomes an important issue: for example, a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain, and is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it denoises the image and enhances its contours.
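
    For reference, the screening operation itself is a per-pixel threshold against a tiled halftone mask, as in the spatial-domain sketch below; the paper's contribution is evaluating this threshold on the DCT coefficients without fully decompressing the JPEG stream.

```python
# Halftoning by screening: threshold each pixel against a periodic mask.
# Spatial-domain baseline only; the compressed-domain version operates
# on DCT blocks instead.
import numpy as np

def screen_halftone(img, mask):
    """img: grayscale array in [0, 255]; mask: small threshold matrix."""
    reps = (-(-img.shape[0] // mask.shape[0]), -(-img.shape[1] // mask.shape[1]))
    th = np.tile(mask, reps)[:img.shape[0], :img.shape[1]]   # tile to image size
    return np.where(img > th, 255, 0).astype(np.uint8)       # bilevel output
```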

  7. A variable resolution x-ray detector for computed tomography: I. Theoretical basis and experimental verification.

    PubMed

    DiBianca, F A; Gupta, V; Zeman, H D

    2000-08-01

    A computed tomography imaging technique called variable resolution x-ray (VRX) detection provides detector resolution ranging from that of clinical body scanning to that of microscopy (1 cy/mm to 100 cy/mm). The VRX detection technique is based on a new principle denoted as "projective compression" that allows the detector resolution element to scale proportionally to the image field size. Two classes of VRX detector geometry are considered. Theoretical aspects related to x-ray physics and data sampling are presented. Measured resolution parameters (line-spread function and modulation-transfer function) are presented and discussed. A VRX image that resolves a pair of 50 micron tungsten hairs spaced 30 microns apart is shown.

  8. Volumetric MRI of the lungs during forced expiration.

    PubMed

    Berman, Benjamin P; Pandey, Abhishek; Li, Zhitao; Jeffries, Lindsie; Trouard, Theodore P; Oliva, Isabel; Cortopassi, Felipe; Martin, Diego R; Altbach, Maria I; Bilgin, Ali

    2016-06-01

    Lung function is typically characterized by spirometer measurements, which do not offer spatially specific information. Imaging during exhalation provides spatial information but is challenging due to large movement over a short time. The purpose of this work is to provide a solution to lung imaging during forced expiration using accelerated magnetic resonance imaging. The method uses a radial golden-angle stack-of-stars gradient echo acquisition and compressed sensing reconstruction. A technique for dynamic three-dimensional imaging of the lungs from highly undersampled data is developed and tested on six subjects. This method takes advantage of image sparsity, both spatially and temporally, including the use of reference frames called bookends. Sparsity with respect to total variation, and the residual from the bookends, enable reconstruction from an extremely limited amount of data. Dynamic three-dimensional images can be captured at sub-150 ms temporal resolution, using only three (or fewer) acquired radial lines per slice per timepoint. The images have a spatial resolution of 4.6×4.6×10 mm. Lung volume calculations based on image segmentation are compared to those from simultaneously acquired spirometer measurements. Dynamic lung imaging during forced expiration is made possible by compressed sensing accelerated dynamic three-dimensional radial magnetic resonance imaging. Magn Reson Med 75:2295-2302, 2016.

  9. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To perform content-based retrieval in the historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.

  10. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and they require sufficient image quality for clinical review. We have developed a PACS able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and have investigated suitable compression methods and compression ratios for clinical image review. Clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns that may appear in only one frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. To determine an acceptable compression ratio, we performed a subjective evaluation. No subject could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the originals and 1:20 lossy compressed JPEG images, although the quality was still judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data volume and cost while maintaining quality for clinical review.

  11. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  12. Time reversal and phase coherent music techniques for super-resolution ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Labyed, Yassin

    Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.

  13. Estimated spectrum adaptive postfilter and the iterative pre-post filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  14. Proceedings of the Scientific Data Compression Workshop

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K. (Editor)

    1989-01-01

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 Megabits per second if operated simultaneously. Such high data rates cause stresses in all aspects of end-to-end data systems, and technologies and techniques are needed to relieve them. Potential solutions to the massive data rate problems are: data editing, greater transmission bandwidths, higher density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.

  15. SU-G-IeP2-06: Evaluation of Registration Accuracy for Cone-Beam CT Reconstruction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Wang, P; Zhang, H

    2016-06-15

    Purpose: Cone-beam (CB) computed tomography (CT) is used for image guidance during radiotherapy treatment delivery. Conventional Feldkamp and compressed sensing (CS) based CBCT reconstruction techniques are compared for image registration. This study evaluates the image registration accuracy of conventional and CS CBCT for head-and-neck (HN) patients. Methods: Ten HN patients with oropharyngeal tumors were retrospectively selected. Each HN patient had one planning CT (CTP), and three CBCTs were acquired during an adaptive radiotherapy protocol. Each CBCT was reconstructed by both the conventional (CBCTCON) and compressed sensing (CBCTCS) methods. Two oncologists manually labeled 23 landmarks of normal tissue and implanted gold markers on both the CTP and CBCTCON. Subsequently, landmarks on the CTP were propagated to the CBCTs using a b-spline-based deformable image registration (DIR) and rigid registration (RR). The errors of these registration methods were calculated for both CBCT reconstruction methods. Results: For DIR, the mean distance between the propagated and the labeled landmarks was 2.8 mm ± 0.52 for CBCTCS, and 3.5 mm ± 0.75 for CBCTCON. For RR, the mean distance between the propagated and the labeled landmarks was 6.8 mm ± 0.92 for CBCTCS, and 8.7 mm ± 0.95 for CBCTCON. Conclusion: This study has demonstrated that CS CBCT is more accurate than conventional CBCT in image registration by both rigid and non-rigid methods, suggesting that CS CBCT is an improved image modality for image-guided adaptive applications.

  16. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We previously reported the results of a pilot study using JPEG compression of 24-bit color endoscopic images. That study indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31 and 99 times smaller than their original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye toward application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  17. Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.

    PubMed

    Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven

    2014-11-01

    Superharmonic imaging improves spatial resolution by using the higher-order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that cannot be filtered in the time or frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the B-mode image quality. In this study, the isolation of higher-order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed using linear frequency-modulated chirp excitation with bandwidths varying from 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results are presented with and without the FChT filtering technique, comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved better resolution, as expected; however, range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without affecting the axial resolution.
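
    The receive-side gain comes from classic pulse compression: correlating the echo with the transmitted chirp. The fragment below compresses a linear FM pulse with its matched filter using SciPy; the carrier, bandwidth, and sampling values are illustrative only, and the harmonic matched filters and FChT step are not reproduced.

```python
# Pulse compression of a linear FM chirp: the matched filter is the
# time-reversed transmit waveform, concentrating the pulse energy into
# a short, high-SNR peak.
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 100e6                                    # sample rate (illustrative)
t = np.arange(0, 10e-6, 1 / fs)               # 10-microsecond burst
tx = chirp(t, f0=2.0e6, t1=t[-1], f1=3.0e6)   # wideband LFM excitation
echo = np.concatenate([np.zeros(500), tx, np.zeros(500)])  # toy delayed echo
compressed = fftconvolve(echo, tx[::-1], mode="same")      # matched filtering
peak = int(np.argmax(np.abs(compressed)))     # compressed pulse location
```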

  18. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding/extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image designed is image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization, modifying coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against best-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
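
    The odd-even quantization rule itself is simple to state in code. In this hedged sketch, delta stands in for a coefficient's JND threshold, and the wavelet decomposition and coefficient selection are omitted:

    ```python
    # Odd-even quantization: snap a coefficient to an even multiple of delta
    # to hide bit 0, or an odd multiple to hide bit 1; extraction only needs
    # the parity of the quantized coefficient (blind extraction).
    import numpy as np

    def embed_bit(coeff, bit, delta=8.0):
        q = int(np.round(coeff / delta))
        if q % 2 != bit:                       # flip parity toward the coefficient
            q += 1 if coeff >= q * delta else -1
        return q * delta

    def extract_bit(coeff, delta=8.0):
        return int(np.round(coeff / delta)) % 2

    c = 37.3
    w = embed_bit(c, 1)
    print(w, extract_bit(w))                   # 40.0 and the recovered bit 1
    ```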

  19. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    NASA Technical Reports Server (NTRS)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly fast, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed the available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress the count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the absence of error indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
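
    As a rough illustration of the transform stage only, the sketch below applies a multilevel 2-D DWT to simulated Poisson counts and verifies perfect reconstruction; it assumes the PyWavelets package and does not reproduce the CCSDS bit-plane encoder:

    ```python
    # 2-D Haar DWT of toy plasma count data: most coefficient energy collapses
    # into a few values, which is what makes bit-plane coding effective.
    import numpy as np
    import pywt

    counts = np.random.poisson(lam=5.0, size=(32, 32)).astype(float)
    coeffs = pywt.wavedec2(counts, "haar", level=3)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # Energy compaction: the fraction of coefficients that matter is small
    print("coefficients with magnitude > 1:", np.mean(np.abs(flat) > 1.0))

    rec = pywt.waverec2(
        pywt.array_to_coeffs(flat, slices, output_format="wavedec2"), "haar")
    print("max reconstruction error:", np.abs(rec - counts).max())  # ~1e-13
    ```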

  20. The quality mammographic image. A review of its components.

    PubMed

    Rickard, M T

    1989-11-01

    Seven major factors resulting in a quality or high contrast and high resolution mammographic image have been discussed. The following is a summary of their key features: 1) Dedicated mammographic equipment. --Molybdenum target material --Molybdenum filter, beryllium window --Low kVp usage, in range of 24 to 30 --Routine contact mammography performed at 25 kVp --Slightly lower kVp for coned compression --Slightly higher kVp for microfocus magnification 2) Film density --Phototimer with adjustable position --Calibration of phototimer to optimal optical density of approx. 1.4 over full kVp range 3) Breast Compression --General and focal (coned compression). --Essential to achieve proper contrast, resolution and breast immobility. --Foot controls preferable. 4) Focal Spot. --Size recommendation for contact work 0.3 mm. --Minimum power output of 100 mA at 25 kVp desirable to avoid movement blurring in contact grid work. --Size recommendation for magnification work 0.1 mm. 5) Grid. --Usage recommended as routine in all but magnification work. 6) Film-screen Combination. --High contrast--high speed film. --High resolution screen. --Specifically designed cassette for close film-screen contact and low radiation absorption. --Use of faster screens for magnification techniques. 7) Dedicated processing. --Increased developing time--40 to 45 seconds. --Increased developer temperature--35 to 38 degrees. --Adjusted replenishment rate and dryer temperature. All seven factors contributing to image contrast and resolution affect radiation dosage to the breast. The risk of increased dosage associated with the use of various techniques needs to be balanced against the risks of incorrect diagnosis associated with their non-use.(ABSTRACT TRUNCATED AT 250 WORDS)

  1. Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1977-01-01

    An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.

  2. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years, and numerous compression techniques incorporating VQ have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of LBG quantizers, including search complexity, memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
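
    The LBG baseline the report builds on can be sketched with a generic k-means codebook; SciPy's kmeans2 stands in for LBG training here, and the random image and 4x4 block size are illustrative:

    ```python
    # Plain vector quantization of 4x4 image blocks: train a 32-word codebook,
    # then replace every block by its nearest codeword (5 bits per block).
    import numpy as np
    from scipy.cluster.vq import kmeans2

    img = np.random.rand(64, 64)
    blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
    codebook, labels = kmeans2(blocks, 32, minit="++")

    decoded = codebook[labels]            # decoder: codebook lookup only
    print("MSE:", np.mean((decoded - blocks) ** 2))
    ```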

  3. EBLAST: an efficient high-compression image transformation. 3. Application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking artifacts at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. The discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  4. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).

  5. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
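
    The near-lossless guarantee comes from quantizing prediction residuals with step 2d+1, which bounds every pixel error by d. The sketch below uses a simple previous-pixel predictor as a stand-in for the paper's adaptive block-based scheme:

    ```python
    # Closed-loop near-lossless coding of one row: residuals are quantized
    # with step 2d+1, so |original - reconstructed| <= d at every pixel.
    import numpy as np

    def quantize(r, d):
        step = 2 * d + 1
        return (r + d) // step if r >= 0 else -((-r + d) // step)

    def near_lossless_row(row, d=2):
        recon = np.empty_like(row)
        prev = 0                          # predict from reconstructed pixels
        for i, x in enumerate(row):
            q = quantize(int(x) - prev, d)
            prev = prev + q * (2 * d + 1)
            recon[i] = prev
        return recon

    row = np.random.randint(0, 256, size=64)
    rec = near_lossless_row(row, d=2)
    print("max error:", np.abs(rec - row).max())   # always <= 2
    ```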

  6. Triaxial testing system for pressure core analysis using image processing technique

    NASA Astrophysics Data System (ADS)

    Yoneda, J.; Masui, A.; Tenma, N.; Nagao, J.

    2013-11-01

    In this study, a newly developed, innovative triaxial testing system for investigating the strength, deformation behavior, and/or permeability of gas hydrate-bearing sediments in the deep sea is described. Transport of the pressure core from the storage chamber to the interior of the sealing sleeve of a triaxial cell without depressurization was achieved. An image processing technique was used to capture the motion and local deformation of a specimen in a transparent acrylic triaxial pressure cell, and digital photographs were obtained at each strain level during the compression test. The material strength was successfully measured and the failure mode was evaluated under high confining and pore water pressures.

  7. Coronary angiogram video compression for remote browsing and archiving applications.

    PubMed

    Ouled Zaid, Azza; Fradj, Bilel Ben

    2010-12-01

    In this paper, we propose an H.264/AVC-based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics of equal-size slice regions to determine the regions with relevant information content, which are encoded using the H.264 coding paradigm. The other regions are compressed using fixed-block motion compensation and conventional hard-decision quantization. Experiments have shown that, at the same bitrate, this procedure reduces the H.264 coder computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements.

  8. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive on the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. A survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard for image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  9. Generation of a suite of 3D computer-generated breast phantoms from a limited set of human subject data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Christina M. L.; Palmeri, Mark L.; Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina 27710

    2013-04-15

    Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the 'base' and 'target' for morphing. Several combinations of transformations were applied to morph between the 'base' and 'target' datasets, such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing three human subject datasets, were used in a preliminary observer evaluation where four board-certified breast radiologists with varying amounts of experience ranked the level of realism (from 1 = 'fake' to 10 = 'real') of the simulated images. Results: The morphing technique was able to successfully generate new and unique morphed datasets from the original human subject data. The radiologists evaluated the realism of simulated mammograms generated from the morphed and unmorphed human subject datasets and scored the realism with an average ranking of 5.87 ± 1.99, confirming that overall the phantom image datasets appeared more 'real' than 'fake.' Moreover, there was not a significant difference (p > 0.1) between the realism of the unmorphed datasets (6.0 ± 1.95) compared to the morphed datasets (5.86 ± 1.99). Three of the four observers had overall average rankings of 6.89 ± 0.89, 6.9 ± 1.24, and 6.76 ± 1.22, whereas the fourth observer ranked them noticeably lower at 2.94 ± 0.7. Conclusions: This work presents a technique that can be used to generate a suite of realistic computerized breast phantoms from a limited number of human subjects. This suite of flexible breast phantoms can be used for multimodality imaging research to provide a known truth while concurrently producing realistic simulated imaging data.
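
    The morphing step can be caricatured as interpolation between two co-registered datasets. The sketch below uses plain linear blending as a stand-in for the paper's mesh-based geometric transformations:

    ```python
    # Generate a family of intermediate phantoms by blending a 'base' and a
    # 'target' volume; each t gives one pseudo-independent phantom.
    import numpy as np

    base = np.random.rand(32, 32, 32)     # stand-ins for segmented bCT volumes
    target = np.random.rand(32, 32, 32)

    def morph(base, target, t):
        """Blend between base (t=0) and target (t=1)."""
        return (1.0 - t) * base + t * target

    suite = [morph(base, target, t) for t in np.linspace(0.1, 0.9, 9)]
    print(len(suite), "morphed volumes generated")
    ```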

  10. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on the hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  11. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate and requires less recognition time in the compressed domain.

  12. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method for evaluating the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  13. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread spectrum systems (SSS) and code division multiple access systems (CDMAS) have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with their corresponding m-sequences to obtain encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Second, a two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.
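
    The multiplexing-by-correlation idea can be demonstrated with orthogonal spreading codes. In the sketch below, Walsh-Hadamard rows stand in for m-sequences, which makes the recovery exact rather than approximate:

    ```python
    # Code-division multiplexing of images: spread each image's pixels by its
    # own code, sum the coded images, and recover one image by correlating
    # the superimposed data with the matching code.
    import numpy as np
    from scipy.linalg import hadamard

    L = 8                                   # code length
    H = hadamard(L)                         # rows are mutually orthogonal
    imgs = [np.random.randint(0, 256, (16, 16)) for _ in range(4)]
    codes = [H[i + 1] for i in range(4)]    # skip the all-ones row

    # Spread each pixel into L chips, then superimpose all coded images
    mux = sum(np.multiply.outer(img, code) for img, code in zip(imgs, codes))

    rec0 = mux @ codes[0] / L               # despread by correlation
    print(np.array_equal(rec0, imgs[0]))    # True: exact recovery
    ```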

  14. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread spectrum systems (SSS) and code division multiple access systems (CDMAS) have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with their corresponding m-sequences to obtain encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Second, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.

  15. Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan

    2018-04-01

    Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from low speed. When implemented with a sub-terahertz (sub-THz) single-detector scan imaging system, this technique provides images whose resolution is limited only by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in a standoff configuration from the source. The experiments are done in a reflection-mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D coupled with a modification of the masks in CS theory allowed oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated signals and real signals. It is found that the modified CS approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.
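
    The reconstruction stage can be sketched with a generic sparse solver. Below, ISTA, a basic ℓ1 solver, stands in for TVAL3/ℓ1-MAGIC, and random ±1 masks play the role of the scan patterns; all sizes are illustrative:

    ```python
    # Single-pixel CS toy problem: M < N measurements of a K-sparse scene,
    # recovered by iterative soft thresholding (ISTA).
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 100, 40, 5
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
    A = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # mask patterns
    y = A @ x                                # one number per mask

    lam = 0.01
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    s = np.zeros(N)
    for _ in range(500):
        g = s + (A.T @ (y - A @ s)) / L      # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage

    print("relative error:", np.linalg.norm(s - x) / np.linalg.norm(x))
    ```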

  16. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is more and more demanding in terms of image compression performance. Earth observation satellite instrument resolution, agility, and swath are continuously increasing, multiplying the volume of imagery acquired in one orbit by a factor of 10. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC, which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n° 22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 standard for image data compression, has a SpaceWire interface for configuring and controlling the device, and is compatible with the Sentinel-2 interface and similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very high-speed image compression ASIC potentially relevant for the compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data compression. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  17. Estimation of stress relaxation time for normal and abnormal breast phantoms using optical technique

    NASA Astrophysics Data System (ADS)

    Udayakumar, K.; Sujatha, N.

    2015-03-01

    Many of the micro-anomalies that occur early in the breast may transform into a deadly cancerous tumor in the future. The probability of curing early abnormalities in the breast is higher if they are correctly identified. Even in mammography, considered the gold standard technique for breast imaging, it is hard to pick up early changes in breast tissue, owing to the difference in mechanical behavior of normal and abnormal tissue when subjected to compression prior to x-ray or laser exposure. In this paper, an attempt has been made to estimate the stress relaxation time of normal and abnormal breast-mimicking phantoms using laser speckle image correlation. A phantom mimicking the normal breast was prepared and subjected to precise mechanical compression. The phantom was illuminated by a helium-neon laser and, using a CCD camera, a sequence of strained phantom speckle images was captured and correlated by the image mean intensity value at specific time intervals. From the relation between mean intensity and time, the tissue stress relaxation time was quantified. Experiments were repeated for phantoms with increased stiffness, mimicking abnormal tissue, over similar ranges of applied loading. Results show that the stiffer phantom, representing abnormal tissue, exhibits uniform relaxation for varying loads in the selected range, whereas the less stiff phantom, representing normal tissue, shows irregular behavior for varying loads in the given range.

  18. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by the alternating minimization method. The proposed method addresses the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.

  19. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which will help optimize the hardware implementation in VLSI and overcome the difficulties of traditional gradient descent in learning convergence and hardware implementation.
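
    One classical sequential-learning rule for PCA is Oja's update, sketched below as a generic illustration of on-line principal component extraction; it is not the paper's specific circuit architecture:

    ```python
    # Oja's rule: update the weight vector one sample at a time; it converges
    # to the first principal component with an approximately unit norm.
    import numpy as np

    rng = np.random.default_rng(1)
    # Correlated 2-D data whose principal axis lies along (1, 1)
    X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [1.8, 2.0]])

    w = rng.normal(size=2)
    eta = 1e-4
    for x in X:                       # one sample at a time, as on-chip
        y = w @ x
        w += eta * y * (x - y * w)    # Oja's update

    top = np.linalg.eigh(np.cov(X.T))[1][:, -1]
    print("alignment with true component:", abs(w @ top) / np.linalg.norm(w))
    ```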

  20. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters

    NASA Astrophysics Data System (ADS)

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.

    2015-12-01

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between the compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.

  1. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters.

    PubMed

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S

    2015-12-07

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between the compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.

  2. Analysis of hyperspectral fluorescence images for poultry skin tumor inspection

    NASA Astrophysics Data System (ADS)

    Kong, Seong G.; Chen, Yud-Ren; Kim, Intaek; Kim, Moon S.

    2004-02-01

    We present a hyperspectral fluorescence imaging system with a fuzzy inference scheme for detecting skin tumors on poultry carcasses. Hyperspectral images reveal spatial and spectral information useful for finding pathological lesions or contaminants on agricultural products. Skin tumors are not obvious because the visual signature appears as a shape distortion rather than a discoloration. Fluorescence imaging allows the visualization of poultry skin tumors more easily than reflectance. The hyperspectral image samples obtained for this poultry tumor inspection contain 65 spectral bands of fluorescence in the visible region of the spectrum at wavelengths ranging from 425 to 711 nm. The large amount of hyperspectral image data is compressed by use of a discrete wavelet transform in the spatial domain. Principal-component analysis provides an effective compressed representation of the spectral signal of each pixel in the spectral domain. A small number of significant features are extracted from two major spectral peaks of relative fluorescence intensity that have been identified as meaningful spectral bands for detecting tumors. A fuzzy inference scheme that uses a small number of fuzzy rules and Gaussian membership functions successfully detects skin tumors on poultry carcasses. Spatial-filtering techniques are used to significantly reduce false positives.

  3. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while ensuring encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.

  4. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS).

    PubMed

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths, based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect underlying damage in a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover information about the microstructures based on measurement data related to the internal situation of the STS structure. A numerical concrete tube model with various levels of damage was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique is capable of detecting damage in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique.

  5. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23×, and 46× compression. Counts from the TIFF-format images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  6. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at practical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
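
    The decorrelation step can be sketched directly: diagonalize the 3x3 inter-plane covariance and project each pixel onto its eigenvectors. The random image below is a stand-in; a real implementation would then compress each KL plane separately:

    ```python
    # Karhunen-Loeve decorrelation of color planes: after projection, the
    # inter-plane covariance is (numerically) diagonal.
    import numpy as np

    rgb = np.random.rand(64, 64, 3)             # stand-in color image
    flat = rgb.reshape(-1, 3)
    mean = flat.mean(axis=0)
    cov = np.cov((flat - mean).T)               # 3x3 inter-plane covariance
    _, vecs = np.linalg.eigh(cov)
    kl_planes = (flat - mean) @ vecs            # decorrelated KL planes

    print(np.round(np.cov(kl_planes.T), 6))     # off-diagonals ~ 0
    ```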

  7. Performance assessment of a single-pixel compressive sensing imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2016-05-01

    Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array that satisfies the Nyquist criterion had been used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large-format array while using smaller, cheaper, and lower-bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld-object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolutions, and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.

  8. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. First, we add a specific encryption level related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  9. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using the wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
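
    The residual-coding idea behind the infinite-PSNR result can be sketched in a few lines; here coarse quantization stands in for the wavelet-fractal stage and zlib stands in for the Huffman coder:

    ```python
    # Lossy stage + losslessly coded residual = exact reconstruction.
    import zlib
    import numpy as np

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    lossy = (img // 16) * 16                    # crude lossy approximation
    residual = (img - lossy).astype(np.int8)    # small-amplitude residual

    packed = zlib.compress(residual.tobytes())  # entropy-coding stand-in
    restored = lossy + np.frombuffer(
        zlib.decompress(packed), dtype=np.int8).reshape(img.shape)
    print(np.array_equal(restored, img), len(packed), "residual bytes")
    ```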

  10. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database research, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters; therefore, the matching criterion is based on those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transform, which corresponds to linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.

  11. Adaptive foveated single-pixel imaging with dynamic supersampling

    PubMed Central

    Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.

    2017-01-01

    In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538

  12. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  13. Estimating zero strain states of very soft tissue under gravity loading using digital image correlation

    PubMed Central

    Gao, Zhan; Desai, Jaydev P.

    2009-01-01

    This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global Digital Image Correlation technique is used to measure the full-field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression into a state of tension. A new method for estimating the zero strain state is proposed: the zero strain position is close to, but ahead of, the position of the maximum stretching band, or in other words, the tangent of a nominal stress-stretch curve reaches a minimum at λ ≳ 1. The approach of identifying zero strain using the maximum incremental strain can be implemented in other types of image-based soft tissue analysis. The experimental results of ten samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676

  14. Compressed Sensing for fMRI: Feasibility Study on the Acceleration of Non-EPI fMRI at 9.4T

    PubMed Central

    Kim, Seong-Gi; Ye, Jong Chul

    2015-01-01

    The conventional functional magnetic resonance imaging (fMRI) technique known as gradient-recalled echo (GRE) echo-planar imaging (EPI) is sensitive to image distortion and degradation caused by local magnetic field inhomogeneity at high magnetic fields. Non-EPI sequences such as spoiled gradient echo and balanced steady-state free precession (bSSFP) have been proposed as alternative high-resolution fMRI techniques; however, the temporal resolution of these sequences is lower than that of the typically used GRE-EPI fMRI. One potential approach to improving the temporal resolution is to use compressed sensing (CS). In this study, we tested the feasibility of k-t FOCUSS—one of the high-performance CS algorithms for dynamic MRI—for non-EPI fMRI at 9.4T using a model of rat somatosensory stimulation. To optimize the performance of CS reconstruction, different sampling patterns and k-t FOCUSS variations were investigated. Experimental results show that an optimized k-t FOCUSS algorithm with acceleration by a factor of 4 works well for non-EPI fMRI at high field under various statistical criteria, which confirms that a combination of CS and a non-EPI sequence may be a good solution for high-resolution fMRI at high fields. PMID:26413503
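
    For readers unfamiliar with the FOCUSS family, the following Python sketch shows the core reweighted minimum-norm iteration on a generic underdetermined system; the temporal transform, prediction/residual split, and sampling patterns that make up k-t FOCUSS proper are omitted, and all sizes are illustrative.

```python
# FOCUSS-style iteration (illustrative sizes; not the full k-t FOCUSS algorithm).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x = np.ones(n)                          # uniform initial weights
for _ in range(30):
    W = np.diag(np.sqrt(np.abs(x)))     # reweight using the previous estimate
    AW = A @ W
    q = AW.T @ np.linalg.solve(AW @ AW.T + 1e-8 * np.eye(m), y)
    x = W @ q                           # minimum-norm solution pulled back through W

print(np.linalg.norm(x - x_true))       # small for sufficiently sparse x_true
```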

  15. Diagnosing upper extremity deep vein thrombosis with non-contrast-enhanced Magnetic Resonance Direct Thrombus Imaging: A pilot study.

    PubMed

    Dronkers, C E A; Klok, F A; van Haren, G R; Gleditsch, J; Westerlund, E; Huisman, M V; Kroft, L J M

    2018-03-01

    Diagnosing upper extremity deep vein thrombosis (UEDVT) can be challenging. Compression ultrasonography is often inconclusive because overlying anatomic structures hamper compression of the veins. Contrast venography is invasive and carries a risk of contrast allergy. Magnetic Resonance Direct Thrombus Imaging (MRDTI) and Three Dimensional Turbo Spin-echo Spectral Attenuated Inversion Recovery (3D TSE-SPAIR) are both non-contrast-enhanced Magnetic Resonance Imaging (MRI) sequences that can visualize a thrombus directly through the methemoglobin that forms in a fresh blood clot. MRDTI has been proven to be accurate in diagnosing deep venous thrombosis (DVT) of the leg. The primary aim of this pilot study was to test the feasibility of diagnosing UEDVT with these MRI techniques. MRDTI and 3D TSE-SPAIR were performed in 3 pilot patients who were already diagnosed with UEDVT by ultrasonography or contrast venography. In all patients, the UEDVT diagnosis could be confirmed by MRDTI and 3D TSE-SPAIR in all vein segments. In conclusion, this study showed that non-contrast MRDTI and 3D TSE-SPAIR sequences may be feasible tests to diagnose UEDVT. However, diagnostic accuracy and management studies have to be performed before these techniques can be routinely used in clinical practice.

  16. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
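
    The band-prediction idea is easy to prototype: model each new band as a linear combination of already-encoded bands and keep only the residual for lossless coding. The sketch below uses plain least squares on a synthetic correlated cube; the actual Z-Chrome filter design and entropy coder are not reproduced.

```python
# Linear (Wiener-style) inter-band prediction sketch on a toy hyperspectral cube.
import numpy as np

rng = np.random.default_rng(1)
bands, h, w = 8, 64, 64
cube = np.cumsum(rng.standard_normal((bands, h, w)), axis=0)  # correlated bands

prev = cube[:-1].reshape(bands - 1, -1).T   # pixels x (already-encoded bands)
target = cube[-1].ravel()                   # band to be predicted

coef, *_ = np.linalg.lstsq(prev, target, rcond=None)
residual = target - prev @ coef             # only this residual is losslessly coded

print(residual.std(), target.std())         # residual is much easier to compress
```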

  17. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at 2, 4, 8, 16, 32, 48, and 64 compression ratios. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R² = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
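
    Studies of this kind need images at fixed compression ratios rather than fixed quality settings. A hypothetical helper along the following lines, sketched with Pillow, searches for the JPEG quality setting that best approximates a target ratio for an 8-bit grayscale image; it is not the tooling used in the study.

```python
# Hypothetical helper: approximate a target JPEG compression ratio with Pillow.
import io
import numpy as np
from PIL import Image

def jpeg_at_ratio(img: Image.Image, target_ratio: float) -> bytes:
    raw_size = img.width * img.height            # 8-bit grayscale baseline
    best = None
    for q in range(1, 96):                       # scan Pillow's quality range
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)
        ratio = raw_size / buf.tell()
        if best is None or abs(ratio - target_ratio) < abs(best[0] - target_ratio):
            best = (ratio, buf.getvalue())
    return best[1]

img = Image.fromarray((np.random.rand(256, 256) * 255).astype('uint8'), mode='L')
data = jpeg_at_ratio(img, 32.0)                  # bytes of the ~32:1 JPEG
```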

  18. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
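
    The automatic-labeling signal described above can be sketched in a few lines: compute the joint entropy of the original and compressed images from their joint gray-level histogram. How entropies are mapped to acceptability labels is study-specific and not reproduced here.

```python
# Joint entropy of two 8-bit images, the quantity used for automatic labeling.
import numpy as np

def joint_entropy(a: np.ndarray, b: np.ndarray, bins: int = 256) -> float:
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

orig = (np.random.rand(128, 128) * 255).astype(np.uint8)
blocky = (orig // 16) * 16                # crude stand-in for compression damage
print(joint_entropy(orig, orig), joint_entropy(orig, blocky))
```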

  19. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is even more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
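
    The two ingredients the paper combines, DCT-domain sparsity and iterative soft thresholding (IST), can be sketched on a generic linear problem; the ERT forward operator is replaced here by a random matrix, and the step size and threshold are illustrative.

```python
# IST with DCT sparsity on a toy problem (the ERT physics is not modeled).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n, m = 256, 96
s = np.zeros(n)
s[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
x_true = idct(s, norm='ortho')          # signal that is sparse in the DCT domain
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

step, lam = 0.1, 0.02
x = np.zeros(n)
for _ in range(300):
    grad = A.T @ (A @ x - y)                                  # data-fit gradient
    z = dct(x - step * grad, norm='ortho')                    # to the sparse domain
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    x = idct(z, norm='ortho')                                 # back to model space

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # small relative error
```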

  20. Hybrid x-space: a new approach for MPI reconstruction.

    PubMed

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction, which requires knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.

  1. Sparsity based terahertz reflective off-axis digital holography

    NASA Astrophysics Data System (ADS)

    Wan, Min; Muniraj, Inbarasan; Malallah, Ra'ed; Zhao, Liang; Ryle, James P.; Rong, Lu; Healy, John J.; Wang, Dayong; Sheridan, John T.

    2017-05-01

    Terahertz radiation lies between the microwave and infrared regions of the electromagnetic spectrum. Emitted frequencies range from 0.1 to 10 THz, with corresponding wavelengths ranging from 3 mm to 30 μm. In this paper, a continuous-wave terahertz off-axis digital holographic system is described. A Gaussian fitting method and image normalisation techniques were employed on the recorded hologram to improve the image resolution. A synthesised contrast-enhanced hologram is then digitally constructed. Numerical reconstruction is achieved using the angular spectrum method on the filtered off-axis hologram. A sparsity-based compression technique is introduced before numerical reconstruction in order to reduce the dataset required for hologram reconstruction. Results show that a small sparse subset of the data is sufficient to reconstruct the hologram with good image quality.
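
    The angular spectrum reconstruction step mentioned above is a single FFT filter. The sketch below propagates a complex field over a distance z; the wavelength, pixel pitch, and distance are illustrative THz-range assumptions, and the sparsity-based compression stage is not shown.

```python
# Angular spectrum propagation sketch (illustrative THz parameters).
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2       # squared longitudinal frequency
    kz = 2j * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H_tf = np.where(arg > 0, np.exp(kz * z), 0.0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H_tf)

hologram = np.random.rand(256, 256)                 # stand-in for a filtered hologram
recon = angular_spectrum(hologram, wavelength=118.8e-6, pitch=50e-6, z=0.02)
```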

  2. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  3. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored, or memorized. The input image is divided into 4 blocks to be compressed and encrypted; the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
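
    The key-controlled measurement matrix described above can be prototyped directly: a logistic-map sequence, seeded by the key, fills the first row of a circulant matrix, and a subset of rows measures each block. The map parameter, block size, and measurement count below are illustrative, and the pixel-exchange step is omitted.

```python
# Key-controlled circulant measurement matrix from a logistic map (sketch).
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0: float, n: int, mu: float = 3.99) -> np.ndarray:
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)      # chaotic logistic iteration
        seq[i] = x
    return seq

n, m = 64, 32                                        # block length, measurements
row = 2.0 * logistic_sequence(x0=0.37, n=n) - 1.0    # key (x0, mu) -> +/- amplitudes
Phi = circulant(row)[:m, :]                          # keep m rows as the sensing matrix

block = np.random.rand(n)                            # one vectorised image block
y = Phi @ block                                      # compressed, key-dependent data
```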

  4. DCT-based cyber defense techniques

    NASA Astrophysics Data System (ADS)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect multimedia content from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most attack algorithms are robust to basic image processing techniques such as filtering, compression, and noise addition. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.

  5. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.

  6. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples owing to its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the Shannon rate, but it still requires substantial time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
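
    The structural point, that one small matrix measures every block so reconstruction can proceed block by block (which is what enables the real-time display), is captured by this short sketch; the block size and measurement rate are illustrative assumptions.

```python
# Block compressive sensing sketch: one shared matrix measures all blocks.
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((128, 128))
B, rate = 16, 0.5
m = int(rate * B * B)                                # measurements per block
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # shared per-block matrix

blocks = (img.reshape(128 // B, B, 128 // B, B)
             .swapaxes(1, 2)
             .reshape(-1, B * B))                    # one row per B x B block
Y = blocks @ Phi.T                                   # all block measurements at once
print(Y.shape)                                       # (64, 128): 64 blocks, m = 128
```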

  7. Study on ion implantation conditions in fabricating compressively strained Si/relaxed Si1-xCx heterostructures using the defect control by ion implantation technique

    NASA Astrophysics Data System (ADS)

    Arisawa, You; Sawano, Kentarou; Usami, Noritaka

    2017-06-01

    The influence of ion implantation energies on compressively strained Si/relaxed Si1-xCx heterostructures formed on Ar ion implanted Si substrates was investigated. It was found that the relaxation ratio can be enhanced to over 100% at relatively low implantation energies, and the compressive strain in the topmost Si layer is maximized at 45 keV due to the large lattice mismatch. Cross-sectional transmission electron microscope images revealed that defects are localized around the hetero-interface between the Si1-xCx layer and the Ar+-implanted Si substrate when the implantation energy is 45 keV, which decreases the amount of defects in the topmost Si layer and the upper part of the Si1-xCx buffer layer.

  8. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improving the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. Synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals in encoded video sequences. Our data hiding technique offers large data capacity and reduces the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signal samples. Our results also demonstrated a small distortion in objective video quality, a small increment in bit-rate, and embedding cost savings of -2.6196% for high- and medium-motion video sequences.

  9. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  10. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    For network distribution of movie files, lossy compression producing small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed formats plus uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and a comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, the best compression technique differed across evaluation factors. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format thus depends on the speed of the movie, owing to differences in the compression algorithms: movie compression combines inter-frame compression with intra-frame compression, and each method affects the image differently. It is therefore necessary to examine the relationship between the compression algorithm and our results.

  11. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) the X-ray information is redundant; or (c.) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise ratio as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  12. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  13. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  14. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates...Project Summary Project Title: Real-Time Aggressive Image Data Compression Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu Institution...Summary The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression

  15. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
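
    The front end described above (3D DWT, mean subtraction on the spatially low-pass subband, and conversion to sign-magnitude form) can be sketched with PyWavelets; the wavelet, the level count, and the toy cube are assumptions, and the context modeling and entropy coding themselves are not shown.

```python
# Sketch of the ICER-3D-style front end on a toy hyperspectral cube.
import numpy as np
import pywt

cube = np.random.rand(32, 64, 64)                  # (bands, rows, cols)

coeffs = pywt.wavedecn(cube, 'haar', level=2)      # 3-D discrete wavelet transform
approx = coeffs[0]                                 # spatially low-pass subband
approx -= approx.mean(axis=(1, 2), keepdims=True)  # subtract per-plane means

arr, slices = pywt.coeffs_to_array(coeffs)
q = np.round(arr).astype(np.int32)
sign, mag = np.signbit(q), np.abs(q)               # sign-magnitude form for coding
```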

  16. Grand Tour outer planet missions definition phase. Part 2: Minutes of meetings and official correspondence

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.; Aksnes, K.; Davies, M. E.; Hartmann, W. K.; Millis, R. L.; Owen, T. C.; Reilly, T. H.; Sagan, C.; Suomi, V. E.; Collins, S. A., Jr.

    1972-01-01

    A variety of imaging systems proposed for use aboard the Outer Planet Grand Tour Explorer are discussed and evaluated in terms of optimal resolution capability and efficient time utilization. It is pointed out that the planetary and satellite alignments at the time of encounter dictate a high degree of adaptability and versatility in order to provide sufficient image enhancement over earth-based techniques. Data compression methods are also evaluated according to the same criteria.

  17. Image processing techniques and applications to the Earth Resources Technology Satellite program

    NASA Technical Reports Server (NTRS)

    Polge, R. J.; Bhagavan, B. K.; Callas, L.

    1973-01-01

    The Earth Resources Technology Satellite system is studied, with emphasis on sensors, data processing requirements, and image data compression using the Fast Fourier and Hadamard transforms. The ERTS-A system and the fundamentals of remote sensing are discussed. Three user applications (forestry, crops, and rangelands) are selected and their spectral signatures are described. It is shown that additional sensors are needed for rangeland management. An on-board information processing system is recommended to reduce the amount of data transmitted.
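
    As a reminder of how the Hadamard transform mentioned above compresses image data, the sketch below transforms a block, keeps only the largest coefficients, and inverts; the block size and keep-ratio are illustrative.

```python
# Hadamard-transform compression sketch: transform, threshold, invert.
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n) / np.sqrt(n)               # orthonormal (and symmetric) basis
block = np.random.rand(n, n)

coefs = H @ block @ H.T                    # 2-D Hadamard transform
thresh = np.quantile(np.abs(coefs), 0.90)  # keep the top 10% of coefficients
coefs[np.abs(coefs) < thresh] = 0.0
recon = H @ coefs @ H.T                    # inverse = same matrix (orthonormal)
```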

  18. A browse facility for Earth science remote sensing data: Center director's discretionary fund final report

    NASA Technical Reports Server (NTRS)

    Meyer, P. J.

    1993-01-01

    An image data visual browse facility is developed for a UNIX platform using the X Windows 11 system. It allows one to visually examine reduced resolution image data to determine which data are applicable for further research. Links with a relational data base manager then allow one to extract not only the full resolution image data, but any other ancillary data related to the case study. Various techniques are examined for compression of the image data in order to reduce data storage requirements and time necessary to transmit the data on the internet. Data used were from the WetNet project.

  19. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  20. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. To assess the capacity of the MIOCE method, we evaluate the influence of the number of target images, which allows us to determine the performance limitation of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. The different spectral areas are then merged in a single spectrum plane. By choosing specific areas, we can compress together 38 images instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean-square-error (MSE) criterion.

  1. Performance enhancement of various real-time image processing techniques via speculative execution

    NASA Astrophysics Data System (ADS)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions about program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate the applicability of speculative execution, as sketched below. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
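
    A toy version of the speculative pattern reads as follows: start the expensive computation on the optimistic assumption that its guard will hold, evaluate the cheap guard concurrently, and roll back (discard the result) if the assumption fails. The guard, workload, and threshold are all hypothetical.

```python
# Speculative execution sketch: optimistic start, rollback by discarding.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def expensive_edge_map(img: np.ndarray) -> float:
    return float(np.abs(np.diff(img, axis=0)).sum())    # stand-in heavy stage

img = np.random.rand(512, 512)
with ThreadPoolExecutor() as pool:
    speculative = pool.submit(expensive_edge_map, img)  # assume the guard passes
    guard = img.mean() > 0.25                           # cheap check runs meanwhile
    result = speculative.result() if guard else None    # rollback = discard result
```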

  2. Cone-beam volume CT mammographic imaging: feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ning, Ruola

    2001-06-01

    X-ray projection mammography, using a film/screen combination or digital techniques, has proven to be the most effective imaging modality for early detection of breast cancer currently available. However, the inherent superimposition of structures makes small carcinoma (a few millimeters in size) difficult to detect in the occultation case or in dense breasts, resulting in a high false positive biopsy rate. Cone-beam x-ray projection based volume imaging using flat panel detectors (FPDs) makes it possible to obtain three-dimensional breast images. This may benefit diagnosis of the structure and pattern of the lesion while eliminating hard compression of the breast. This paper presents a novel cone-beam volume CT mammographic imaging protocol based on the above techniques. Through computer simulation, the key issues of the system and imaging techniques, including the x-ray imaging geometry and corresponding reconstruction algorithms, x-ray characteristics of breast tissues, x-ray setting techniques, absorbed dose estimation, and the quantitative effect of x-ray scattering on image quality, are addressed. The preliminary simulation results support the proposed cone-beam volume CT mammographic imaging modality with respect to feasibility and practicability for mammography. The absorbed dose level is comparable to that of current two-view mammography and would not be a prominent problem for this imaging protocol. Compared to traditional mammography, the proposed imaging protocol with isotropic spatial resolution will potentially provide significantly better low-contrast detectability of breast tumors and more accurate location of breast lesions.

  3. Molecular breast imaging using a dedicated high-performance instrument

    NASA Astrophysics Data System (ADS)

    O'Connor, Michael K.; Wagenaar, Douglas; Hruska, Carrie B.; Phillips, Stephen; Caravaglia, Gina; Rhodes, Deborah

    2006-08-01

    In women with radiographically dense breasts, the sensitivity of mammography is less than 50%. With the increase in the percent of women with dense breasts, it is important to look at alternative screening techniques for this population. This article reviews the strengths and weaknesses of current imaging techniques and focuses on recent developments in semiconductor-based gamma camera systems that offer significant improvements in image quality over that achievable with single-crystal sodium iodide systems. We have developed a technique known as Molecular Breast Imaging (MBI) using small field of view Cadmium Zinc Telluride (CZT) gamma cameras that permits the breast to be imaged in a similar manner to mammography, using light pain-free compression. Computer simulations and experimental studies have shown that use of low-energy high sensitivity collimation coupled with the excellent energy resolution and intrinsic spatial resolution of CZT detectors provides optimum image quality for the detection of small breast lesions. Preliminary clinical studies with a prototype dual-detector system have demonstrated that Molecular Breast Imaging has a sensitivity of ~90% for the detection of breast tumors less than 10 mm in diameter. By comparison, conventional scintimammography only achieves a sensitivity of 50% in the detection of lesions < 10 mm. Because Molecular Breast Imaging is not affected by breast density, this technique may offer an important adjunct to mammography in the evaluation of women with dense breast parenchyma.

  4. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741

  5. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. Applying Orthogonal Matching Pursuit CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform yields a remarkable peak at the central location of the nonlinear correlation coefficient distribution of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication, in which both compressed signals are combined to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  6. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  7. Computer-controlled multi-parameter mapping of 3D compressible flowfields using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Donohue, James M.; Victor, Kenneth G.; Mcdaniel, James C., Jr.

    1993-01-01

    A computer-controlled technique, using planar laser-induced iodine fluorescence, for measuring complex compressible flowfields is presented. A new laser permits the use of a planar two-line temperature technique so that all parameters can be measured with the laser operated narrowband. Pressure and temperature measurements in a step flowfield show agreement within 10 percent of a CFD model except in regions close to walls. The deviation of near-wall temperature measurements from the model was decreased from 21 percent to 12 percent compared to broadband planar temperature measurements. Computer control of the experiment has been implemented, except for the frequency tuning of the laser. Image data storage and processing have been improved by integrating a workstation into the experimental setup, reducing the data reduction time by a factor of 50.

  8. Simulation of FIB-SEM images for analysis of porous microstructures.

    PubMed

    Prill, Torben; Schladitz, Katja

    2013-01-01

    Focused ion beam-scanning electron microscopy (FIB-SEM) nanotomography yields high-quality three-dimensional images of material microstructures at the nanometer scale by combining serial sectioning with a focused ion beam and SEM imaging. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte-Carlo techniques yield accurate results but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for a single dataset. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time.

  9. Towards an in-plane methodology to track breast lesions using mammograms and patient-specific finite-element simulations

    NASA Astrophysics Data System (ADS)

    Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya

    2017-11-01

    In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using one or several imaging techniques. As x-ray-based mammography is widely used, a breast lesion is located in the plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based, semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient’s breast was generated from magnetic resonance imaging to simulate mammographic compressions in the cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each simulated compression, a virtual mammogram was obtained and subsequently superimposed on the corresponding real mammogram, using the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for the CC and MLO compressions, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.

  10. Improved Target Detection in Urban Structures Using Distributed Sensing and Fast Data Acquisition Techniques

    DTIC Science & Technology

    2013-04-01

    Trans. Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [83] A. Gurbuz, J. McClellan, and W. Scott, "Compressive sensing for subsurface imaging using ground penetrating radar," Signal Process., vol. 89, no. 10, pp. 1959-1972, 2009. [84] A. Gurbuz, J. McClellan, and W. Scott, "A

  11. Observation of hohlraum-wall motion with spectrally selective x-ray imaging at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Izumi, N.; Meezan, N. B.; Divol, L.; Hall, G. N.; Barrios, M. A.; Jones, O.; Landen, O. L.; Kroll, J. J.; Vonhof, S. A.; Nikroo, A.; Jaquez, J.; Bailey, C. G.; Hardy, C. M.; Ehrlich, R. B.; Town, R. P. J.; Bradley, D. K.; Hinkel, D. E.; Moody, J. D.

    2016-11-01

    The high fuel capsule compression required for indirect drive inertial confinement fusion demands careful control of the X-ray drive symmetry throughout the laser pulse. When the outer cone beams strike the hohlraum wall, the plasma ablated off the hohlraum wall expands into the hohlraum and can alter both the outer and inner cone beam propagation, and hence the X-ray drive symmetry, especially during the final stage of the drive pulse. To quantitatively understand the wall motion, we developed a new experimental technique that visualizes the expansion and stagnation of the hohlraum wall plasma. Details of the experiment and the technique of spectrally selective x-ray imaging are discussed.

  12. Observation of hohlraum-wall motion with spectrally selective x-ray imaging at the National Ignition Facility.

    PubMed

    Izumi, N; Meezan, N B; Divol, L; Hall, G N; Barrios, M A; Jones, O; Landen, O L; Kroll, J J; Vonhof, S A; Nikroo, A; Jaquez, J; Bailey, C G; Hardy, C M; Ehrlich, R B; Town, R P J; Bradley, D K; Hinkel, D E; Moody, J D

    2016-11-01

    The high fuel capsule compression required for indirect drive inertial confinement fusion demands careful control of the X-ray drive symmetry throughout the laser pulse. When the outer cone beams strike the hohlraum wall, the plasma ablated off the hohlraum wall expands into the hohlraum and can alter both the outer and inner cone beam propagation, and hence the X-ray drive symmetry, especially during the final stage of the drive pulse. To quantitatively understand the wall motion, we developed a new experimental technique that visualizes the expansion and stagnation of the hohlraum wall plasma. Details of the experiment and the technique of spectrally selective x-ray imaging are discussed.

  13. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
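
    The composite-frame idea, an edge map of the background with a full-resolution window embedded at the target location, is easy to mock up; the edge detector, threshold, and window coordinates below are hypothetical stand-ins for the auto-cueing and rule-based priority logic.

```python
# Composite frame sketch: edge-map background + full-detail target window.
import numpy as np
from scipy import ndimage

frame = np.random.rand(240, 320)

edges = np.hypot(ndimage.sobel(frame, axis=0), ndimage.sobel(frame, axis=1))
composite = (edges > edges.mean()).astype(frame.dtype)  # cheap background edge map

r0, c0, h, w = 100, 140, 48, 64                         # hypothetical target window
composite[r0:r0 + h, c0:c0 + w] = frame[r0:r0 + h, c0:c0 + w]  # full-detail insert
```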

  14. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for the large volume and high speed of data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard correction circuitry for the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
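
    The Golomb-Rice stage is simple enough to sketch: map each signed prediction residual to an unsigned integer and emit a unary quotient followed by a k-bit remainder. The fixed k below replaces the adaptive parameter selection, and the 2-D interpolation predictor is omitted.

```python
# Golomb-Rice coding of prediction residuals (fixed k; adaptivity omitted).
import numpy as np

def rice_encode(values, k: int) -> str:
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1   # zigzag map: signed -> unsigned
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append('1' * q + '0' + format(r, '0{}b'.format(k)))
    return ''.join(bits)

residuals = np.array([0, -1, 2, 1, 0, -3, 1])  # toy prediction residuals
print(rice_encode(residuals, k=1))
```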

  15. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
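
    The paper's blockiness metric is not specified in this abstract; the following numpy sketch is a naive stand-in that measures the mean absolute luminance jump across the 8x8 block boundaries of the grid JPEG uses, which is the kind of quantity such a metric must capture.

        import numpy as np

        def blockiness(img, b=8):
            # mean absolute jump across vertical and horizontal block boundaries
            img = img.astype(float)
            cols = np.arange(b, img.shape[1], b)
            rows = np.arange(b, img.shape[0], b)
            v = np.abs(img[:, cols] - img[:, cols - 1]).mean()
            h = np.abs(img[rows, :] - img[rows - 1, :]).mean()
            return 0.5 * (v + h)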

  16. Splenorenal shunt via magnetic compression technique: a feasibility study in canine and cadaver.

    PubMed

    Xue, Fei; Li, Jianpeng; Lu, Jianwen; Zhu, Haoyang; Liu, Wenyan; Zhang, Hongke; Yang, Huan; Guo, Hongchang; Lv, Yi

    2016-12-01

    The concept of the magnetic compression technique (MCT) has been accepted by surgeons as a way to solve a variety of surgical problems. In this study, we explored the feasibility of a splenorenal shunt using MCT in canines and cadavers. The diameters of the splenic vein (SV) and the left renal vein (LRV), and the vertical interval between them, were measured in computed tomography (CT) images obtained from 30 patients with portal hypertension and in 20 adult cadavers. The magnetic devices used for the splenorenal shunt were then manufactured based on the anatomic parameters measured above. Anatomical observation showed no special structural tissues or important organs between the SV and LRV. The magnetic compression splenorenal shunt procedure was then performed in three dogs and five cadavers. Seven days later, the necrotic tissue between the two magnets was shed and the magnets were removed with the anchor wire. The feasibility of a splenorenal shunt via MCT was successfully shown in both canines and cadavers, providing theoretical support for future clinical application.

  17. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an onboard image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
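
    The paper's exact shift-and-add factorization is not given in this abstract. As a flavor of the approach, the sketch below shows a well-known 4-point integer transform kernel (the H.264-style core transform, not the paper's algorithm) that approximates the DCT using only shifts and add/subtract operations.

        def fwd4(x0, x1, x2, x3):
            # 4-point integer DCT-like kernel: 8 add/subtract ops and
            # 2 shifts per 4 samples, no multiplications
            s0, s1 = x0 + x3, x1 + x2
            d0, d1 = x0 - x3, x1 - x2
            return s0 + s1, (d0 << 1) + d1, s0 - s1, d0 - (d1 << 1)

        # a separable 2-D transform applies fwd4 to each row of a 4x4
        # block, then to each column of the result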

  18. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Naval Postgraduate School thesis, Monterey, California: Bodine, Christopher J., Psychophysical Comparisons in Image Compression Algorithms, March 1999. Reference fragments included in the record: Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September ...; ..., pp. 1623-1642, September 1990; Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master's Thesis, Naval Postgraduate School.

  19. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed form results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years, several compression techniques, both lossy and lossless, have become available for cosmological data. This study aims to evaluate and compare state-of-the-art compression techniques for unstructured particle data in both cases. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm, which achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rate, run-time/throughput, and reconstruction error are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
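
    As a hedged sketch of the kind of benchmark described (not the study's actual harness), the snippet below compresses a synthetic stand-in for one time step of particle positions with the python-blosc binding and the standard-library lzma, and reports ratio and throughput; the array size, codec choices, and presets are illustrative only.

        import lzma
        import time
        import numpy as np
        import blosc  # pip install blosc

        # synthetic stand-in for one time step of float32 particle positions
        particles = np.random.default_rng(0).standard_normal((1_000_000, 3)).astype(np.float32)
        raw = particles.tobytes()

        for name, fn in [
            ('blosc/zstd', lambda d: blosc.compress(d, typesize=4, clevel=5,
                                                    shuffle=blosc.SHUFFLE, cname='zstd')),
            ('xz/lzma', lambda d: lzma.compress(d, preset=3)),
        ]:
            t0 = time.perf_counter()
            comp = fn(raw)
            dt = time.perf_counter() - t0
            print(f'{name}: ratio {len(raw) / len(comp):.2f}, '
                  f'throughput {len(raw) / dt / 2**20:.0f} MiB/s')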

  20. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression reduces file size at the expense of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study examines which morphometric parameters of immunohistochemically stained nuclei are altered by different levels of JPEG compression and what these alterations imply for automated nuclear counts, and it develops a method for correcting the resulting discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to JPEG images at 1:3, 1:23 and 1:46 compression. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in the TIFF images and in the images at each compression level, using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could easily be incorporated in different systems of digital image analysis.
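
    A minimal sketch of the correction idea, assuming paired roundness measurements of the same objects in TIFF and JPEG images at one compression level (the numeric values below are placeholders, not the study's data): fit a per-level linear model and apply it to new measurements.

        import numpy as np

        # roundness of the same objects in uncompressed TIFFs and in JPEGs
        # at one compression level (placeholder values)
        r_tiff = np.array([0.91, 0.84, 0.77, 0.88, 0.69])
        r_jpeg = np.array([0.91, 0.83, 0.74, 0.93, 0.72])

        # fit roundness_tiff ~ a * roundness_jpeg + b for this level,
        # then correct measurements taken on compressed images
        a, b = np.polyfit(r_jpeg, r_tiff, 1)
        corrected = a * r_jpeg + b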

  1. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.

  2. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
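
    A minimal numpy sketch of the two-threshold, two-color step described in both patent records above. The combine rule below (keep a pixel black if either image marks it black) is one plausible reading of "combined", and the function names and thresholds are ours, not the patent's.

        import numpy as np

        def two_color(img, threshold):
            # pixels darker than the threshold become black (0), the rest white (255)
            return np.where(img < threshold, 0, 255).astype(np.uint8)

        def combine(scanned, filled_edges, t1, t2):
            # first pass on the scanned image, second pass on the filled-edge
            # array; the element-wise minimum keeps a pixel black if either
            # two-color image marks it black
            return np.minimum(two_color(scanned, t1), two_color(filled_edges, t2))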

  3. Accelerating cine-MR Imaging in Mouse Hearts Using Compressed Sensing

    PubMed Central

    Wech, Tobias; Lemke, Angela; Medway, Debra; Stork, Lee-Anne; Lygate, Craig A; Neubauer, Stefan; Köstler, Herbert; Schneider, Jürgen E

    2011-01-01

    Purpose To combine global cardiac function imaging with compressed sensing (CS) in order to reduce scan time, and to validate this technique in normal mouse hearts and in a murine model of chronic myocardial infarction. Materials and Methods To determine the maximally achievable acceleration factor, fully acquired cine data obtained in sham and chronically infarcted (MI) mouse hearts were 2–4-fold undersampled retrospectively, followed by CS reconstruction and blinded image segmentation. Subsequently, dedicated CS sampling schemes were implemented on a preclinical 9.4 T magnetic resonance imaging (MRI) system, and 2- and 3-fold undersampled cine data were acquired in normal mouse hearts with high temporal and spatial resolution. Results The retrospective analysis demonstrated that an undersampling factor of three is feasible without impairing the accuracy of cardiac functional parameters. Dedicated CS sampling schemes applied prospectively to normal mouse hearts yielded comparable left-ventricular functional parameters, and comparable intra- and interobserver variability, between fully sampled and 3-fold undersampled data. Conclusion This study introduces and validates an alternative means to speed up experimental cine-MRI without the need for expensive hardware. PMID:21932360
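
    The reconstruction algorithm used in the study is not detailed in this abstract. Below is a generic, minimal CS-style reconstruction loop in Python that alternates soft-thresholding with k-space data consistency; thresholding pixel magnitudes here stands in for the wavelet or finite-difference sparsifying transform a real CS reconstruction would use.

        import numpy as np

        def cs_recon(y, mask, lam=0.02, n_iter=50):
            # y: undersampled k-space (zeros at unsampled points)
            # mask: boolean array, True where k-space was actually sampled
            x = np.fft.ifft2(y)  # zero-filled starting estimate
            for _ in range(n_iter):
                # sparsity step: soft-threshold complex pixel magnitudes
                mag = np.maximum(np.abs(x), 1e-12)
                x = np.where(mag > lam, (1 - lam / mag) * x, 0)
                # data-consistency step: restore the measured samples
                k = np.fft.fft2(x)
                k[mask] = y[mask]
                x = np.fft.ifft2(k)
            return np.abs(x)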

  4. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, to sparsity regularization using a temporal principal-component (pc) basis, and to zero-filled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. In 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique yields images with less spatial and temporal blurring than the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement over the other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zero-filled). Signal intensity curves indicate similar uptake dynamics between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both the spatial and temporal domains for highly accelerated CMR perfusion, with potential utility in free-breathing 3D acquisitions. PMID:24123058
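
    The abstract does not spell out the patch regularizer, so the sketch below uses a locally low-rank shrinkage as a stand-in: each patch location is stacked across dynamics into a pixels-by-time matrix whose singular values are soft-thresholded. This captures the flavor of local spatio-temporal regularization but is not the paper's method (which regularizes across a small number of dynamics rather than all of them).

        import numpy as np

        def patch_spatiotemporal_shrink(frames, patch=8, lam=0.05):
            # frames: (time, height, width) image series
            t, h, w = frames.shape
            out = np.empty_like(frames)
            for i in range(0, h, patch):
                for j in range(0, w, patch):
                    block = frames[:, i:i + patch, j:j + patch]
                    m = block.reshape(t, -1).T          # pixels x dynamics
                    u, s, vt = np.linalg.svd(m, full_matrices=False)
                    s = np.maximum(s - lam * s[0], 0)   # relative soft threshold
                    out[:, i:i + patch, j:j + patch] = \
                        (u @ np.diag(s) @ vt).T.reshape(block.shape)
            return out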

  5. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity for achieving the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics, and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and an off-limb coronal-loop oscillation time-series observed by AIA/SDO.
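
    Both metrics are available in scikit-image. A minimal sketch, assuming the original and decompressed images are 2-D numpy arrays for one AIA channel (the function name is ours):

        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def quality_metrics(original, decompressed):
            # both metrics need the data range; EUV counts are not 8-bit
            rng = float(original.max() - original.min())
            psnr = peak_signal_noise_ratio(original, decompressed, data_range=rng)
            mssim = structural_similarity(original, decompressed, data_range=rng)
            return psnr, mssim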

  6. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  7. Toward an MRI-based method to measure non-uniform cartilage deformation: an MRI-cyclic loading apparatus system and steady-state cyclic displacement of articular cartilage under compressive loading.

    PubMed

    Neu, C P; Hull, M L

    2003-04-01

    Recent magnetic resonance imaging (MRI) techniques have shown potential for measuring non-uniform deformations throughout the volume (i.e. three-dimensional (3D) deformations) in small orthopedic tissues such as articular cartilage. However, to analyze cartilage deformation using MRI techniques, a system is required which can construct images from multiple acquisitions of MRI signals from the cartilage in both the undeformed and deformed states. The objectives of the work reported in this article were to (1) design an apparatus that could apply highly repeatable cyclic compressive loads of 400 N and operate in the bore of an MRI scanner, (2) demonstrate that the apparatus and MRI scanner can be successfully integrated to observe 3D deformations in a phantom material, and (3) use the apparatus to determine the load cycle necessary to achieve a steady-state deformation response in normal bovine articular cartilage samples using a flat-surfaced, nonporous indentor in unconfined compression. Composed of electronic and pneumatic components, the apparatus regulated pressure to a double-acting pneumatic cylinder so that (1) load-controlled compression cycles were applied to cartilage samples immersed in a saline bath, (2) loading and recovery periods within a cycle varied in time duration, and (3) load magnitude varied so that the stress applied to cartilage samples was within typical physiological ranges. In addition, the apparatus allowed gating for MR image acquisition and operation within the bore of an MRI scanner without creating image artifacts. The apparatus demonstrated high repeatability in load application, with a standard deviation of 1.8% of the mean 400 N load applied. When the apparatus was integrated with an MRI scanner programmed with appropriate pulse sequences, images of a phantom material in both the undeformed and deformed states were constructed by assembling data acquired through multiple signal acquisitions. Additionally, the number of cycles to reach a steady-state response in normal bovine articular cartilage was 49 for a total cycle duration of 5 seconds, but decreased to 33 and 27 for increased total cycle durations of 10 and 15 seconds, respectively. Once the steady-state response was achieved, 95% of all displacements were within +/- 7.42 microns of the mean displacement, indicating that the displacement response to the cyclic loads was highly repeatable. With this performance, the MRI-loading apparatus system meets the requirements for creating images of articular cartilage from which 3D deformation can be determined.

  8. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by a hyper-chaotic system. The cyclic shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
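
    A toy sketch of the two stages in numpy. A logistic map stands in for the paper's hyper-chaotic system, a fixed generator seed stands in for key material, and the dimensions and constants are illustrative only.

        import numpy as np

        def measure_and_encrypt(x, m, key=0.37):
            # x: n x n image; m < n measurements per direction gives compression
            n = x.shape[0]
            rng = np.random.default_rng(12345)            # seed as key material
            phi1 = rng.standard_normal((m, n)) / np.sqrt(m)
            phi2 = rng.standard_normal((m, n)) / np.sqrt(m)
            y = phi1 @ x @ phi2.T                         # 2-D compressive measurement
            # re-encrypt with cyclic shifts driven by a chaotic sequence
            s, out = key, np.empty_like(y)
            for i in range(y.shape[0]):
                s = 3.99 * s * (1.0 - s)                  # logistic map stand-in
                out[i] = np.roll(y[i], int(s * y.shape[1]))
            return out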

  9. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  10. Planning and executing motions for multibody systems in free-fall. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cameron, Jonathan M.

    1991-01-01

    The purpose of this research is to develop an end-to-end system that can be applied to a multibody system in free-fall to analyze its possible motions, save those motions in a database, and design a controller that can execute those motions. A goal is for the process to be highly automated and involve little human intervention. Ideally, the output of the system would be data and algorithms that could be put in ROM to control the multibody system in free-fall. The research applies to more than just robots in space. It applies to any multibody system in free-fall. Mathematical techniques from nonlinear control theory were used to study the nature of the system dynamics and its possible motions. Optimization techniques were applied to plan motions. Image compression techniques were proposed to compress the precomputed motion data for storage. A linearized controller was derived to control the system while it executes preplanned trajectories.

  11. Quantitative DLA-based compressed sensing for T1-weighted acquisitions

    NASA Astrophysics Data System (ADS)

    Svehla, Pavel; Nguyen, Khieu-Van; Li, Jing-Rebecca; Ciobanu, Luisa

    2017-08-01

    High resolution Manganese Enhanced Magnetic Resonance Imaging (MEMRI), which uses manganese as a T1 contrast agent, has great potential for functional imaging of live neuronal tissue at the single-neuron scale. However, reaching high resolutions often requires long acquisition times, which can lead to reduced image quality due to sample deterioration and hardware instability. Compressed Sensing (CS) techniques offer the opportunity to significantly reduce the imaging time. The purpose of this work is to test the feasibility of CS acquisitions based on Diffusion Limited Aggregation (DLA) sampling patterns for high resolution quantitative T1-weighted imaging. Fully encoded and DLA-CS T1-weighted images of Aplysia californica neural tissue were acquired on a 17.2 T MRI system. The MR signal corresponding to single, identified neurons was quantified for both versions of the T1-weighted images. At 50% undersampling, DLA-CS can accurately quantify signal intensities in T1-weighted acquisitions, differing from the fully encoded data by only 1.37%, with minimal impact on image spatial resolution. In addition, we compared the conventional polynomial undersampling scheme with the DLA scheme and showed that, for the data at hand, the latter performs better. Depending on the image signal-to-noise ratio, higher undersampling ratios can be used to further reduce the acquisition time in MEMRI-based functional studies of living tissues.
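
    The paper's exact pattern-generation procedure is not given in the abstract. Below is a minimal, unoptimized diffusion-limited-aggregation grower on a k-space grid, which produces the kind of connected, centre-seeded sampling mask the acronym implies; grid size, walker count, and seed are illustrative.

        import numpy as np

        def dla_mask(n=128, n_walkers=3000, max_steps=20000, seed=1):
            # grow an aggregate seeded at the k-space centre; True entries
            # are the phase-encode points to acquire
            rng = np.random.default_rng(seed)
            grid = np.zeros((n, n), dtype=bool)
            grid[n // 2, n // 2] = True
            moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
            for _ in range(n_walkers):
                r, c = rng.integers(0, n, size=2)
                for _ in range(max_steps):
                    # stick when any 4-neighbour already belongs to the cluster
                    if (grid[(r + 1) % n, c] or grid[(r - 1) % n, c]
                            or grid[r, (c + 1) % n] or grid[r, (c - 1) % n]):
                        grid[r, c] = True
                        break
                    dr, dc = moves[rng.integers(4)]
                    r, c = (r + dr) % n, (c + dc) % n
            return grid  # grid.mean() gives the resulting sampling fraction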

  12. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  13. Basilar Artery Ectasia Causing Trigeminal Neuralgia: An Evolved Technique of Transpositional Suture-Pexy.

    PubMed

    Singh, Harminder; da Silva, Harley Brito; Zeinalizadeh, Mehdi; Elarjani, Turki; Straus, David; Sekhar, Laligam N

    2018-02-01

    Microvascular decompression for patients with trigeminal neuralgia (TGN) is widely accepted as one of the modalities of treatment. The standard approach has been retrosigmoid suboccipital craniotomy with placement of a Teflon pledget to cushion the trigeminal nerve from the offending artery, or cauterize and divide the offending vein(s). However, in cases of severe compression caused by a large artery, the standard decompression technique may not be effective. To describe a unique technique of vasculopexy of the ectatic basilar artery to the tentorium in a patient with TGN attributed to a severely ectatic and tortuous basilar artery. A case series of patients who underwent this technique of vasculopexy for arterial compression is presented. The patient underwent a subtemporal transtentorial approach and the basilar artery was mobilized away from the trigeminal nerve. A suture was then passed through the wall of the basilar artery (tunica media) and secured to the tentorial edge, to keep the artery away from the nerve. The neuralgia was promptly relieved after the operation, with no complications. A postoperative magnetic resonance imaging scan showed the basilar artery to be away from the trigeminal root. In a series of 7 patients who underwent this technique of vasculopexy, no arterial complications were noted at short- or long-term follow-up. Repositioning and vasculopexy of an ectatic basilar artery for the treatment of TGN is safe and effective. This technique can also be used for other neuropathies that result from direct arterial compression.

  14. Mapping in-vivo optic nerve head strains caused by intraocular and intracranial pressures

    NASA Astrophysics Data System (ADS)

    Tran, H.; Grimm, J.; Wang, B.; Smith, M. A.; Gogola, A.; Nelson, S.; Tyler-Kabara, E.; Schuman, J.; Wollstein, G.; Sigal, I. A.

    2017-02-01

    Although it is well documented that abnormal levels of either intraocular pressure (IOP) or intracranial pressure (ICP) can lead to potentially blinding conditions, such as glaucoma and papilledema, little is known about how these pressures actually affect the eye. Even less is known about the potential interplay between their effects, namely how the level of one pressure might alter the effects of the other. Our goal was to measure in-vivo the pressure-induced stretch and compression of the lamina cribrosa due to acute changes of IOP and ICP. The lamina cribrosa is a structure within the optic nerve head, in the back of the eye. It is important because it is in the lamina cribrosa that pressure-induced deformations are believed to initiate the damage to neural tissues that leads to blindness. An eye of a rhesus macaque monkey was imaged in-vivo with optical coherence tomography while IOP and ICP were controlled through cannulas in the anterior chamber and lateral ventricle, respectively. The image volumes were analyzed with a newly developed digital image correlation technique. The effects of both pressures were highly localized, nonlinear and non-monotonic, with strong interactions. Pressure variations from baseline normal levels caused substantial stretch and compression of the neural tissues in the posterior pole, sometimes exceeding 20%. Chronic exposure to such high levels of biomechanical insult would likely lead to neural tissue damage and loss of vision. Our results demonstrate the power of digital image correlation techniques based on non-invasive imaging technologies to help understand how pressures induce biomechanical insults and lead to vision problems.
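
    A crude sketch of the displacement-field idea behind digital image correlation, using scikit-image's phase_cross_correlation to estimate a rigid shift per patch. Real DIC pipelines (including the one in the study) use subset matching with subpixel shape functions; this stand-in only recovers a coarse translation field.

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def patch_displacements(ref, deformed, patch=32):
            # estimate one (dy, dx) shift per non-overlapping patch
            h, w = ref.shape
            field = np.zeros((h // patch, w // patch, 2))
            for i in range(h // patch):
                for j in range(w // patch):
                    a = ref[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
                    b = deformed[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
                    shift, _, _ = phase_cross_correlation(a, b, upsample_factor=10)
                    field[i, j] = shift
            return field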

  15. The pore characteristics of geopolymer foam concrete and their impact on the compressive strength and modulus

    NASA Astrophysics Data System (ADS)

    Zhang, Zuhua; Wang, Hao

    2016-08-01

    The pore characteristics of geopolymer foam concretes (GFCs) manufactured in the laboratory with 0-16% foam additions were examined using image analysis (IA) and vacuum water saturation techniques. The pore size distribution, pore shape and porosity were obtained. The IA method provides a suitable approach for obtaining information on large pores, which are more important in affecting the compressive strength of GFC. After examining the applicability of existing models for predicting the compressive strength of foam concrete, a modified Ryshkewitch model is proposed for GFC, in which only the porosity contributed by pores over a critical diameter (>100 μm) is considered. This “critical void model” is shown to have very satisfactory predictive capability over the studied range of porosity. A compression-modulus model for Portland cement concrete is recommended for predicting the compressive modulus of elasticity of GFC. This study confirms that GFCs have pore structures and mechanical behavior similar to those of Portland cement foam concretes and can be used as an alternative in the industry for construction and insulation purposes.
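
    The Ryshkewitch model has the exponential form sigma = sigma0 * exp(-k * p). A minimal sketch of fitting it with scipy, where porosity is counted only from pores above the critical diameter as the abstract describes; the numeric values are illustrative placeholders, not the paper's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def ryshkewitch(p, s0, k):
            # strength decays exponentially with porosity
            return s0 * np.exp(-k * p)

        # porosity from pores above the critical diameter, and the matching
        # compressive strengths in MPa (placeholder values)
        p_crit = np.array([0.05, 0.12, 0.21, 0.30, 0.38])
        strength = np.array([38.0, 24.5, 14.8, 9.1, 5.6])

        (s0, k), _ = curve_fit(ryshkewitch, p_crit, strength, p0=(40.0, 5.0))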

  16. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
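
    The three methods themselves are not described in this abstract. The sketch below illustrates only the underlying set-redundancy idea: store one reference image (the pixel-wise median of the set) plus losslessly compressed residuals, which are typically far more compressible than the raw images. It is not MARS, MAZE, or MaxGBA.

        import zlib
        import numpy as np

        def compress_set(images):
            # int16 arithmetic keeps the round trip exact for 8/12-bit data
            stack = np.stack(images).astype(np.int16)
            ref = np.median(stack, axis=0).astype(np.int16)
            residuals = [zlib.compress((img - ref).tobytes()) for img in stack]
            return ref, residuals

        def restore(ref, residual, shape):
            delta = np.frombuffer(zlib.decompress(residual), dtype=np.int16)
            return ref + delta.reshape(shape)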

  17. Low Complexity Compression and Speed Enhancement for Optical Scanning Holography

    PubMed Central

    Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.

    2016-01-01

    In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into two major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as across OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms acquired with two different OSH systems, demonstrating a compression ratio of over two orders of magnitude while preserving favorable fidelity in the reconstructed images. PMID:27708410
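
    A minimal sketch of one-bit delta modulation with a dynamic step size: the step grows after consecutive equal bits (a sign of slope overload) and shrinks otherwise. The adaptation constants are ours for illustration, not the paper's scheme.

        def delta_modulate(samples, step0=1.0, grow=1.5, shrink=0.66):
            # returns one bit per sample; the decoder can rebuild the staircase
            # estimate by replaying the same step-size updates
            bits, est, step, prev = [], 0.0, step0, None
            for s in samples:
                b = 1 if s >= est else 0
                est += step if b else -step
                step = step * grow if b == prev else step * shrink
                step = max(step, 1e-6)
                prev = b
                bits.append(b)
            return bits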

  18. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous-wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that the combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR closely match the ground truth even when a small number of projections is used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  19. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratio and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this paired-film observer study was designed to test whether observers could detect the induced errors. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.
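
    A short numpy sketch of the characterization step described above: the radially averaged power spectrum of the difference between the original and decompressed images (the function name and binning choice are ours).

        import numpy as np

        def radial_power_spectrum(original, reconstructed):
            diff = original.astype(float) - reconstructed.astype(float)
            p = np.abs(np.fft.fftshift(np.fft.fft2(diff))) ** 2
            h, w = p.shape
            y, x = np.indices((h, w))
            r = np.hypot(y - h // 2, x - w // 2).astype(int)
            sums = np.bincount(r.ravel(), weights=p.ravel())
            counts = np.bincount(r.ravel())
            return sums / np.maximum(counts, 1)  # mean power per radial bin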

  20. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor and is adapted to better address the constraints of onboard scenarios. In this paper, we present a review of the state of the art in this field and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
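
    The Recommended Standard's predictor is specified in the CCSDS documents, not reproduced here. As a rough illustration of the general idea of sample-adaptive linear prediction, the sketch below uses a simplified sign-sign LMS weight update and returns residuals of the kind that entropy-code cheaply (for example with Golomb-Rice codes); it is not the standard's algorithm.

        import numpy as np

        def prediction_residuals(samples, order=3, mu=0.05):
            # adapt a small linear predictor as the scan proceeds; residuals
            # shrink as the weights converge
            w = np.zeros(order)
            res = []
            for i in range(order, len(samples)):
                ctx = np.asarray(samples[i - order:i], dtype=float)
                e = samples[i] - w @ ctx
                w += mu * np.sign(e) * np.sign(ctx)  # cheap, multiplier-free update
                res.append(e)
            return res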
