Sample records for image sequence compression

  1. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size spatial prediction to predict blocks directly in the spatial domain, and applies a lossless Hadamard transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of the image and improved use of its local structure. The results indicate that the proposed algorithm compresses medical images efficiently and produces a better peak signal-to-noise ratio (PSNR) than other near-lossless methods under the same pre-defined distortion.
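
    A minimal sketch of the general idea described above (block-based spatial prediction followed by bounded-error residual quantization), written in NumPy. The fixed 8x8 block size, the mean-of-causal-neighbours predictor, and the uniform quantizer are assumptions for illustration; the lossless Hadamard transform, adaptive block sizing, and entropy coding of the actual algorithm are omitted.

      import numpy as np

      def near_lossless_encode(img, block=8, delta=2):
          """Predict each block from the pixel row above and column to its left,
          then quantize the residual so the reconstruction error is at most delta."""
          img = img.astype(np.int32)
          h, w = img.shape
          recon = np.zeros_like(img)            # decoder-side reconstruction
          q_residuals = np.zeros_like(img)      # what would actually be entropy coded
          for y in range(0, h, block):
              for x in range(0, w, block):
                  by, bx = min(block, h - y), min(block, w - x)
                  # causal neighbours already reconstructed by the decoder
                  top = recon[y - 1, x:x + bx] if y > 0 else None
                  left = recon[y:y + by, x - 1] if x > 0 else None
                  if top is None and left is None:
                      pred = 128
                  else:
                      parts = [p for p in (top, left) if p is not None]
                      pred = int(np.mean(np.concatenate(parts)))
                  resid = img[y:y + by, x:x + bx] - pred
                  q = np.round(resid / (2 * delta + 1)).astype(np.int32)   # uniform quantizer
                  q_residuals[y:y + by, x:x + bx] = q
                  recon[y:y + by, x:x + bx] = pred + q * (2 * delta + 1)
          return q_residuals, recon

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(64, 64))
      q, recon = near_lossless_encode(img, block=8, delta=2)
      assert np.abs(recon - img).max() <= 2    # near-lossless: error bounded by delta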

  2. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number as much as possible and to reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanisms of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway, combined with the separate processing of contours and textures, has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second-generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics, and scene analysis.

  3. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method with compressive sensing (CS) and presents a recognition scheme (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, the linear projections of the original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification based on original image processing into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.

  4. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  5. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
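
    A toy sketch of the first step described above, mapping a genomic sequence onto a two-dimensional binary image. The 2-bit-per-base code and the row width are assumptions for illustration; CoGI's rectangular partition coder and reference-selection steps are not reproduced.

      import numpy as np

      # Each base becomes 2 bits; the bit stream is folded into rows of a bitmap.
      BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

      def genome_to_bitmap(seq, width=64):
          bits = [b for base in seq for b in BASE_BITS.get(base, (0, 0))]
          rows = -(-len(bits) // width)                 # ceiling division
          bits += [0] * (rows * width - len(bits))      # pad the last row
          return np.array(bits, dtype=np.uint8).reshape(rows, width)

      bitmap = genome_to_bitmap("ACGTACGTTTGGCCAA" * 100)
      print(bitmap.shape, bitmap.dtype)   # the binary image a partition coder would compress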

  6. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
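
    A rough sketch of the clustering/feature-map idea, assuming SciPy's kmeans2 is available. It uses global rather than spatially local clustering and omits the source encoding of the feature map, so it illustrates the concept rather than the CCA itself.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      # Toy multispectral image: 32x32 pixels with 4 spectral bands.
      rng = np.random.default_rng(1)
      cube = rng.random((32, 32, 4)).astype(np.float32)
      pixels = cube.reshape(-1, 4)

      # Cluster the spectral vectors; keep the cluster centres ("features") plus a
      # per-pixel label image (the "feature map") that a source coder would then encode.
      centres, labels = kmeans2(pixels, k=16, minit="++")
      feature_map = labels.reshape(32, 32).astype(np.uint8)

      # Reconstruction from the features alone, to gauge the information retained.
      approx = centres[labels].reshape(cube.shape)
      print(feature_map.shape, centres.shape, float(np.mean((approx - cube) ** 2)))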

  7. Compressed Sensing SEMAC: 8-fold Accelerated High Resolution Metal Artifact Reduction MRI of Cobalt-Chromium Knee Arthroplasty Implants.

    PubMed

    Fritz, Jan; Ahlawat, Shivani; Demehri, Shadpour; Thawait, Gaurav K; Raithel, Esther; Gilson, Wesley D; Nittka, Mathias

    2016-10-01

    The aim of this study was to prospectively test the hypothesis that a compressed sensing-based slice encoding for metal artifact correction (SEMAC) turbo spin echo (TSE) pulse sequence prototype facilitates high-resolution metal artifact reduction magnetic resonance imaging (MRI) of cobalt-chromium knee arthroplasty implants within acquisition times of less than 5 minutes, thereby yielding better image quality than high-bandwidth (BW) TSE of similar length and similar image quality to lengthier SEMAC standard-of-reference pulse sequences. This prospective study was approved by our institutional review board. Twenty asymptomatic subjects (12 men, 8 women; mean age, 56 years; age range, 44-82 years) with total knee arthroplasty implants underwent MRI of the knee using a commercially available, clinical 1.5 T MRI system. Two compressed sensing-accelerated SEMAC prototype pulse sequences with 8-fold undersampling and acquisition times of approximately 5 minutes each were compared with commercially available high-BW and SEMAC pulse sequences with acquisition times of approximately 5 minutes and 11 minutes, respectively. For each pulse sequence type, sagittal intermediate-weighted (TR, 3750-4120 milliseconds; TE, 26-28 milliseconds; voxel size, 0.5 × 0.5 × 3 mm) and short tau inversion recovery (TR, 4010 milliseconds; TE, 5.2-7.5 milliseconds; voxel size, 0.8 × 0.8 × 4 mm) images were acquired. Outcome variables included image quality, display of the bone-implant interfaces and pertinent knee structures, artifact size, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Statistical analysis included Friedman, repeated-measures analysis of variance, and Cohen weighted kappa tests. Bonferroni-corrected P values of 0.005 and less were considered statistically significant. Image quality, bone-implant interfaces, anatomic structures, artifact size, SNR, and CNR parameters were statistically similar between the compressed sensing-accelerated SEMAC prototype and commercial SEMAC pulse sequences. There was mild blur on images of both SEMAC sequences when compared with high-BW images (P < 0.001), which however did not impair the assessment of knee structures. Metal artifact reduction and visibility of central knee structures and bone-implant interfaces were good to very good and significantly better on both types of SEMAC than on high-BW images (P < 0.004). All 3 pulse sequences showed peripheral structures similarly well. The implant artifact size was 46% to 51% larger on high-BW images when compared with both types of SEMAC images (P < 0.0001). Signal-to-noise ratios and CNRs of fat tissue, tendon tissue, muscle tissue, and fluid were statistically similar on intermediate-weighted MR images of all 3 pulse sequence types. On short tau inversion recovery images, the SNRs of tendon tissue and the CNRs of fat and fluid, fluid and muscle, as well as fluid and tendon were significantly higher on SEMAC and compressed sensing SEMAC images (P < 0.005, respectively). We accept the hypothesis that prospective compressed sensing acceleration of SEMAC is feasible for high-quality metal artifact reduction MRI of cobalt-chromium knee arthroplasty implants in less than 5 minutes and yields better quality than high-BW TSE and similarly high quality to lengthier SEMAC pulse sequences.

  8. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene as well as static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize the overlap between them. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  9. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt an image signal and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.
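
    An illustrative sketch of the compress-then-encrypt pipeline, with a logistic map standing in for the hyper-chaotic system and a key-stream XOR standing in for the discrete fractional random transform. It assumes SciPy's dctn/idctn and is not the paper's algorithm.

      import numpy as np
      from scipy.fft import dctn, idctn

      def logistic_sequence(n, x0=0.3456, mu=3.99):
          """Simple chaotic key stream (the paper uses a high-dimensional hyper-chaotic system)."""
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = mu * x * (1.0 - x)
              xs[i] = x
          return xs

      rng = np.random.default_rng(3)
      img = rng.random((64, 64))

      # 1) Compression by "spectrum cutting": keep only the low-frequency corner of the DCT.
      spec = dctn(img, norm="ortho")
      keep = 32
      cut = spec[:keep, :keep]

      # 2) Encryption: mask the quantized spectrum with a chaotic key stream
      #    (a stand-in for the discrete fractional random transform stage).
      q = np.round(cut * 100).astype(np.int64)
      key = (logistic_sequence(q.size) * 1e6).astype(np.int64).reshape(q.shape)
      cipher = q ^ key

      # Decryption followed by decompression.
      deq = (cipher ^ key).astype(np.float64) / 100
      restored = np.zeros_like(spec)
      restored[:keep, :keep] = deq
      approx = idctn(restored, norm="ortho")
      print(float(np.mean((approx - img) ** 2)))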

  10. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    Abstract text is garbled in the source record. Legible fragments refer to the Ziv-Lempel-Welch (LZW) compression algorithm [4][5] used for experiments and software development, the reference J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding," the Collage Theorem, and a deterministic algorithm for computing IFS attractors for fast image compression with the minimum number of maps.

  11. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.
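
    A sketch of the tile-by-tile DWT workflow using PyWavelets and an arbitrary db4 wavelet (both assumptions; the patent does not prescribe a library or filter). Each tile is transformed independently here, so the result is not the seamless whole-image DWT the patented method computes by carrying state across tile boundaries; quantization and the two-level memory management are also omitted.

      import numpy as np
      import pywt

      def tiled_dwt(image, tile=128, wavelet="db4", level=3):
          """Transform a large image tile by tile, yielding coefficients that could be
          quantized and moved to secondary storage as soon as each tile is finished."""
          h, w = image.shape
          for y in range(0, h, tile):
              for x in range(0, w, tile):
                  block = image[y:y + tile, x:x + tile]
                  yield (y, x), pywt.wavedec2(block, wavelet=wavelet, level=level)

      rng = np.random.default_rng(4)
      big = rng.random((512, 512)).astype(np.float32)

      for (y, x), coeffs in tiled_dwt(big):
          # The patent periodically compresses coefficients and moves them to secondary
          # memory; here we only verify that each tile inverts correctly.
          back = pywt.waverec2(coeffs, wavelet="db4")
          tile = big[y:y + 128, x:x + 128]
          assert np.allclose(back[:tile.shape[0], :tile.shape[1]], tile, atol=1e-4)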

  12. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.

  13. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  14. Observation sequences and onboard data processing of Planet-C

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.

    Planet-C, or VCO (Venus Climate Orbiter), will carry five cameras in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera). During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, three groups of scientific observations will be carried out: (i) image acquisition by four cameras (IR1, IR2, UVI, LIR) for 20 min in every 2 hrs; (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific output of the VCO program, but they must be reconciled with command, telemetry-downlink, thermal, and power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality are of significant scientific importance to the VCO program. Images from the four cameras (IR1, IR2, and UVI at 1K x 1K; LIR at 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because it (a) has no block noise, (b) is efficient, (c) supports both reversible and irreversible compression, (d) is patent/royalty free, and (e) is already implemented as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 to 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the four cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space...

  15. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
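
    A crude sketch of the output representation described above (a 1-bit edge map of the background plus full-resolution target windows), assuming SciPy's Sobel filter and externally supplied target locations; the auto-cueing, feature-extraction, and rule-based priority control of the actual system are not modelled.

      import numpy as np
      from scipy import ndimage

      def intelligent_bandwidth_sketch(frame, targets, win=32, edge_thresh=0.3):
          """Send a 1-bit edge map of the background plus full-resolution windows
          around priority targets (a crude stand-in for auto-cueing results)."""
          gx = ndimage.sobel(frame, axis=0)
          gy = ndimage.sobel(frame, axis=1)
          mag = np.hypot(gx, gy)
          edge_map = (mag > edge_thresh * mag.max()).astype(np.uint8)   # 1 bpp background
          windows = []
          for (cy, cx) in targets:                                      # high-priority chips
              y0, x0 = max(cy - win // 2, 0), max(cx - win // 2, 0)
              windows.append(((y0, x0), frame[y0:y0 + win, x0:x0 + win].copy()))
          return edge_map, windows

      rng = np.random.default_rng(5)
      frame = rng.random((240, 320))
      edges, chips = intelligent_bandwidth_sketch(frame, targets=[(120, 160), (60, 60)])
      raw_bits = frame.size * 8
      sent_bits = edges.size + sum(c.size * 8 for _, c in chips)
      print("approx. compression ratio:", raw_bits / sent_bits)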

  16. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

    This paper proposes a new stereo sequence transmission technique that uses digital watermarking to maintain compatibility with conventional 2D digital TV. Stereo image sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images. Because the various 3D image compression methods differ, it is difficult for users with conventional digital TV to watch the transmitted 3D image sequence. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of the other stereo image in the three channels of the reference image. The main goal of the presented technique is to let people who have a conventional DTV watch stereo movies at the same time. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide right images in left images effectively, bit changes in the three color channels and disparity estimation are performed according to the value of the estimated disparity. The proposed method assigns the displacement information of the right image to each channel of YCbCr in the DCT domain. Each LSB on the YCbCr channels is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
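
    An illustrative sketch of hiding a disparity map in the three channels of a reference image. It embeds bits in spatial-domain LSBs with an assumed 3/3/2 bit split, whereas the paper operates on DCT coefficients of the YCbCr channels, so this is only a simplified stand-in.

      import numpy as np

      def embed_disparity_lsb(reference, disparity):
          """Hide an 8-bit disparity map in the LSBs of a 3-channel reference image
          (bits 7..5 -> channel 0, bits 4..2 -> channel 1, bits 1..0 -> channel 2)."""
          ref = reference.copy()
          chunks = [(disparity >> 5) & 0b111, (disparity >> 2) & 0b111, disparity & 0b11]
          widths = [3, 3, 2]
          for c, (bits, w) in enumerate(zip(chunks, widths)):
              mask = np.uint8((0xFF << w) & 0xFF)       # clear the w least-significant bits
              ref[..., c] = (ref[..., c] & mask) | bits
          return ref

      def extract_disparity_lsb(stego):
          d0 = stego[..., 0] & 0b111
          d1 = stego[..., 1] & 0b111
          d2 = stego[..., 2] & 0b11
          return (d0.astype(np.uint8) << 5) | (d1 << 2) | d2

      rng = np.random.default_rng(6)
      left = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
      disp = rng.integers(0, 256, size=(48, 64), dtype=np.uint8)
      stego = embed_disparity_lsb(left, disp)
      assert np.array_equal(extract_disparity_lsb(stego), disp)
      assert np.abs(stego.astype(int) - left.astype(int)).max() < 8   # small visual change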

  17. 3D Compressed Sensing for Highly Accelerated Hyperpolarized 13C MRSI With In Vivo Applications to Transgenic Mouse Models of Cancer

    PubMed Central

    Hu, Simon; Lustig, Michael; Balakrishnan, Asha; Larson, Peder E. Z.; Bok, Robert; Kurhanewicz, John; Nelson, Sarah J.; Goga, Andrei; Pauly, John M.; Vigneron, Daniel B.

    2010-01-01

    High polarization of nuclear spins in liquid state through hyperpolarized technology utilizing dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at a high signal-to-noise ratio. Acquisition time limitations due to T1 decay of the hyperpolarized signal require accelerated imaging methods, such as compressed sensing, for optimal speed and spatial coverage. In this paper, the design and testing of a new echo-planar 13C three-dimensional magnetic resonance spectroscopic imaging (MRSI) compressed sensing sequence is presented. The sequence provides up to a factor of 7.53 in acceleration with minimal reconstruction artifacts. The key to the design is employing x and y gradient blips during a fly-back readout to pseudorandomly undersample kf-kx-ky space. The design was validated in simulations and phantom experiments where the limits of undersampling and the effects of noise on the compressed sensing nonlinear reconstruction were tested. Finally, this new pulse sequence was applied in vivo in preclinical studies involving transgenic prostate cancer and transgenic liver cancer murine models to obtain much higher spatial and temporal resolution than possible with conventional echo-planar spectroscopic imaging methods. PMID:20017160
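
    A toy compressed-sensing reconstruction in the spirit of the above: pseudorandom undersampling of 2D k-space followed by iterative soft-thresholding, assuming the object itself is sparse. The echo-planar readout, the gradient blips, the kf-kx-ky sampling, and the paper's actual nonlinear reconstruction are not reproduced.

      import numpy as np

      def ista_cs_recon(kspace, mask, lam=0.05, n_iter=200):
          """Recover an image from undersampled k-space by iterative soft-thresholding,
          assuming the object itself is sparse (min_x 0.5||M F x - y||^2 + lam ||x||_1)."""
          x = np.zeros(mask.shape)
          for _ in range(n_iter):
              resid = mask * np.fft.fft2(x, norm="ortho") - kspace
              x = x - np.real(np.fft.ifft2(resid, norm="ortho"))   # gradient step (F unitary)
              x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)    # soft threshold
          return x

      rng = np.random.default_rng(7)
      truth = np.zeros((64, 64))
      truth[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0   # sparse "spectra"

      mask = (rng.random((64, 64)) < 0.3).astype(float)               # ~3x undersampling
      kspace = mask * np.fft.fft2(truth, norm="ortho")
      recon = ista_cs_recon(kspace, mask)
      print("max abs error:", float(np.abs(recon - truth).max()))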

  18. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plaintext. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. The joint operation of compression and encryption in a single process is performed. The security test results indicate that the proposed method has high security and a good compression effect.

  19. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of a camera operation allows e.g. selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality and the fact that these kind of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  20. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
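
    A simplified sketch of the L+S baseline that LASSI extends: a fully sampled space-time (Casorati) matrix is split into a low-rank and a sparse component by alternating singular-value and soft thresholding. The penalty parameters are arbitrary, and the adaptive spatiotemporal dictionary and undersampled-measurement formulation of LASSI are not included.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def svt(x, t):
          """Singular-value thresholding."""
          u, s, vt = np.linalg.svd(x, full_matrices=False)
          return u @ np.diag(soft(s, t)) @ vt

      def l_plus_s(M, lam_l=1.0, lam_s=0.05, n_iter=100):
          """Alternately shrink the low-rank and sparse parts of M = L + S."""
          L = np.zeros_like(M)
          S = np.zeros_like(M)
          for _ in range(n_iter):
              L = svt(M - S, lam_l)
              S = soft(M - L, lam_s)
          return L, S

      # Toy dynamic sequence: a static background (rank 1 over time) plus sparse "motion".
      rng = np.random.default_rng(8)
      T, npix = 30, 400
      background = np.outer(rng.random(npix), np.ones(T))
      dynamic = np.zeros((npix, T))
      dynamic[rng.integers(0, npix, 50), rng.integers(0, T, 50)] = 2.0
      M = background + dynamic

      L, S = l_plus_s(M)
      print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
            "nnz(S) =", int(np.count_nonzero(np.abs(S) > 0.5)))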

  1. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  2. Rapid acquisition of magnetic resonance imaging of the shoulder using three-dimensional fast spin echo sequence with compressed sensing.

    PubMed

    Lee, Seung Hyun; Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2017-10-01

    To evaluate the feasibility of 3D fast spin-echo (FSE) imaging with compressed sensing (CS) for the assessment of shoulder. Twenty-nine patients who underwent shoulder MRI including image sets of axial 3D-FSE sequence without CS and with CS, using an acceleration factor of 1.5, were included. Quantitative assessment was performed by calculating the root mean square error (RMSE) and structural similarity index (SSIM). Two musculoskeletal radiologists compared image quality of 3D-FSE sequences without CS and with CS, and scored the qualitative agreement between sequences, using a five-point scale. Diagnostic agreement for pathologic shoulder lesions between the two sequences was evaluated. The acquisition time of 3D-FSE MRI was reduced using CS (3min 23s vs. 2min 22s). Quantitative evaluations showed a significant correlation between the two sequences (r=0.872-0.993, p<0.05) and SSIM was in an acceptable range (0.940-0.993; mean±standard deviation, 0.968±0.018). Qualitative image quality showed good to excellent agreement between 3D-FSE images without CS and with CS. Diagnostic agreement for pathologic shoulder lesions between the two sequences was very good (κ=0.915-1). The 3D-FSE sequence with CS is feasible in evaluating the shoulder joint with reduced scan time compared to 3D-FSE without CS. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group's (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme that labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized to any dynamic image sequence application sensitive to block artifacts.

  4. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.

  5. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  6. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.

  7. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  8. Optical image encryption using chaos-based compressed sensing and phase-shifting interference in fractional wavelet domain

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua

    2018-02-01

    In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve the encryption efficiency, the volume of data of the original image is reduced by compressed sensing. The compacted image is then encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of compressed sensing and as the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies key storage and transmission, but also enhances the nonlinearity of our cryptosystem to resist some common attacks. Further, holograms, obtained by two-step-only quadrature phase-shifting interference, make our cryptosystem immune to noise and occlusion attacks. Compression and encryption are thus achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.

  9. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down spread of digital and tele-echocardiography. We validated echocardiographic video high-grade compression with the new Motion Pictures Expert Groups (MPEG)-4 algorithms with a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12 to 83 MB to 0.03 to 2.3 MB (1:1051-1:26 reduction ratios). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  10. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
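
    A toy sketch of the temporally compressive measurement model: each pixel applies random temporal shutter codes so that T frames are compressed into K coded frames, and a pixel-wise least-squares (minimum-norm) inversion stands in for the reconstruction. The aperture disparities, shutter response, and sparsity-regularized recovery of the paper are ignored.

      import numpy as np

      rng = np.random.default_rng(9)
      T, K, H, W = 32, 15, 16, 16            # 32 true frames compressed into 15 coded frames

      video = rng.random((T, H, W))          # ground-truth transient scene (toy data)
      codes = rng.integers(0, 2, size=(K, T, H, W)).astype(float)   # per-pixel shutter codes

      # Forward model: each coded frame is the shutter-weighted sum over time, per pixel.
      measurements = np.einsum("ktij,tij->kij", codes, video)

      # Pixel-wise reconstruction: solve the K x T system in the least-squares sense.
      # The system is underdetermined, so the error stays large without the kind of
      # regularization and temporal-response modelling used in the paper.
      recon = np.zeros_like(video)
      for i in range(H):
          for j in range(W):
              A = codes[:, :, i, j]                       # (K, T)
              y = measurements[:, i, j]                   # (K,)
              recon[:, i, j], *_ = np.linalg.lstsq(A, y, rcond=None)

      print("relative error (min-norm, no regularizer):",
            float(np.linalg.norm(recon - video) / np.linalg.norm(video)))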

  11. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  12. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  13. Time-Reversal Based Range Extension technique for Ultra-wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2010-01-28

    Excerpt text is garbled in the source record. Legible fragments note that the approach has to rely on a unipolar sequence, whose autocorrelation is typically less sharp than that of a bipolar sequence, mention optical orthogonal codes (OOC) and robust noncoherent receivers for detection in multipath environments, and cite M. Duarte, D. Baron, S. Sarvotham, K. Kelly, and R. Baraniuk, "A New Compressive Imaging Camera Architecture using Optical-Domain Compression."

  14. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

    In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.

  15. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than the alternative, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these constraints is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.

  16. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
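
    A compact implementation of basic block truncation coding (BTC), one of the spatial-domain techniques compared above; the exact BTC variant and parameters used in the paper are not specified here, so this is a generic textbook version.

      import numpy as np

      def btc_encode_decode(img, block=4):
          """Basic block truncation coding: per block keep the mean, the standard
          deviation, and a 1-bit plane; reconstruct with two levels that preserve
          the block mean and variance (about 2 bits/pixel for 4x4 blocks of 8-bit data)."""
          img = img.astype(np.float64)
          out = np.empty_like(img)
          h, w = img.shape
          for y in range(0, h, block):
              for x in range(0, w, block):
                  b = img[y:y + block, x:x + block]
                  m, s = b.mean(), b.std()
                  plane = b >= m
                  q, n = plane.sum(), plane.size
                  if q in (0, n) or s == 0:
                      out[y:y + block, x:x + block] = m
                      continue
                  lo = m - s * np.sqrt(q / (n - q))
                  hi = m + s * np.sqrt((n - q) / q)
                  out[y:y + block, x:x + block] = np.where(plane, hi, lo)
          return out

      rng = np.random.default_rng(10)
      img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
      dec = btc_encode_decode(img)
      print("PSNR:", 10 * np.log10(255 ** 2 / np.mean((dec - img) ** 2)))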

  17. MRI of the lumbar spine: comparison of 3D isotropic turbo spin-echo SPACE sequence versus conventional 2D sequences at 3.0 T.

    PubMed

    Lee, Sungwon; Jee, Won-Hee; Jung, Joon-Yong; Lee, So-Yeon; Ryu, Kyeung-Sik; Ha, Kee-Yong

    2015-02-01

    Three-dimensional (3D) fast spin-echo sequence with variable flip-angle refocusing pulse allows retrospective alignments of magnetic resonance imaging (MRI) in any desired plane. To compare isotropic 3D T2-weighted (T2W) turbo spin-echo sequence (TSE-SPACE) with standard two-dimensional (2D) T2W TSE imaging for evaluating lumbar spine pathology at 3.0 T MRI. Forty-two patients who had spine surgery for disk herniation and had 3.0 T spine MRI were included in this study. In addition to standard 2D T2W TSE imaging, sagittal 3D T2W TSE-SPACE was obtained to produce multiplanar (MPR) images. Each set of MR images from 3D T2W TSE-SPACE and 2D T2W TSE was independently scored for the degree of lumbar neural foraminal stenosis, central spinal stenosis, and nerve compression by two reviewers. These scores were compared with operative findings and the sensitivities were evaluated by the McNemar test. Inter-observer agreement and the correlation with symptom laterality were assessed with kappa statistics. The 3D T2W TSE-SPACE and 2D T2W TSE had similar sensitivity in detecting foraminal stenosis (78.9% versus 78.9% in 32 foramen levels), spinal stenosis (100% versus 100% in 42 spinal levels), and nerve compression (92.9% versus 81.8% in 59 spinal nerves). The inter-observer agreements (κ = 0.849 vs. 0.451 for foraminal stenosis, κ = 0.809 vs. 0.503 for spinal stenosis, and κ = 0.681 vs. 0.429 for nerve compression) and symptoms correlation (κ = 0.449 vs. κ = 0.242) were better in 3D TSE-SPACE compared to 2D TSE. 3D TSE-SPACE with oblique coronal MPR images demonstrated better inter-observer agreements compared to 3D TSE-SPACE without oblique coronal MPR images (κ = 0.930 vs. κ = 0.681). Isotropic 3D T2W TSE-SPACE at 3.0 T was comparable to 2D T2W TSE for detecting foraminal stenosis, central spinal stenosis, and nerve compression with better inter-observer agreements and symptom correlation. © The Foundation Acta Radiologica 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  18. Four-dimensional wavelet compression of arbitrarily sized echocardiographic data.

    PubMed

    Zeng, Li; Jansen, Christian P; Marsch, Stephan; Unser, Michael; Hunziker, Patrick R

    2002-09-01

    Wavelet-based methods have become most popular for the compression of two-dimensional medical images and sequences. The standard implementations consider data sizes that are powers of two. There is also a large body of literature treating issues such as the choice of the "optimal" wavelets and the performance comparison of competing algorithms. With the advent of telemedicine, there is a strong incentive to extend these techniques to higher dimensional data such as dynamic three-dimensional (3-D) echocardiography [four-dimensional (4-D) datasets]. One of the practical difficulties is that the size of this data is often not a multiple of a power of two, which can lead to increased computational complexity and impaired compression power. Our contribution in this paper is to present a genuine 4-D extension of the well-known zerotree algorithm for arbitrarily sized data. The key component of our method is a one-dimensional wavelet algorithm that can handle arbitrarily sized input signals. The method uses a pair of symmetric/antisymmetric wavelets (10/6) together with some appropriate midpoint symmetry boundary conditions that reduce border artifacts. The zerotree structure is also adapted so that it can accommodate noneven data splitting. We have applied our method to the compression of real 3-D dynamic sequences from clinical cardiac ultrasound examinations. Our new algorithm compares very favorably with other more ad hoc adaptations (image extension and tiling) of the standard powers-of-two methods, in terms of both compression performance and computational cost. It is vastly superior to slice-by-slice wavelet encoding. This was seen not only in numerical image quality parameters but also in expert ratings, where significant improvement using the new approach could be documented. Our validation experiments show that one can safely compress 4-D data sets at ratios of 128:1 without compromising the diagnostic value of the images. We also display some more extreme compression results at ratios of 2000:1 where some key diagnostically relevant features are preserved.
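
    A small illustration of the boundary-handling issue discussed above, using PyWavelets (an assumption) to decompose an arbitrarily sized, odd-length signal with symmetric boundary extension; the authors' 10/6 symmetric/antisymmetric filter pair, midpoint symmetry conditions, and 4-D zerotree coder are not reproduced.

      import numpy as np
      import pywt

      # An arbitrarily sized (odd-length) 1D signal, e.g. one scan line of a 4D dataset.
      rng = np.random.default_rng(11)
      signal = rng.random(125)

      # Symmetric boundary extension avoids the power-of-two restriction and reduces
      # border artifacts; coefficient lengths follow roughly ceil(n/2) at each level.
      coeffs = pywt.wavedec(signal, wavelet="bior2.2", mode="symmetric", level=3)
      print([len(c) for c in coeffs])

      # Reconstruction up to floating-point error; trim any padding sample appended
      # for odd-length inputs before comparing.
      recon = pywt.waverec(coeffs, wavelet="bior2.2", mode="symmetric")[: len(signal)]
      print(float(np.max(np.abs(recon - signal))))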

  19. Acoustic Response of Underwater Munitions near a Sediment Interface: Measurement Model Comparisons and Classification Schemes

    DTIC Science & Technology

    2015-04-23

    [Report front-matter excerpt: list-of-figures entries (Figure 4, pulse-compressed baseband signals for sequence 40 from TREX13; Figure 5, SAS image for sequence 40 from TREX13; Figure 14, FE simulations for aluminum and steel replicas of a 100-mm UXO; Figure 15, FE meshes for two targets) and acronym definitions (PCB: pulse-compressed and baseband; PC SWAT: Personal Computer Shallow Water Acoustic Toolset; PondEx09/PondEx10: Pond Experiments 2009/2010).]

  20. High temporal resolution dynamic contrast-enhanced MRI using compressed sensing-combined sequence in quantitative renal perfusion measurement.

    PubMed

    Chen, Bin; Zhao, Kai; Li, Bo; Cai, Wenchao; Wang, Xiaoying; Zhang, Jue; Fang, Jing

    2015-10-01

    To demonstrate the feasibility of the improved temporal resolution by using compressed sensing (CS) combined imaging sequence in dynamic contrast-enhanced MRI (DCE-MRI) of kidney, and investigate its quantitative effects on renal perfusion measurements. Ten rabbits were included in the accelerated scans with a CS-combined 3D pulse sequence. To evaluate the image quality, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the proposed CS strategy and the conventional full sampling method. Moreover, renal perfusion was estimated by using the separable compartmental model in both CS simulation and realistic CS acquisitions. The CS method showed DCE-MRI images with improved temporal resolution and acceptable image contrast, while presenting significantly higher SNR than the fully sampled images (p<.01) at 2-, 3- and 4-X acceleration. In quantitative measurements, renal perfusion results were in good agreement with the fully sampled one (concordance correlation coefficient=0.95, 0.91, 0.88) at 2-, 3- and 4-X acceleration in CS simulation. Moreover, in realistic acquisitions, the estimated perfusion by the separable compartmental model exhibited no significant differences (p>.05) between each CS-accelerated acquisition and the full sampling method. The CS-combined 3D sequence could improve the temporal resolution for DCE-MRI in kidney while yielding diagnostically acceptable image quality, and it could provide effective measurements of renal perfusion. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as high-efficiency video coding (HEVC).

  2. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
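
    As a rough illustration of the two building blocks named above, the sketch below (not the paper's implementation; function names and the quality-check loop are assumptions) computes the Karhunen-Loeve transform along the spectral axis and the spectral-angle metric that a multi-pass encoder could monitor while tightening or relaxing quantization.

    import numpy as np

    def spectral_angle(ref, test, eps=1e-12):
        """Per-pixel spectral angle (radians) between two cubes shaped
        (rows, cols, bands)."""
        dot = np.sum(ref * test, axis=-1)
        norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(test, axis=-1)
        return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))

    def klt_forward(cube):
        """Karhunen-Loeve transform along the spectral axis: returns the
        decorrelated coefficients, the basis, and the per-band mean."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)
        mean = pixels.mean(axis=0)
        centered = pixels - mean
        cov = np.cov(centered, rowvar=False)
        _, basis = np.linalg.eigh(cov)          # columns are eigenvectors
        basis = basis[:, ::-1]                  # order by decreasing variance
        coeffs = centered @ basis
        return coeffs.reshape(rows, cols, bands), basis, mean

    def klt_inverse(coeffs, basis, mean):
        rows, cols, bands = coeffs.shape
        pixels = coeffs.reshape(-1, bands) @ basis.T + mean
        return pixels.reshape(rows, cols, bands)

    # A multi-pass encoder could, for example, re-encode with coarser
    # quantization of the KLT coefficients until the mean spectral angle
    # between original and reconstructed cube reaches the user threshold.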

  3. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences remain fixed, and image restoration amounts to inverting an ill-posed problem without fixed rules. This limits the super-resolution reconstruction ability for infrared images, the algorithm's application range, and the stability of the reconstruction. To address this, we propose a super-resolution reconstruction method based on compressed sensing. In the method, we select a Toeplitz matrix as the measurement matrix and realize it with a phase mask. We investigated the complementary matching pursuit algorithm and selected it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we make use of an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. The resulting image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
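
    The sketch below illustrates the two ingredients mentioned above in a generic way: a Toeplitz-structured measurement matrix built from one random generating sequence, and a sparse recovery step. It uses plain iterative soft-thresholding rather than the authors' complementary matching pursuit, ignores the phase-mask optics, and all names are illustrative.

    import numpy as np

    def toeplitz_measurement_matrix(m, n, rng):
        """m x n Toeplitz-structured sensing matrix: every row is a shift of
        the same random generating sequence."""
        gen = rng.standard_normal(m + n - 1)
        A = np.empty((m, n))
        for i in range(m):
            A[i] = gen[i:i + n][::-1]
        return A / np.sqrt(m)

    def ista(A, y, lam=0.05, n_iter=500):
        """Iterative soft-thresholding for min_x 0.5*||Ax-y||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - step * (A.T @ (A @ x - y))
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                  # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = toeplitz_measurement_matrix(m, n, rng)
    x_hat = ista(A, A @ x_true)           # noiseless measurements for the demo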

  4. Watermarking scheme for authentication of compressed image

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo

    2003-11-01

    As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding watermark in compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of being attacked on the unwatermarked coefficients. The watermarking is done through registering/blending the zero-valued coefficients with a binary sequence to create the watermark and involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of the watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformation such as cropping, and common signal operations such as lowpass filtering.

  5. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
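
    FRESCO itself is a full open-source framework; the toy sketch below only illustrates the basic referential idea it builds on: encode the input as (reference offset, length) matches plus literal characters for mismatches, so a sequence nearly identical to the reference collapses to a handful of operations. Parameter choices and function names are illustrative.

    def referential_compress(target, reference, min_match=8):
        """Toy referential encoder: (offset, length) matches against a
        reference string plus literals where no long match exists."""
        index = {}
        for i in range(len(reference) - min_match + 1):
            index.setdefault(reference[i:i + min_match], i)
        ops, pos = [], 0
        while pos < len(target):
            ref_pos = index.get(target[pos:pos + min_match])
            if ref_pos is None:
                ops.append(("lit", target[pos]))        # literal character
                pos += 1
                continue
            length = min_match                          # greedily extend match
            while (pos + length < len(target)
                   and ref_pos + length < len(reference)
                   and target[pos + length] == reference[ref_pos + length]):
                length += 1
            ops.append(("match", ref_pos, length))
            pos += length
        return ops

    def referential_decompress(ops, reference):
        parts = []
        for op in ops:
            if op[0] == "lit":
                parts.append(op[1])
            else:
                _, ref_pos, length = op
                parts.append(reference[ref_pos:ref_pos + length])
        return "".join(parts)

    One way to picture the second-order compression described above is to feed the serialized operation streams of many individuals through a referential encoder again, against one chosen stream.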

  6. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume. The storage and transmission problems cannot be solved solely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the coding standard high-efficiency video coding (HEVC), super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and frame I. Then, using the same idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, super-resolution reconstruction is applied to further improve the reconstructed video quality. Experiments show that the proposed compression method for a single image (frame I) and for video sequences outperforms HEVC in low-bit-rate environments.

  7. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  8. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  9. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding

    NASA Astrophysics Data System (ADS)

    Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui

    2015-05-01

    Though JPEG is an excellent image compression standard, it provides no security. Thus, a security solution for JPEG was proposed in Zhang et al. (2014). However, Zhang's scheme has some flaws, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence and precisely encrypting, in the zigzag-scan-encoded domain, those coefficients that have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. We also present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and compare our scheme with Zhang's. Simulation results verify that our method performs better in both security and efficiency.
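
    A minimal sketch of the two mechanisms named above follows: zigzag scanning of an 8x8 quantized coefficient block and key-driven shuffling of the scanned sequence. A logistic map stands in for the paper's discrete hyper-chaotic generator, the selective encryption of particular coefficients is omitted, and all names and parameters are illustrative.

    import numpy as np

    def zigzag_order(n=8):
        """Zigzag scan order for an n x n block as (row, col) pairs."""
        idx = [(i, j) for i in range(n) for j in range(n)]
        return sorted(idx, key=lambda ij: (ij[0] + ij[1],
                                           ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

    def chaotic_permutation(length, x0=0.3456, r=3.99):
        """Key-driven permutation from a logistic map (a simple stand-in for
        the hyper-chaotic sequence generator)."""
        x, samples = x0, []
        for _ in range(length):
            x = r * x * (1.0 - x)
            samples.append(x)
        return np.argsort(samples)          # rank of each chaotic sample

    def shuffle_zigzag_block(block, perm):
        """Zigzag-scan one 8x8 quantized DCT block and shuffle the sequence."""
        order = zigzag_order(8)
        scanned = np.array([block[i, j] for i, j in order])
        return scanned[perm]

    perm = chaotic_permutation(64)          # derived once from the secret key
    # encrypted = shuffle_zigzag_block(quantized_dct_block, perm)  # hypothetical block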

  10. Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link

    DOEpatents

    Rush, Bobby G.; Riley, Robert

    2015-09-29

    Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.

  11. R&D 100, 2016: Ultrafast X-ray Imager

    ScienceCinema

    Porter, John; Claus, Liam; Sanchez, Marcos; Robertson, Gideon; Riley, Nathan; Rochau, Greg

    2018-06-13

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  12. R&D 100, 2016: Ultrafast X-ray Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, John; Claus, Liam; Sanchez, Marcos

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  13. Advances in high throughput DNA sequence data compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz

    2016-06-01

    Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.

  14. MHz-Rate NO PLIF Imaging in a Mach 10 Hypersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Jiang, N.; Webster, M.; Lempert, Walter R.; Miller, J. D.; Meyer, T. R.; Danehy, Paul M.

    2010-01-01

    NO PLIF imaging at repetition rates as high as 1 MHz is demonstrated in the NASA Langley 31-inch Mach 10 hypersonic wind tunnel. Approximately two hundred time-correlated image sequences, of between ten and twenty individual frames, were obtained over eight days of wind tunnel testing spanning two entries in March and September of 2009. The majority of the image sequences were obtained from the boundary layer of a 20° flat plate model, in which transition was induced using a variety of cylindrical and triangular shaped protuberances. The high speed image sequences captured a variety of laminar and transitional flow phenomena, ranging from mostly laminar flow, typically at lower Reynolds number and/or in the near wall region of the model, to highly transitional flow in which the temporal evolution and progression of characteristic streak instabilities and/or corkscrew-shaped vortices could be clearly identified. A series of image sequences were also obtained from a 20° compression ramp at a 10° angle of attack in which the temporal dynamics of the characteristic separated flow was captured in a time-correlated manner.

  15. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from space borne and airborne platforms. Multispectral imaging data consists of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the types of mathematics and approaches that can bring compression close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work. In this new approach, instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We also show results of the algorithm on NOAA AVHRR data and data from SEVIRI. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, Walter Wolf.
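
    The piecewise spatially varying predictor described above can be pictured with the following sketch (not the authors' code; the tile size, the affine gain/offset model, and function names are assumptions): within each spatial tile, the target band is predicted from an already-coded band by least squares, and only the residual plus the per-tile parameters would be entropy-coded.

    import numpy as np

    def piecewise_band_predict(reference_band, target_band, tile=32):
        """Predict target_band from reference_band with a separate affine
        model (gain, offset) per spatial tile; return prediction and residual."""
        pred = np.empty_like(target_band, dtype=float)
        rows, cols = target_band.shape
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                x = reference_band[r:r + tile, c:c + tile].astype(float).ravel()
                y = target_band[r:r + tile, c:c + tile].astype(float).ravel()
                A = np.column_stack([x, np.ones_like(x)])
                gain, offset = np.linalg.lstsq(A, y, rcond=None)[0]
                pred[r:r + tile, c:c + tile] = (
                    gain * reference_band[r:r + tile, c:c + tile] + offset)
        residual = target_band - pred     # the residual (plus the per-tile
        return pred, residual             # gain/offset) is what gets entropy-coded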

  16. Evaluation of an accelerated 3D SPACE sequence with compressed sensing and free-stop scan mode for imaging of the knee.

    PubMed

    Henninger, B; Raithel, E; Kranewitter, C; Steurer, M; Jaschke, W; Kremser, C

    2018-05-01

    To prospectively evaluate a prototypical 3D turbo-spin-echo proton-density-weighted sequence with compressed sensing and free-stop scan mode for preventing motion artefacts (3D-PD-CS-SPACE free-stop) for knee imaging in a clinical setting. 80 patients underwent 3T magnetic resonance imaging (MRI) of the knee with our 2D routine protocol and with 3D-PD-CS-SPACE free-stop. In case of a scan-stop caused by motion (images are calculated nevertheless) the sequence was repeated without free-stop mode. All scans were evaluated by 2 radiologists concerning image quality of the 3D-PD-CS-SPACE (with and without free-stop). Important knee structures were further assessed in a lesion based analysis and compared to our reference 2D-PD-fs sequences. Image quality of the 3D-PD-CS-SPACE free-stop was found optimal in 47/80, slightly compromised in 21/80, moderately in 10/80 and severely in 2/80. In 29/80, the free-stop scan mode stopped the 3D-PD-CS-SPACE due to subject motion with a slight increase of image quality at longer effective acquisition times. Compared to the 3D-PD-CS-SPACE with free-stop, the image quality of the acquired 3D-PD-CS-SPACE without free-stop was found equal in 6/29, slightly improved in 13/29, improved with equal contours in 8/29, and improved with sharper contours in 2/29. The lesion based analysis showed a high agreement between the results from the 3D-PD-CS-SPACE free-stop and our 2D-PD-fs routine protocol (overall agreement 96.25%-100%, Cohen's Kappa 0.883-1, p < 0.001). 3D-PD-CS-SPACE free-stop is a reliable alternative for standard 2D-PD-fs protocols with acceptable acquisition times. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. The Role of MRI in Diagnosing Neurovascular Compression of the Cochlear Nerve Resulting in Typewriter Tinnitus.

    PubMed

    Bae, Y J; Jeon, Y J; Choi, B S; Koo, J-W; Song, J-J

    2017-06-01

    Typewriter tinnitus, a symptom characterized by paroxysmal attacks of staccato sounds, has been thought to be caused by neurovascular compression of the cochlear nerve, but the correlation between radiologic evidence of neurovascular compression of the cochlear nerve and symptom presentation has not been thoroughly investigated. The purpose of this study was to examine whether radiologic evidence of neurovascular compression of the cochlear nerve is pathognomonic in typewriter tinnitus. Fifteen carbamazepine-responding patients with typewriter tinnitus and 8 control subjects were evaluated with a 3D T2-weighted volume isotropic turbo spin-echo acquisition sequence. Groups 1 (16 symptomatic sides), 2 (14 asymptomatic sides), and 3 (16 control sides) were compared with regard to the anatomic relation between the vascular loop and the internal auditory canal and the presence of neurovascular compression of the cochlear nerve with/without angulation/indentation. The anatomic location of the vascular loop was not significantly different among the 3 groups (all, P > .05). Meanwhile, neurovascular compression of the cochlear nerve on MR imaging was significantly higher in group 1 than in group 3 ( P = .032). However, considerable false-positive (no symptoms with neurovascular compression of the cochlear nerve on MR imaging) and false-negative (typewriter tinnitus without demonstrable neurovascular compression of the cochlear nerve) findings were also observed. Neurovascular compression of the cochlear nerve was more frequently detected on the symptomatic side of patients with typewriter tinnitus compared with the asymptomatic side of these patients or on both sides of control subjects on MR imaging. However, considering false-positive and false-negative findings, meticulous history-taking and the response to the initial carbamazepine trial should be regarded as more reliable diagnostic clues than radiologic evidence of neurovascular compression of the cochlear nerve. © 2017 by American Journal of Neuroradiology.

  18. Compressed Sensing for fMRI: Feasibility Study on the Acceleration of Non-EPI fMRI at 9.4T

    PubMed Central

    Kim, Seong-Gi; Ye, Jong Chul

    2015-01-01

    Conventional functional magnetic resonance imaging (fMRI) technique known as gradient-recalled echo (GRE) echo-planar imaging (EPI) is sensitive to image distortion and degradation caused by local magnetic field inhomogeneity at high magnetic fields. Non-EPI sequences such as spoiled gradient echo and balanced steady-state free precession (bSSFP) have been proposed as an alternative high-resolution fMRI technique; however, the temporal resolution of these sequences is lower than the typically used GRE-EPI fMRI. One potential approach to improve the temporal resolution is to use compressed sensing (CS). In this study, we tested the feasibility of k-t FOCUSS—one of the high performance CS algorithms for dynamic MRI—for non-EPI fMRI at 9.4T using the model of rat somatosensory stimulation. To optimize the performance of CS reconstruction, different sampling patterns and k-t FOCUSS variations were investigated. Experimental results show that an optimized k-t FOCUSS algorithm with acceleration by a factor of 4 works well for non-EPI fMRI at high field under various statistical criteria, which confirms that a combination of CS and a non-EPI sequence may be a good solution for high-resolution fMRI at high fields. PMID:26413503

  19. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
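
    A toy version of the clip-wise, data-dependent transform coding described above (illustrative only; the paper's basis construction, quantizer and entropy coder are not reproduced): each clip is a frames-by-channels matrix, the orthogonal basis is taken from its SVD, and the quantized coefficients plus the basis are what would be entropy-coded.

    import numpy as np

    def encode_clip(clip, n_basis=8, q_step=0.05):
        """Transform-code one mocap clip (frames x channels) with a
        data-dependent orthogonal basis and uniform quantization."""
        U, s, Vt = np.linalg.svd(clip, full_matrices=False)
        basis = U[:, :n_basis]                  # orthogonal, data-dependent
        coeffs = basis.T @ clip                 # n_basis x channels
        q_coeffs = np.round(coeffs / q_step).astype(np.int32)
        return q_coeffs, basis                  # both would be entropy-coded

    def decode_clip(q_coeffs, basis, q_step=0.05):
        return basis @ (q_coeffs * q_step)

    # hypothetical clip: 120 frames of 60 joint-angle channels
    rng = np.random.default_rng(1)
    clip = np.cumsum(rng.standard_normal((120, 60)), axis=0)  # smooth-ish motion
    q, B = encode_clip(clip)
    clip_hat = decode_clip(q, B)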

  20. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve the N different images with their corresponding m-sequences to obtain the encrypted images. The superimposed image (summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, the two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only the image security but also the data compactness. The compression ratio depends on how many images are superimposed.

  1. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve the N different images with their corresponding m-sequences to obtain the encrypted images. The superimposed image (summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Secondly, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256x256. The important features of the proposed technique are not only the image security but also the data compactness. The compression ratio depends on how many images are superimposed.
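
    The superposition-and-correlation idea in the two records above can be pictured with a small sketch. It is only an approximation of the papers' scheme: random +/-1 codes stand in for m-sequences (or quasi m-arrays), each pixel is spread across code chips rather than convolved with the code, and recovery by correlation leaves a small cross-talk residual from the other images.

    import numpy as np

    def multiplex(images, codes):
        """Spread each flattened image with its own +/-1 chip sequence and
        superimpose the spread signals."""
        mixed = np.zeros(images[0].size * codes.shape[1])
        for img, code in zip(images, codes):
            mixed += np.kron(img.ravel().astype(float), code)
        return mixed

    def demultiplex(mixed, code, shape):
        """Recover one image by correlating the superimposed signal with its
        code; the other images' cross-correlation appears as noise."""
        n = code.size
        chips = mixed.reshape(-1, n)
        return (chips @ code / n).reshape(shape)

    rng = np.random.default_rng(2)
    imgs = [rng.integers(0, 256, (32, 32)) for _ in range(3)]
    codes = rng.choice([-1.0, 1.0], size=(3, 63))       # stand-ins for m-sequences
    mixed = multiplex(imgs, codes)
    recovered0 = demultiplex(mixed, codes[0], (32, 32))  # approximate recovery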

  2. Accelerated self-gated UTE MRI of the murine heart

    NASA Astrophysics Data System (ADS)

    Motaal, Abdallah G.; Noorman, Nils; De Graaf, Wolter L.; Florack, Luc J.; Nicolay, Klaas; Strijkers, Gustav J.

    2014-03-01

    We introduce a new protocol to obtain radial ultra-short TE (UTE) MRI cine of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated UTE sequence with golden-angle radial acquisition and compressed sensing reconstruction. The stochastic nature of the retrospective triggering acquisition scheme produces an under-sampled and random k-t space filling that allows for compressed sensing reconstruction, hence reducing scan time. As a standard, an intragate multislice FLASH sequence with an acquisition time of 4.5 min per slice was used to produce standard cine movies of 4 mouse hearts with 15 frames per cardiac cycle. The proposed self-gated sequence was used to produce cine movies with a short echo time. The total scan time was 11 min per slice. Six slices were planned to cover the heart from the base to the apex. 2X, 4X and 6X under-sampled k-space cine movies were produced from 2, 1 and 0.7 min of data acquisition for each slice. The accelerated cine movies of the mouse hearts were successfully reconstructed with a compressed sensing algorithm. Compared to the FLASH cine images, the UTE images showed far fewer flow artifacts due to the short echo time. In addition, the accelerated movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters derived from the standard and the accelerated cine movies were nearly identical.
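
    Two ingredients of the acquisition described above, the golden-angle spoke schedule and retrospective binning of spokes into cardiac phases from a self-gating signal, can be sketched as follows (the trigger detection and all names are illustrative, not the authors' implementation):

    import numpy as np

    GOLDEN_ANGLE = 111.246  # degrees between successive radial spokes

    def spoke_angles(n_spokes):
        """Golden-angle schedule: any contiguous subset of spokes gives
        near-uniform angular coverage of k-space."""
        return (np.arange(n_spokes) * GOLDEN_ANGLE) % 360.0

    def bin_spokes_by_phase(spoke_times, trigger_times, n_phases=15):
        """Assign each spoke to a cardiac phase using retrospectively detected
        trigger times (e.g., peaks of a self-gating navigator signal)."""
        bins = [[] for _ in range(n_phases)]
        for idx, t in enumerate(spoke_times):
            k = np.searchsorted(trigger_times, t) - 1
            if k < 0 or k + 1 >= len(trigger_times):
                continue                    # outside the covered cardiac cycles
            rr = trigger_times[k + 1] - trigger_times[k]
            phase = int((t - trigger_times[k]) / rr * n_phases)
            bins[min(phase, n_phases - 1)].append(idx)
        return bins

    Each phase bin then holds an undersampled, pseudo-random subset of spokes, which is what makes the compressed sensing reconstruction step applicable.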

  3. Error mitigation for CCSDS compressed imager data

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth; Shahriar, Fazlul; Bonev, George

    2009-08-01

    To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSDS Rice-based compression, these errors result in a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method capable of using the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as meta-data in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.

  4. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of next-generation sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, so data volumes may exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Multimodal Image-Based Virtual Reality Presurgical Simulation and Evaluation for Trigeminal Neuralgia and Hemifacial Spasm.

    PubMed

    Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei

    2018-05-01

    To assess the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, while 2 patients additionally underwent experimental preoperative 7.0-T MRI scans with the same imaging protocol. Multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans provided a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will keep growing, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms, however, do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: palindromes (reverse complements) and approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences can be exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If a palindrome or an approximate repeat of sufficient length is found, our algorithm represents it by its length and distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.

  7. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm uses both the mean values and the VQ indices as encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to obtain a much lower bit rate for the LASIS interference pattern, whose special optical characteristics arise from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the current block. Experiments show the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.

  8. Comparison of conventional DCE-MRI and a novel golden-angle radial multicoil compressed sensing method for the evaluation of breast lesion conspicuity.

    PubMed

    Heacock, Laura; Gao, Yiming; Heller, Samantha L; Melsaether, Amy N; Babb, James S; Block, Tobias K; Otazo, Ricardo; Kim, Sungheon G; Moy, Linda

    2017-06-01

    To compare a novel multicoil compressed sensing technique with flexible temporal resolution, golden-angle radial sparse parallel (GRASP), to conventional fat-suppressed spoiled three-dimensional (3D) gradient-echo (volumetric interpolated breath-hold examination, VIBE) MRI in evaluating the conspicuity of benign and malignant breast lesions. Between March and August 2015, 121 women (24-84 years; mean, 49.7 years) with 180 biopsy-proven benign and malignant lesions were imaged consecutively at 3.0 Tesla in a dynamic contrast-enhanced (DCE) MRI exam using sagittal T1-weighted fat-suppressed 3D VIBE in this Health Insurance Portability and Accountability Act-compliant, retrospective study. Subjects underwent MRI-guided breast biopsy (mean, 13 days [1-95 days]) using GRASP DCE-MRI, a fat-suppressed radial "stack-of-stars" 3D FLASH sequence with golden-angle ordering. Three readers independently evaluated breast lesions on both sequences. Statistical analysis included mixed models with generalized estimating equations, kappa-weighted coefficients and Fisher's exact test. All lesions demonstrated good conspicuity on VIBE and GRASP sequences (4.28 ± 0.81 versus 3.65 ± 1.22), with no significant difference in lesion detection (P = 0.248). VIBE had slightly higher lesion conspicuity than GRASP for all lesions, with VIBE 12.6% (0.63/5.0) more conspicuous (P < 0.001). Masses and nonmass enhancement (NME) were more conspicuous on VIBE (P < 0.001), with a larger difference for NME (14.2% versus 9.4% more conspicuous). Malignant lesions were more conspicuous than benign lesions (P < 0.001) on both sequences. GRASP DCE-MRI, a multicoil compressed sensing technique with high spatial resolution and flexible temporal resolution, has near-comparable performance to conventional VIBE imaging for breast lesion evaluation. Level of Evidence: 3. Technical Efficacy: Stage 3. J. MAGN. RESON. IMAGING 2017;45:1746-1752. © 2016 International Society for Magnetic Resonance in Medicine.

  9. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
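
    For orientation, the baseline that bit-assignment schemes such as this one improve upon is plain 2-bit coding of the four bases (2.0 bits/base); DNABIT Compress gains over it by giving special codes to exact and reverse repeats. The sketch below shows only that 2-bit baseline; the paper's actual bit codes are not reproduced and the function names are illustrative.

    # Minimal 2-bit packing of DNA bases (4 bases per byte).
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        """Pack a DNA string into bytes at 2 bits per base."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            group = seq[i:i + 4]
            byte = 0
            for base in group:
                byte = (byte << 2) | CODE[base]
            byte <<= 2 * (4 - len(group))      # left-align a short last group
            out.append(byte)
        return bytes(out), len(seq)

    def unpack(packed, n_bases):
        bases = []
        for byte in packed:
            for shift in (6, 4, 2, 0):
                bases.append(BASE[(byte >> shift) & 0b11])
        return "".join(bases[:n_bases])

    data, n = pack("ACGTACGTTG")
    assert unpack(data, n) == "ACGTACGTTG"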

  10. Single Echo MRI

    PubMed Central

    Galiana, Gigi; Constable, R. Todd

    2014-01-01

    Purpose Previous nonlinear gradient research has focused on trajectories that reconstruct images with a minimum number of echoes. Here we describe sequences where the nonlinear gradients vary in time to acquire the image in a single readout. The readout is designed to be very smooth so that it can be compressed to minimal time without violating peripheral nerve stimulation limits, yielding an image from a single 4 ms echo. Theory and Methods This sequence was inspired by considering the code of each voxel, i.e. the phase accumulation that a voxel follows through the readout, an approach connected to traditional encoding theory. We present simulations for the initial sequence, a low slew rate analog, and higher resolution reconstructions. Results Extremely fast acquisitions are achievable, though as one would expect, SNR is reduced relative to the slower Cartesian sampling schemes because of the high gradient strengths. Conclusions The prospect that nonlinear gradients can acquire images in a single <10 ms echo makes this a novel and interesting approach to image encoding. PMID:24465837

  11. Fast carotid artery MR angiography with compressed sensing based three-dimensional time-of-flight sequence.

    PubMed

    Li, Bo; Li, Hao; Dong, Li; Huang, Guofu

    2017-11-01

    In this study, we sought to investigate the feasibility of fast carotid artery MR angiography (MRA) by combining three-dimensional time-of-flight (3D TOF) with a compressed sensing method (CS-3D TOF). A pseudo-sequential phase encoding order was developed for CS-3D TOF to generate hyper-intense vessels and suppress background tissues in under-sampled 3D k-space. Seven healthy volunteers and one patient with carotid artery stenosis were recruited for this study. Five sequential CS-3D TOF scans were implemented at 1-, 2-, 3-, 4- and 5-fold acceleration factors for carotid artery MRA. Blood signal-to-tissue ratio (BTR) values for fully sampled and under-sampled acquisitions were calculated and compared in the seven subjects. Blood area (BA) was measured and compared between the fully sampled acquisition and each under-sampled one. There were no significant differences in BTR between the fully sampled dataset and each under-sampled one (P>0.05 for all comparisons). The carotid vessel BAs measured from the CS-3D TOF images at 2-, 3-, 4- and 5-fold acceleration were all highly correlated with those of the fully sampled acquisition. The contrast between blood vessels and background tissues of the images at 2- to 5-fold acceleration is comparable to that of the fully sampled images, and the images at 2× to 5× exhibit lumen definition comparable to the corresponding images at 1×. By combining the pseudo-sequential phase encoding order, CS reconstruction, and the 3D TOF sequence, this technique provides excellent visualization of carotid vessels and calcifications in a short scan time. It has the potential to be integrated into current multi-contrast blood imaging protocols. Copyright © 2017. Published by Elsevier Inc.

  12. Improving resolution of MR images with an adversarial network incorporating images with different contrast.

    PubMed

    Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong

    2018-05-04

    The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high frequency contents such as edges are not significantly affected by image contrast, down-sampled images in one contrast may be improved by high resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality being judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared to that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR image in reference to ground truth. Also, incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences from pseudo k-space data generated from the public database. The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of CNNs in in vivo application compared to training CNNs from scratch. The proposed method outperformed the compressed sensing methods. The proposed method can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.

  13. Imaging of hemifacial spasm.

    PubMed

    Hermier, M

    2018-04-25

    Almost all primary hemifacial spasms are associated with one or more neurovascular conflicts, most often at the root exit zone in the immediate vicinity of the brainstem. Imaging must first exclude a secondary hemifacial spasm and secondly search for and characterize the responsible neurovascular conflict(s). Magnetic resonance imaging should include high-resolution anatomical, heavily T2-weighted sequences and magnetic resonance angiography, using 1.5 T or, better, 3 T magnets. The most frequent vascular compressions are from the anterior-inferior cerebellar artery, the posterior-inferior cerebellar artery and the vertebrobasilar artery; venous conflicts are very rare. Conflicts are often multiple, and the same vessel may compress the facial nerve in two places. Conflicts may also be favored by particular anatomical circumstances, including arterial dolichoectasia, a small-volume posterior fossa or bony malformations. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  14. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  15. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868

  16. Experimental Influences in the Accurate Measurement of Cartilage Thickness in MRI.

    PubMed

    Wang, Nian; Badar, Farid; Xia, Yang

    2018-01-01

    Objective To study the experimental influences on the measurement of cartilage thickness by magnetic resonance imaging (MRI). Design The complete thicknesses of healthy and trypsin-degraded cartilage were measured with high-resolution MRI under different conditions, using two intensity-based imaging sequences (ultra-short echo [UTE] and multislice-multiecho [MSME]) and 3 quantitative relaxation imaging sequences (T1, T2, and T1ρ). Other variables included different orientations in the magnet, 2 soaking solutions (saline and phosphate buffered saline [PBS]), and external loading. Results With cartilage soaked in saline, the UTE and T1 methods yielded complete and consistent measurements of cartilage thickness, while the thickness measurements by the T2, T1ρ, and MSME methods were orientation dependent. The effect of external loading on cartilage thickness is also sequence and orientation dependent. All variations in cartilage thickness in MRI could be eliminated with the use of 100 mM PBS or by imaging with the UTE sequence. Conclusions The appearance of articular cartilage and the measurement accuracy of cartilage thickness in MRI can be influenced by a number of experimental factors in ex vivo MRI, from the use of various pulse sequences and soaking solutions to the health of the tissue. T2-based imaging sequences, both the proton-intensity sequence and the quantitative relaxation sequences, similarly produced the largest variations. With adequate resolution, the accurate measurement of whole cartilage tissue in clinical MRI could be utilized to detect differences between healthy and osteoarthritic cartilage after compression.

  17. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997

  18. High Spatial and Temporal Resolution Dynamic Contrast-Enhanced Magnetic Resonance Angiography (CE-MRA) using Compressed Sensing with Magnitude Image Subtraction

    PubMed Central

    Rapacchi, Stanislas; Han, Fei; Natsuaki, Yutaka; Kroeker, Randall; Plotnik, Adam; Lehman, Evan; Sayre, James; Laub, Gerhard; Finn, J Paul; Hu, Peng

    2014-01-01

    Purpose We propose a compressed-sensing (CS) technique based on magnitude image subtraction for high spatial and temporal resolution dynamic contrast-enhanced MR angiography (CE-MRA). Methods Our technique integrates the magnitude difference image into the CS reconstruction to promote subtraction sparsity. Fully sampled Cartesian 3D CE-MRA datasets from 6 volunteers were retrospectively under-sampled and three reconstruction strategies were evaluated: k-space subtraction CS, independent CS, and magnitude subtraction CS. The techniques were compared in image quality (vessel delineation, image artifacts, and noise) and image reconstruction error. Our CS technique was further tested on 7 volunteers using a prospectively under-sampled CE-MRA sequence. Results Compared with k-space subtraction and independent CS, our magnitude subtraction CS provides significantly better vessel delineation and less noise at 4X acceleration, and significantly less reconstruction error at 4X and 8X (p<0.05 for all). On a 1–4 point image quality scale in vessel delineation, our technique scored 3.8±0.4 at 4X, 2.8±0.4 at 8X and 2.3±0.6 at 12X acceleration. Using our CS sequence at 12X acceleration, we were able to acquire dynamic CE-MRA with higher spatial and temporal resolution than current clinical TWIST protocol while maintaining comparable image quality (2.8±0.5 vs. 3.0±0.4, p=NS). Conclusion Our technique is promising for dynamic CE-MRA. PMID:23801456

  19. Data compression of discrete sequence: A tree based approach using dynamic programming

    NASA Technical Reports Server (NTRS)

    Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.

    1994-01-01

    A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing the subsequences by their respective average values. The partitioning of the input sequence is carried out with the intention of reducing the mean squared error in the reconstructed sequence. The complexity involved in finding the partitions that would result in such an optimal compressed sequence is reduced by using the dynamic programming approach, which is presented.
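
    The dynamic program can be sketched as follows (an illustrative O(k·N²) version with prefix sums for constant-time segment costs; the report's exact formulation may differ): choose the k-segment partition that minimizes the total squared error when each segment is replaced by its mean.

        # Dynamic-programming compression of a 1D sequence into k mean-valued segments.
        import numpy as np

        def compress_dp(x, k):
            x = np.asarray(x, dtype=float)
            n = len(x)
            s1 = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums
            s2 = np.concatenate(([0.0], np.cumsum(x * x)))   # prefix sums of squares

            def sse(i, j):   # squared error of segment x[i:j] replaced by its mean
                seg_sum = s1[j] - s1[i]
                return (s2[j] - s2[i]) - seg_sum * seg_sum / (j - i)

            INF = float('inf')
            cost = np.full((k + 1, n + 1), INF)
            back = np.zeros((k + 1, n + 1), dtype=int)
            cost[0, 0] = 0.0
            for seg in range(1, k + 1):
                for j in range(seg, n + 1):
                    for i in range(seg - 1, j):
                        c = cost[seg - 1, i] + sse(i, j)
                        if c < cost[seg, j]:
                            cost[seg, j] = c
                            back[seg, j] = i
            # recover the segment boundaries and the k segment means
            bounds, j = [n], n
            for seg in range(k, 0, -1):
                j = back[seg, j]
                bounds.append(j)
            bounds = bounds[::-1]
            means = [x[bounds[t]:bounds[t + 1]].mean() for t in range(k)]
            return bounds, means, cost[k, n]

        x = [1, 1, 1, 5, 5, 5, 9, 9]
        print(compress_dp(x, 3))   # -> boundaries [0, 3, 6, 8], means [1.0, 5.0, 9.0], error 0.0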

  20. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    DTIC Science & Technology

    2007-04-01

    Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression...

  1. Relative Suffix Trees.

    PubMed

    Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni

    2018-05-01

    Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees.

  2. DELIMINATE--a fast and efficient method for loss-less compression of genomic sequences: sequence analysis.

    PubMed

    Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S

    2012-10-01

    An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. Contact: sharmila@atc.tcs.com. Supplementary data are available at Bioinformatics online.

  3. Relative Suffix Trees

    PubMed Central

    Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni

    2018-01-01

    Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees. PMID:29795706

  4. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  5. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  6. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636

  7. MR images from fewer data

    NASA Astrophysics Data System (ADS)

    Vafadar, Bahareh; Bones, Philip J.

    2012-10-01

    There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering R'1 from x1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the first norm (ℓ1) of the estimate after ordering by R'1, resulting in a new reconstruction x2. Preliminary results are encouraging.

  8. Mach 5 bow shock control by a nanosecond pulse surface dielectric barrier discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishihara, M.; Takashima, K.; Rich, J. W.

    2011-06-15

    Bow shock perturbations in a Mach 5 air flow, produced by a low-temperature, nanosecond pulse, surface dielectric barrier discharge (DBD), are detected by phase-locked schlieren imaging. A diffuse nanosecond pulse discharge is generated in a DBD plasma actuator on a surface of a cylinder model placed in air flow in a small scale blow-down supersonic wind tunnel. Discharge energy coupled to the actuator is 7.3-7.8 mJ/pulse. Plasma temperature inferred from nitrogen emission spectra is a few tens of degrees higher than flow stagnation temperature, T = 340 ± 30 K. Phase-locked schlieren images are used to detect compression waves generated by individual nanosecond discharge pulses near the actuator surface. The compression wave propagates upstream toward the baseline bow shock standing in front of the cylinder model. Interaction of the compression wave and the bow shock causes its displacement in the upstream direction, increasing shock stand-off distance by up to 25%. The compression wave speed behind the bow shock and the perturbed bow shock velocity are inferred from the schlieren images. The effect of compression waves generated by nanosecond discharge pulses on shock stand-off distance is demonstrated in a single-pulse regime (at pulse repetition rates of a few hundred Hz) and in a quasi-continuous mode (using a two-pulse sequence at a pulse repetition rate of 100 kHz). The results demonstrate feasibility of hypersonic flow control by low-temperature, repetitive nanosecond pulse discharges.

  9. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology, so participants do not need to install any application and the assessment can be performed in a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test, using images from the pool and other components such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates) may be collected and subsequently analyzed.

  10. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent, and better images were produced, after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  11. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
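
    As a toy of the recovery step only (the speckle characterization, wavelength sweep and hardware are not modeled): with the measured patterns stacked as rows of a sensing matrix A and the photodetector samples y, a scene that is sparse in the pixel basis can be estimated from sub-Nyquist measurements by iterative soft-thresholding (ISTA). All sizes and the regularization weight are assumptions.

        # Toy single-pixel compressed-sensing recovery by ISTA (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 16 * 16                     # image pixels
        m = 80                          # sub-Nyquist number of pattern measurements

        # stand-in for the characterized speckle patterns (rows = illumination patterns)
        A = rng.standard_normal((m, n)) / np.sqrt(m)

        # sparse ground-truth scene (a few bright pixels)
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1, 2, 8)

        y = A @ x_true                  # bucket-detector samples

        def ista(A, y, lam=0.02, iters=500):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - y)
                z = x - grad / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return x

        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))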

  12. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  13. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
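
    The quantization idea can be illustrated in a few lines (zlib stands in for the Rice/GZIP coders, and the noise estimator and scaling rule are simplified assumptions, not the fpack implementation): floating-point pixels are divided by a fraction of the estimated background noise and rounded to integers before lossless coding, trading noise bits for compression.

        # Noise-limited lossless compression of a floating-point image (sketch).
        import numpy as np, zlib

        rng = np.random.default_rng(1)
        sky = rng.normal(loc=1000.0, scale=10.0, size=(256, 256))   # noisy float image

        raw = sky.astype(np.float32).tobytes()
        print("lossless on floats     :", round(len(raw) / len(zlib.compress(raw, 9)), 2))

        # estimate the pixel noise from horizontal differences (MAD-based)
        sigma = np.median(np.abs(np.diff(sky, axis=1))) / 0.6745 / np.sqrt(2)

        # quantize as scaled integers: q = round(pixel / (sigma / D)); larger D keeps
        # more noise bits and therefore compresses less (ratios vs. the float data size)
        for D in (1, 4, 16):
            q = np.round(sky / (sigma / D)).astype(np.int32).tobytes()
            print(f"scaled integers (D={D:2d}):", round(len(raw) / len(zlib.compress(q, 9)), 2))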

  14. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
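
    The benefit of the DPCM step can be demonstrated in a few lines (zlib's dictionary coder stands in for LZW here, and the synthetic image is an assumption, not the article's test set): left-neighbour prediction residuals, stored modulo 256 so the transform stays invertible, compress substantially better than the raw pixels.

        # DPCM (left-neighbour prediction) followed by a dictionary coder.
        import numpy as np, zlib

        rng = np.random.default_rng(0)
        xx, yy = np.meshgrid(np.arange(256), np.arange(256))
        img = ((xx + yy) / 2 + rng.normal(0, 2, (256, 256))).clip(0, 255).astype(np.uint8)

        plain = zlib.compress(img.tobytes(), 9)

        # differential image: keep column 0, store left-neighbour residuals modulo 256
        # (invertible: a cumulative sum mod 256 along each row recovers the image)
        diff = img.astype(np.int16)
        diff[:, 1:] -= img[:, :-1].astype(np.int16)
        dpcm = zlib.compress((diff % 256).astype(np.uint8).tobytes(), 9)

        print("direct ratio:", round(img.nbytes / len(plain), 2))
        print("DPCM   ratio:", round(img.nbytes / len(dpcm), 2))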

  16. Diagnosing upper extremity deep vein thrombosis with non-contrast-enhanced Magnetic Resonance Direct Thrombus Imaging: A pilot study.

    PubMed

    Dronkers, C E A; Klok, F A; van Haren, G R; Gleditsch, J; Westerlund, E; Huisman, M V; Kroft, L J M

    2018-03-01

    Diagnosing upper extremity deep vein thrombosis (UEDVT) can be challenging. Compression ultrasonography is often inconclusive because of overlying anatomic structures that hamper compression of the veins. Contrast venography is invasive and has a risk of contrast allergy. Magnetic Resonance Direct Thrombus Imaging (MRDTI) and Three Dimensional Turbo Spin-echo Spectral Attenuated Inversion Recovery (3D TSE-SPAIR) are both non-contrast-enhanced Magnetic Resonance Imaging (MRI) sequences that can visualize a thrombus directly through the detection of methemoglobin, which is formed in a fresh blood clot. MRDTI has been proven to be accurate in diagnosing deep venous thrombosis (DVT) of the leg. The primary aim of this pilot study was to test the feasibility of diagnosing UEDVT with these MRI techniques. MRDTI and 3D TSE-SPAIR were performed in three pilot patients who were already diagnosed with UEDVT by ultrasonography or contrast venography. In all patients, the UEDVT diagnosis could be confirmed by MRDTI and 3D TSE-SPAIR in all vein segments. In conclusion, this study showed that non-contrast MRDTI and 3D TSE-SPAIR sequences may be feasible tests to diagnose UEDVT. However, diagnostic accuracy and management studies have to be performed before these techniques can be routinely used in clinical practice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A biological compression model and its applications.

    PubMed

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, the expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  18. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
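
    For concreteness, one common reading of the NMSE figure of merit used above is sketched below (the dissertation's exact normalization may differ): the squared error between the original and reconstructed images, normalized by the energy of the original.

        # Normalized mean-square error between an original and a reconstructed image.
        import numpy as np

        def nmse(original, reconstructed):
            original = np.asarray(original, dtype=float)
            reconstructed = np.asarray(reconstructed, dtype=float)
            return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)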

  19. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
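
    A compact sketch of the scheme (block length and polynomial degree are illustrative choices, not the flight parameters): each fitting interval is approximated by a low-order Chebyshev series and only the coefficients are kept; decompression re-evaluates the series.

        # Block-wise Chebyshev compression of a 1-D data stream (illustrative sketch).
        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress(stream, block=64, degree=7):
            coeffs = []
            t = np.linspace(-1.0, 1.0, block)            # fitting interval mapped to [-1, 1]
            for start in range(0, len(stream) - block + 1, block):
                y = stream[start:start + block]
                coeffs.append(C.chebfit(t, y, degree))   # (degree+1) coefficients per block
            return np.array(coeffs)

        def decompress(coeffs, block=64):
            t = np.linspace(-1.0, 1.0, block)
            return np.concatenate([C.chebval(t, c) for c in coeffs])

        t_full = np.linspace(0, 1, 1024)
        signal = np.sin(2 * np.pi * 3 * t_full) + 0.1 * t_full
        rec = decompress(compress(signal))
        print("compression factor:", signal.size / compress(signal).size)   # 64 samples -> 8 coefficients
        print("max abs error     :", np.abs(rec - signal).max())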

  20. Hear it, See it, Explore it: Visualizations and Sonifications of Seismic Signals

    NASA Astrophysics Data System (ADS)

    Fisher, M.; Peng, Z.; Simpson, D. W.; Kilb, D. L.

    2010-12-01

    Sonification of seismic data is an innovative way to represent seismic data in the audible range (Simpson, 2005). Seismic waves with different frequency and temporal characteristics, such as those from teleseismic earthquakes, deep “non-volcanic” tremor and local earthquakes, can be easily discriminated when time-compressed to the audio range. Hence, sonification is particularly useful for presenting complicated seismic signals with multiple sources, such as aftershocks within the coda of large earthquakes, and remote triggering of earthquakes and tremor by large teleseismic earthquakes. Previous studies mostly focused on converting the seismic data into audible files by simple time compression or frequency modulation (Simpson et al., 2009). Here we generate animations of the seismic data together with the sounds. We first read seismic data in the SAC format into Matlab, and generate a sequence of image files and an associated WAV sound file. Next, we use a third party video editor, such as the QuickTime Pro, to combine the image sequences and the sound file into an animation. We have applied this simple procedure to generate animations of remotely triggered earthquakes, tremor and low-frequency earthquakes in California, and mainshock-aftershock sequences in Japan and California. These animations clearly demonstrate the interactions of earthquake sequences and the richness of the seismic data. The tool developed in this study can be easily adapted for use in other research applications and to create sonification/animation of seismic data for education and outreach purpose.

  1. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

    This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
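
    The mask-generation step can be sketched as follows (the Gaussian width, line count and undersampling ratio are assumptions): phase-encode line indices are drawn by inverse transform sampling of a chosen probability density so that the centre of k-space is sampled more densely than the periphery.

        # Variable-density undersampling mask by inverse transform sampling (sketch).
        import numpy as np

        def undersampling_mask(n_lines=256, ratio=0.3, scale=0.15, rng=None):
            """Pick ~ratio*n_lines phase-encode lines from a Gaussian-shaped density."""
            if rng is None:
                rng = np.random.default_rng(0)
            x = np.linspace(-0.5, 0.5, n_lines)
            pdf = np.exp(-0.5 * (x / scale) ** 2)          # Gaussian density over k-space
            cdf = np.cumsum(pdf / pdf.sum())
            keep = set()
            while len(keep) < int(ratio * n_lines):
                u = rng.random()
                keep.add(int(np.searchsorted(cdf, u)))     # inverse transform: CDF^-1(u)
            mask = np.zeros(n_lines, dtype=bool)
            mask[list(keep)] = True
            return mask

        mask = undersampling_mask()
        print(mask.sum(), "of", mask.size, "phase-encode lines kept")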

  2. NRGC: a novel referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but also can be done in a laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting edge computing technology makes it possible to build genomic sequences from the billions of reads within a minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable to compress biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress the genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most of the cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip. Contact: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Sub-band/transform compression of video sequences

    NASA Technical Reports Server (NTRS)

    Sauer, Ken; Bauer, Peter

    1992-01-01

    The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration, whether coding is to be done in two or three dimensions, is the form of the coders to be applied to each sub-band. Computational simplicity is of essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder limited to intra-frame application is discussed. Perhaps the most critical component of the sub-band structure is the design of bandsplitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled, and quincunx sampled images. We will also cover the techniques we have studied for the coding of the resulting bandpass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.

  4. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are increasingly utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform a subjective test of text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction for achieving higher compression ratios for specific semantic analysis tasks.

  5. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
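
    The three steps can be illustrated for a single codec as follows (Pillow for JPEG and scikit-image for SSIM are assumed stand-ins for the paper's toolchain; the test image and parameter grid are arbitrary): sweep the quality parameter, fit a regression of SSIM against quality, and invert it to pick the setting that meets a target IQ.

        # Sweep JPEG quality, regress SSIM vs. quality, then pick a quality for a
        # target SSIM (illustrative; Pillow and scikit-image are assumed available).
        import io
        import numpy as np
        from PIL import Image
        from skimage.metrics import structural_similarity as ssim

        rng = np.random.default_rng(0)
        gray = np.clip(rng.normal(128, 30, (256, 256)), 0, 255).astype(np.uint8)
        ref = Image.fromarray(gray)

        qualities, scores = [], []
        for q in range(20, 96, 5):                    # step 1: compress with varying parameters
            buf = io.BytesIO()
            ref.save(buf, format="JPEG", quality=q)
            buf.seek(0)
            dec = np.asarray(Image.open(buf))
            qualities.append(q)
            scores.append(ssim(gray, dec, data_range=255))

        coeff = np.polyfit(qualities, scores, deg=2)  # step 2: regression of IQ vs. parameter

        def quality_for_ssim(target):                 # step 3: invert the regression model
            qs = np.arange(20, 96)
            return int(qs[np.argmin(np.abs(np.polyval(coeff, qs) - target))])

        print("use JPEG quality", quality_for_ssim(0.8), "for SSIM ~ 0.8")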

  6. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, methods for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
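
    A hedged sketch of the statistics-gathering and table-building steps (illustrative only, not the patented procedure; scipy's DCT is assumed): collect 8x8 block DCT coefficients, then tabulate, per coefficient position and candidate quantization value, the distortion and an entropy-based rate proxy that a dynamic program could later trade off.

        # Gather 8x8 DCT statistics and build distortion / rate tables per
        # candidate quantization value (sketch; not the patented procedure).
        import numpy as np
        from scipy.fft import dctn

        rng = np.random.default_rng(0)
        img = rng.normal(128, 40, (64, 64)).clip(0, 255)          # stand-in image
        blocks = img.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
        coeffs = np.stack([dctn(b - 128.0, norm="ortho") for b in blocks])

        q_candidates = np.arange(1, 65)
        distortion = np.zeros((8, 8, q_candidates.size))
        rate = np.zeros_like(distortion)
        for u in range(8):
            for v in range(8):
                c = coeffs[:, u, v]
                for qi, q in enumerate(q_candidates):
                    levels = np.round(c / q)
                    distortion[u, v, qi] = np.mean((c - q * levels) ** 2)
                    _, counts = np.unique(levels, return_counts=True)
                    p = counts / counts.sum()
                    rate[u, v, qi] = -(p * np.log2(p)).sum()       # entropy as a rate proxy

        # e.g. the largest candidate q that keeps DC-coefficient distortion below 4.0
        ok = distortion[0, 0] < 4.0
        print("max q with DC distortion < 4:", q_candidates[ok][-1] if ok.any() else None)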

  7. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.

  8. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  9. A New Challenge for Compression Algorithms: Genetic Sequences.

    ERIC Educational Resources Information Center

    Grumbach, Stephane; Tahi, Fariza

    1994-01-01

    Analyzes the properties of genetic sequences that cause the failure of classical algorithms used for data compression. A lossless algorithm, which compresses the information contained in DNA and RNA sequences by detecting regularities such as palindromes, is presented. This algorithm combines substitutional and statistical methods and appears to…

  10. Buckling Test Results from the 8-Foot-Diameter Orthogrid-Stiffened Cylinder Test Article TA01. [Test Dates: 19-21 November 2008]

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Waters, W. Allen, Jr.; Haynie, Waddy T.

    2015-01-01

    Results from the testing of cylinder test article SBKF-P2-CYLTA01 (referred to herein as TA01) are presented. The testing was conducted at the Marshall Space Flight Center (MSFC), November 19-21, 2008, in support of the Shell Buckling Knockdown Factor (SBKF) Project. The test was used to verify the performance of a newly constructed buckling test facility at MSFC and to verify the test article design and analysis approach used by the SBKF project researchers. TA01 is an 8-foot-diameter (96-inch), 78.0-inch-long, aluminum-lithium (Al-Li), orthogrid-stiffened cylindrical shell similar to those used in current state-of-the-art launch vehicle structures and was designed to exhibit global buckling when subjected to compression loads. Five different load sequences were applied to TA01 during testing and included four sub-critical load sequences, i.e., loading conditions that did not cause buckling or material failure, and one final load sequence to buckling and collapse. The sub-critical load sequences consisted of either uniform axial compression loading or combined axial compression and bending, and the final load sequence subjected TA01 to uniform axial compression. Traditional displacement transducers and strain gages were used to monitor the test article response at nearly 300 locations, and an advanced digital image correlation system was used to obtain low-speed and high-speed full-field displacement measurements of the outer surface of the test article. Overall, the test facility and test article performed as designed. In particular, the test facility successfully applied all desired load combinations to the test article and was able to test safely into the postbuckling range of loading, and the test article failed by global buckling. In addition, the test results correlated well with initial pretest predictions.

  11. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have steadily lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding scheme (CABAC), to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh. Contact: tom.paridaens@ugent.be. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  12. Time-of-flight camera via a single-pixel correlation image sensor

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth-map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the ‘four bucket principle’ method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in the reconstructions.
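
    A brief sketch of the 'four bucket' demodulation referred to above (standard continuous-wave ToF relations; the paper's sensor details and compressed-sensing reconstruction are not reproduced, and the modulation frequency is an assumption): four correlation samples at 0°, 90°, 180° and 270° give the phase, and the phase gives depth up to the ambiguity range.

        # 'Four bucket' phase/depth recovery for a continuous-wave ToF pixel (sketch).
        import numpy as np

        C_LIGHT = 3.0e8            # m/s
        F_MOD = 20e6               # modulation frequency (assumed); ambiguity range c/(2f)

        def depth_from_buckets(c0, c90, c180, c270, f_mod=F_MOD):
            phase = np.arctan2(c90 - c270, c0 - c180)       # demodulated phase offset
            phase = np.mod(phase, 2 * np.pi)                # fold into [0, 2*pi)
            return C_LIGHT * phase / (4 * np.pi * f_mod)    # distance = c*phi / (4*pi*f)

        # simulate a pixel at 3.2 m with an ambient offset and modulation amplitude
        true_d = 3.2
        phi = 4 * np.pi * F_MOD * true_d / C_LIGHT
        ambient, amp = 5.0, 2.0
        buckets = [ambient + amp * np.cos(phi - k * np.pi / 2) for k in range(4)]
        print("recovered depth:", depth_from_buckets(*buckets))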

  13. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R² = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R² = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R² = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.

  14. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    PubMed

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
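
    As a rough illustration of the log-Jacobian expansion/compression map described in these two records, the sketch below computes the log of the Jacobian determinant of a generic 3-D displacement field with finite differences and indicates how a cross-sectional average could be formed. It assumes a dense displacement field produced by some registration step and is not the inverse consistent registration code itself.

      import numpy as np

      def log_jacobian(disp, spacing=(1.0, 1.0, 1.0)):
          """Log of the Jacobian determinant of the transform x -> x + disp(x).
          disp has shape (3, Z, Y, X); derivatives use central differences."""
          grad = np.empty((3, 3) + disp.shape[1:])
          for i in range(3):                       # displacement component
              for j in range(3):                   # axis of differentiation
                  grad[i, j] = np.gradient(disp[i], spacing[j], axis=j)
                  if i == j:
                      grad[i, j] += 1.0            # identity part of dT/dx
          jac = np.linalg.det(np.moveaxis(grad, (0, 1), (-2, -1)))
          return np.log(np.clip(jac, 1e-6, None))  # clip guards against folding

      # per-time-step scalar summary over one axial cross section (slice index z):
      # disp = ...                     # (3, Z, Y, X) field from any registration
      # expansion = log_jacobian(disp)[z].mean()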

  15. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  16. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
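
    For readers working in Python, the same FITS tiled image compression convention that fpack writes can also be exercised through astropy.io.fits; the snippet below is an illustration of that route (astropy is an assumption of this sketch, not part of the fpack/funpack distribution).

      import numpy as np
      from astropy.io import fits

      # write a Rice-compressed, tiled FITS image (same convention fpack uses)
      image = np.random.poisson(100, size=(512, 512)).astype(np.int32)
      hdu = fits.CompImageHDU(data=image, compression_type='RICE_1')
      hdu.writeto('rice_compressed.fits', overwrite=True)

      # reading transparently decompresses the tiles
      with fits.open('rice_compressed.fits') as hdul:
          restored = hdul[1].data                       # stored in a binary table
          assert np.array_equal(restored, image)        # Rice on integers is lossless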

  17. 3D single point imaging with compressed sensing provides high temporal resolution R2* mapping for in vivo preclinical applications.

    PubMed

    Rioux, James A; Beyea, Steven D; Bowen, Chris V

    2017-02-01

    Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.
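
    R2* relaxometry from a multi-echo acquisition such as TurboSPI reduces, in its simplest form, to a voxel-wise mono-exponential fit. The sketch below shows a log-linear least-squares version on synthetic data; the echo times and the large R2* value are illustrative assumptions, and the record's iterative CS reconstruction is not reproduced.

      import numpy as np

      def fit_r2star(echo_times, signal):
          """Voxel-wise mono-exponential fit S(TE) = S0 * exp(-R2* * TE),
          solved as a log-linear least-squares problem.
          echo_times: (N,) seconds; signal: (N, ...) magnitudes."""
          te = np.asarray(echo_times, dtype=float)
          logs = np.log(np.clip(signal, 1e-9, None)).reshape(len(te), -1)
          A = np.stack([np.ones_like(te), -te], axis=1)   # log S = log S0 - R2*.TE
          coeffs, *_ = np.linalg.lstsq(A, logs, rcond=None)
          return coeffs[1].reshape(signal.shape[1:])      # the R2* map

      # toy check with R2* = 400 1/s (a large-R2* system such as iron-loaded cells)
      te = np.arange(1, 9) * 1e-3                         # 1..8 ms echoes
      sig = 100.0 * np.exp(-400.0 * te)
      print(fit_r2star(te, sig[:, None]))                 # ~[400.]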

  18. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  19. Technique development of 3D dynamic CS-EPSI for hyperpolarized 13C pyruvate MR molecular imaging of human prostate cancer.

    PubMed

    Chen, Hsin-Yu; Larson, Peder E Z; Gordon, Jeremy W; Bok, Robert A; Ferrone, Marcus; van Criekinge, Mark; Carvajal, Lucas; Cao, Peng; Pauly, John M; Kerr, Adam B; Park, Ilwoo; Slater, James B; Nelson, Sarah J; Munster, Pamela N; Aggarwal, Rahul; Kurhanewicz, John; Vigneron, Daniel B

    2018-03-25

    The purpose of this study was to develop a new 3D dynamic carbon-13 compressed sensing echoplanar spectroscopic imaging (EPSI) MR sequence and test it in phantoms, animal models, and then in prostate cancer patients to image the metabolic conversion of hyperpolarized [1-13C]pyruvate to [1-13C]lactate with whole gland coverage at high spatial and temporal resolution. A 3D dynamic compressed sensing (CS)-EPSI sequence with spectral-spatial excitation was designed to meet the required spatial coverage, time and spatial resolution, and RF limitations of the 3T MR scanner for its clinical translation for prostate cancer patient imaging. After phantom testing, animal studies were performed in rats and transgenic mice with prostate cancers. For patient studies, a GE SPINlab polarizer (GE Healthcare, Waukesha, WI) was used to produce hyperpolarized sterile GMP [1-13C]pyruvate. 3D dynamic 13C CS-EPSI data were acquired starting 5 s after injection throughout the gland with a spatial resolution of 0.5 cm3, 18 time frames, 2-s temporal resolution, and 36 s total acquisition time. Through preclinical testing, the 3D CS-EPSI sequence developed in this project was shown to provide the desired spectral, temporal, and spatial 5D HP 13C MR data. In human studies, the 3D dynamic HP CS-EPSI approach provided first-ever simultaneously volumetric and dynamic images of the LDH-catalyzed conversion of [1-13C]pyruvate to [1-13C]lactate in a biopsy-proven prostate cancer patient with full gland coverage. The results demonstrate the feasibility to characterize prostate cancer metabolism in animals, and now patients using this new 3D dynamic HP MR technique to measure kPL, the kinetic rate constant of [1-13C]pyruvate to [1-13C]lactate conversion. © 2018 International Society for Magnetic Resonance in Medicine.
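
    Estimating kPL from dynamic pyruvate and lactate curves is commonly done with a two-site exchange model. The sketch below assumes a simplified unidirectional model, dL/dt = kPL*P(t) - R1L*L(t), and fits kPL by least squares on simulated 2-s frames. It is only a toy stand-in for the kinetic analysis mentioned above; the model form, R1L value, and bolus shape are assumptions.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def lactate_model(k_pl, pyr, dt, r1l=1.0 / 25.0):
          """Forward-integrate dL/dt = k_pl * P(t) - R1L * L(t) on the measured
          pyruvate curve (simple unidirectional exchange, assumed here)."""
          lac = np.zeros_like(pyr)
          for i in range(len(pyr) - 1):
              lac[i + 1] = lac[i] + dt * (k_pl * pyr[i] - r1l * lac[i])
          return lac

      def fit_kpl(pyr, lac_meas, dt):
          cost = lambda k: np.sum((lactate_model(k, pyr, dt) - lac_meas) ** 2)
          return minimize_scalar(cost, bounds=(0.0, 1.0), method='bounded').x

      # toy data: 18 frames at 2-s temporal resolution, as in the acquisition above
      dt = 2.0
      t = np.arange(18) * dt
      pyr = np.exp(-((t - 10.0) / 8.0) ** 2)              # assumed bolus shape
      lac = lactate_model(0.02, pyr, dt)                  # simulate with kPL = 0.02 1/s
      print(round(fit_kpl(pyr, lac, dt), 4))              # recovers ~0.02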

  20. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectra or depth are referred to as 4D images. Nature itself spans different physical domains, thus imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels, in real time, by the use of color sensors. Capturing multiple spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene. Then, different fusion techniques are employed to mix all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information), using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital-micromirror-device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures will be presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded aperture-based CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial+1D spectral+depth imaging) in as little as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
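
    The CSI reconstruction step described above amounts to solving a sparse recovery problem from random projections. The following generic sketch uses a Gaussian sensing matrix as a stand-in for a coded-aperture forward model and recovers a sparse vector with ISTA (iterative shrinkage-thresholding); it is not the dissertation's reconstruction code, and all sizes and the regularization weight are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 80, 8                        # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      H = rng.standard_normal((m, n)) / np.sqrt(m)   # random projections (stand-in
      y = H @ x_true                                 # for a coded-aperture model)

      def ista(H, y, lam=0.01, iters=500):
          """Iterative shrinkage-thresholding for min ||Hx - y||^2 + lam * ||x||_1."""
          step = 1.0 / np.linalg.norm(H, 2) ** 2     # 1 / Lipschitz constant
          x = np.zeros(H.shape[1])
          for _ in range(iters):
              z = x - step * (H.T @ (H @ x - y))                          # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)    # soft threshold
          return x

      x_hat = ista(H, y)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))      # small residual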

  1. Toward a Better Compression for DNA Sequences Using Huffman Encoding

    PubMed Central

    Almarri, Badar; Al Yami, Sultan; Huang, Chun-Hsi

    2017-01-01

    Due to the significant amount of DNA data that are being generated by next-generation sequencing machines for genomes of lengths ranging from megabases to gigabases, there is an increasing need to compress such data into less space and transmit it faster. Different implementations of Huffman encoding incorporating the characteristics of DNA sequences prove to better compress DNA data. These implementations center on the concepts of selecting frequent repeats so as to force a skewed Huffman tree, as well as the construction of multiple Huffman trees when encoding. The implementations demonstrate improvements on the compression ratios for five genomes with lengths ranging from 5 to 50 Mbp, compared with the standard Huffman tree algorithm. The research hence suggests an improvement on all DNA sequence compression algorithms that use the conventional Huffman encoding. Accompanying software is publicly available (AL-Okaily, 2016). PMID:27960065

  2. Toward a Better Compression for DNA Sequences Using Huffman Encoding.

    PubMed

    Al-Okaily, Anas; Almarri, Badar; Al Yami, Sultan; Huang, Chun-Hsi

    2017-04-01

    Due to the significant amount of DNA data that are being generated by next-generation sequencing machines for genomes of lengths ranging from megabases to gigabases, there is an increasing need to compress such data into less space and transmit it faster. Different implementations of Huffman encoding incorporating the characteristics of DNA sequences prove to better compress DNA data. These implementations center on the concepts of selecting frequent repeats so as to force a skewed Huffman tree, as well as the construction of multiple Huffman trees when encoding. The implementations demonstrate improvements on the compression ratios for five genomes with lengths ranging from 5 to 50 Mbp, compared with the standard Huffman tree algorithm. The research hence suggests an improvement on all DNA sequence compression algorithms that use the conventional Huffman encoding. Accompanying software is publicly available (AL-Okaily, 2016).
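
    As a baseline for the Huffman-based schemes in these two records, the sketch below builds a standard Huffman code over the four-nucleotide alphabet and encodes a toy sequence; the papers' refinements (repeat-driven skewed trees, multiple trees) are not reproduced here.

      import heapq
      from collections import Counter

      def huffman_code(freqs):
          """Standard Huffman construction; returns {symbol: bitstring}."""
          # heap entries: [weight, tiebreak, [symbol, code], [symbol, code], ...]
          heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(freqs.items())]
          heapq.heapify(heap)
          tiebreak = len(heap)
          while len(heap) > 1:
              lo, hi = heapq.heappop(heap), heapq.heappop(heap)
              for pair in lo[2:]:
                  pair[1] = "0" + pair[1]
              for pair in hi[2:]:
                  pair[1] = "1" + pair[1]
              tiebreak += 1
              heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
          return dict(heapq.heappop(heap)[2:])

      seq = "ACGTACGTAAAACCGGTTAAAAAAGGGCTTTAAC"
      code = huffman_code(Counter(seq))
      encoded = "".join(code[base] for base in seq)
      print(code, len(encoded), "bits vs", 2 * len(seq), "bits for a fixed 2-bit code")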

  3. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
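
    A rough sketch of the pipeline described in this record, using Pillow as a stand-in implementation: decimate in both dimensions, apply JPEG as the predefined compression algorithm, decompress, interpolate back to the original size, and sharpen edges with an unsharp mask. The resampling filters, JPEG quality, and sharpening parameters are illustrative choices, not those of the patent.

      import io
      from PIL import Image, ImageFilter

      def compress_reduced(img, factor=2, quality=75):
          """Decimate in both dimensions, then JPEG-encode the reduced image."""
          small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
          buf = io.BytesIO()
          small.save(buf, format="JPEG", quality=quality)
          return buf.getvalue()

      def decompress_expanded(jpeg_bytes, size):
          """Inverse path: JPEG-decode, interpolate back up, sharpen edges."""
          small = Image.open(io.BytesIO(jpeg_bytes))
          big = small.resize(size, Image.BICUBIC)
          return big.filter(ImageFilter.UnsharpMask(radius=2, percent=120))

      # img = Image.open("input.png").convert("L")
      # payload = compress_reduced(img)                   # what would be transmitted
      # restored = decompress_expanded(payload, img.size)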

  4. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
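
    The core idea of wavelet-based spatial compression, transforming the image and keeping only the significant coefficients, can be sketched with PyWavelets as below; the wavelet, decomposition level, and retained fraction are arbitrary choices, and the patent's block and spectral compression stages are not shown.

      import numpy as np
      import pywt

      def wavelet_compress(image, keep=0.05, wavelet="db4", level=3):
          """Zero all but the largest `keep` fraction of wavelet coefficients."""
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), 1.0 - keep)
          arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
          coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
          return pywt.waverec2(coeffs, wavelet)

      image = np.random.rand(128, 128) + np.linspace(0.0, 5.0, 128)   # noise + trend
      approx = wavelet_compress(image, keep=0.05)
      rmse = np.sqrt(np.mean((approx[:128, :128] - image) ** 2))
      print(rmse)                                   # small compared to the image range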

  5. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
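
    The feature extraction step of compressive tracking rests on a very sparse random measurement matrix. The sketch below builds such a matrix and projects flattened patches into a low-dimensional compressed feature space; the multiscale filter bank, naive Bayes classifier, and online update of the paper are omitted, and the sparsity setting is an assumption.

      import numpy as np

      rng = np.random.default_rng(1)

      def sparse_measurement_matrix(n_features, n_pixels, s=None):
          """Very sparse random projection: entries are +-sqrt(s) with
          probability 1/(2s) each and 0 otherwise (s is an assumed setting)."""
          s = s or n_pixels // 4
          p = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
          return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                            size=(n_features, n_pixels), p=p)

      def compressed_features(patch, R):
          """Project a flattened image patch into the compressed feature space."""
          return R @ patch.ravel()

      R = sparse_measurement_matrix(50, 32 * 32)
      patch_fg = rng.random((32, 32))                   # stand-in foreground sample
      patch_bg = rng.random((32, 32))                   # stand-in background sample
      print(compressed_features(patch_fg, R)[:5])
      print(compressed_features(patch_bg, R)[:5])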

  6. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  7. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with those of conventional and prone compression in general radiography. An experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.

  8. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study was aimed to validate the performance of a novel image compression method using a neural network to achieve a lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching; and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method; and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than the conventional methods. This novel method should improve the efficiency of handling of the increasing volume of medical imaging data.

  9. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  10. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  11. High-speed large angle mammography tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Eberhard, Jeffrey W.; Staudinger, Paul; Smolenski, Joe; Ding, Jason; Schmitz, Andrea; McCoy, Julie; Rumsey, Michael; Al-Khalidy, Abdulrahman; Ross, William; Landberg, Cynthia E.; Claus, Bernhard E. H.; Carson, Paul; Goodsitt, Mitchell; Chan, Heang-Ping; Roubidoux, Marilyn; Thomas, Jerry A.; Osland, Jacqueline

    2006-03-01

    A new mammography tomosynthesis prototype system that acquires 21 projection images over a 60 degree angular range in approximately 8 seconds has been developed and characterized. Fast imaging sequences are facilitated by a high power tube and generator for faster delivery of the x-ray exposure and a high speed detector read-out. An enhanced a-Si/CsI flat panel digital detector provides greater DQE at low exposure, enabling tomo image sequence acquisitions at total patient dose levels between 150% and 200% of the dose of a standard mammographic view. For clinical scenarios where a single MLO tomographic acquisition per breast may replace the standard CC and MLO views, total tomosynthesis breast dose is comparable to or below the dose in standard mammography. The system supports co-registered acquisition of x-ray tomosynthesis and 3-D ultrasound data sets by incorporating an ultrasound transducer scanning system that flips into position above the compression paddle for the ultrasound exam. Initial images acquired with the system are presented.

  12. Implementation of image transmission server system using embedded Linux

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee

    2005-12-01

    In this paper, we implemented an image transmission server system using an embedded system that is dedicated to a specific task and is easy to install and move. Since the embedded system has lower capability than a PC, we had to reduce the amount of computation required for baseline JPEG image compression and transmission. We used the Redhat Linux 9.0 OS on the host PC and a target board based on embedded Linux. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an ALTERA chip. For effectiveness, and to avoid some constraints imposed by the vendor's own software, we wrote the device driver as a kernel module.

  13. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  14. Spinal Subdural Haematoma.

    PubMed

    Manish K, Kothari; Chandrakant, Shah Kunal; Abhay M, Nene

    2015-01-01

    Spinal subdural hematoma is a rare cause of radiculopathy and spinal cord compression syndromes. Its early diagnosis is essential. The chronological appearance of these bleeds varies on MRI. A 56-year-old man presented with progressive left lower limb radiculopathy and paraesthesias with claudication of three days' duration. MRI revealed a subdural space-occupying lesion compressing the cauda equina at the L5-S1 level, producing a 'Y'-shaped dural sac (Y sign), which was hyperintense on T1W imaging and hypointense to cord on T2W imaging. The STIR sequence showed hyperintensity relative to cord. There was no history of bleeding diathesis. The patient underwent decompressive durotomy and biopsy, which confirmed the diagnosis. Spinal subdural hematoma may present with rapidly progressive neurological symptoms. MRI is the investigation of choice. Knowledge of the MRI appearance with respect to the chronological stage of the bleed is essential to avoid diagnostic, and hence surgical, dilemmas.

  15. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
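
    The first approach above, treating predictive-coding residuals as a textural description, can be illustrated with a toy index: compute the residuals of a simple causal predictor and compare images by their residual histograms. The predictor below (mean of the left and upper neighbors) is chosen for illustration and does not correspond to any particular lossless codec.

      import numpy as np

      def prediction_residuals(image):
          """Residuals of a simple causal predictor (mean of left and upper pixels)."""
          img = image.astype(np.int32)
          pred = np.zeros_like(img)
          pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2
          pred[0, 1:] = img[0, :-1]
          pred[1:, 0] = img[:-1, 0]
          return img - pred

      def residual_signature(image, bins=64):
          """Histogram of residuals, usable as a compressed-domain index key."""
          hist, _ = np.histogram(prediction_residuals(image), bins=bins,
                                 range=(-255, 255), density=True)
          return hist

      def signature_distance(a, b):
          return np.sum(np.abs(residual_signature(a) - residual_signature(b)))

      rng = np.random.default_rng(2)
      smooth = np.cumsum(rng.integers(0, 3, (64, 64)), axis=1).astype(np.uint8)
      noisy = rng.integers(0, 256, (64, 64)).astype(np.uint8)
      print(signature_distance(smooth, noisy) > signature_distance(smooth, smooth))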

  16. Iterative dictionary construction for compression of large DNA data sets.

    PubMed

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.

  17. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  18. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid wrong diagnoses due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.

  19. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that would not decrease the recognition accuracy. In general, the wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) are of high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, which is termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed with three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
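
    A toy version of the composite compression ratio (CCR) idea, using zlib on raw pixel bytes in place of JPEG on face images: compress the probe, each gallery image, and their concatenation, and pick the gallery entry with the largest ratio. Every detail here (zlib, the synthetic "faces", the concatenation axis) is a simplification for illustration only.

      import zlib
      import numpy as np

      def csize(arr):
          """Compressed size of the raw pixel bytes (zlib stands in for JPEG)."""
          return len(zlib.compress(arr.tobytes(), 6))

      def composite_ratio(probe, gallery):
          """Larger when the two images share content: the mixed image then
          compresses better than the two images compressed separately."""
          mixed = np.concatenate([probe, gallery], axis=0)
          return (csize(probe) + csize(gallery)) / csize(mixed)

      rng = np.random.default_rng(3)
      gallery_a = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # subject A
      gallery_b = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # subject B
      probe = gallery_a.copy()
      probe[:8] = rng.integers(0, 256, (8, 64), dtype=np.uint8)    # slight change

      scores = {"A": composite_ratio(probe, gallery_a),
                "B": composite_ratio(probe, gallery_b)}
      print(max(scores, key=scores.get))                           # picks "A"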

  20. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  1. Improving transmission efficiency of large sequence alignment/map (SAM) files.

    PubMed

    Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser

    2011-01-01

    Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply a maximum lossless compression on them. In this paper, we present SAMZIP, a specialized encoding scheme, for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools available. In order to achieve this, we exploit the prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves compression ratio, thereby reducing overall transmission time significantly.

  2. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  3. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  4. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differential image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
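
    A numeric toy illustration of why differencing against a registered reference helps, common to the three patent records above: the difference image carries far less information than the subject image, so a conventional compressor shrinks it much further. Registration and the 3-D modeling step are outside the scope of this sketch, and zlib stands in for the "conventional compression algorithms".

      import zlib
      import numpy as np

      def zsize(arr):
          return len(zlib.compress(arr.tobytes(), 9))

      rng = np.random.default_rng(4)
      reference = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-in anatomy
      subject = reference.astype(np.int16)
      subject[100:120, 100:140] += 20                  # small local change only
      subject = np.clip(subject, 0, 255).astype(np.uint8)

      difference = subject.astype(np.int16) - reference.astype(np.int16)

      print("subject alone:", zsize(subject), "bytes")
      print("difference   :", zsize(difference), "bytes (what would be transmitted)")

      # the receiver holds the reference and reconstructs the subject exactly
      assert np.array_equal(reference + difference, subject.astype(np.int16))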

  5. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing data that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/~ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  6. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
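
    The preprocessing step described above, quantizing floating-point pixels into scaled integer levels with dithering, can be sketched as follows. This is a plain subtractive-dither version with an arbitrary quantization step; the Rice coding and FITS packaging are left to tools such as fpack/funpack.

      import numpy as np

      def quantize_with_dither(pixels, q, seed=0):
          """Scale to integer levels of width q, adding a reproducible uniform
          dither before rounding (subtractive dithering)."""
          dither = np.random.default_rng(seed).random(pixels.shape) - 0.5
          return np.round(pixels / q + dither).astype(np.int32)

      def dequantize(levels, q, seed=0):
          dither = np.random.default_rng(seed).random(levels.shape) - 0.5
          return (levels - dither) * q

      rng = np.random.default_rng(5)
      image = 1000.0 + 5.0 * rng.standard_normal((256, 256))   # toy background + noise
      q = 1.0                                                  # quantization step
      restored = dequantize(quantize_with_dither(image, q), q)

      print("max abs error:", np.max(np.abs(restored - image)))   # bounded by q/2
      print("mean error   :", np.mean(restored - image))          # ~0, i.e. unbiased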

  7. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  8. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is then embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
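
    A minimal sketch of the embedding and verification idea: hash the image content outside a region of non-interest (RONI) and write the SHA-256 bits into LSBs inside the RONI. Using 256 pixels of an 8x64 strip at one bit per pixel is an assumption of this sketch, and the reversibility bookkeeping and JPEG robustness of the actual scheme are not reproduced.

      import hashlib
      import numpy as np

      def hash_bits(pixels):
          """SHA-256 of the pixel buffer as a 256-element 0/1 array."""
          digest = hashlib.sha256(pixels.tobytes()).digest()
          return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

      def embed(image, roni):
          """Write the hash of the non-RONI content into LSBs inside the RONI."""
          marked = image.copy()
          content = marked.copy()
          content[roni] = 0                          # hash covers everything else
          bits = hash_bits(content)
          flat = marked[roni].ravel()
          flat[:256] = (flat[:256] & 0xFE) | bits    # replace 256 LSBs
          marked[roni] = flat.reshape(marked[roni].shape)
          return marked

      def verify(marked, roni):
          content = marked.copy()
          content[roni] = 0
          stored = marked[roni].ravel()[:256] & 1
          return np.array_equal(stored, hash_bits(content))

      rng = np.random.default_rng(6)
      us_image = rng.integers(0, 256, (600, 800), dtype=np.uint8)   # toy US frame
      roni = (slice(0, 8), slice(0, 64))                            # 8x64 corner strip
      marked = embed(us_image, roni)
      print(verify(marked, roni))                                   # True
      marked[300, 400] ^= 1                                         # tamper one pixel
      print(verify(marked, roni))                                   # False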

  9. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.

  10. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747

  11. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
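
    The MSB/LSB split described above is straightforward to sketch: a 16 bit frame becomes two 8 bit byte-plane images that could each be handed to an 8 bit codec, and recombination is exact when both planes are coded losslessly (with lossy coding, the MSB plane would normally receive the higher quality). The codec step itself is only indicated by a comment here.

      import numpy as np

      def split_planes(img16):
          """Split a uint16 image into most- and least-significant byte images."""
          return (img16 >> 8).astype(np.uint8), (img16 & 0xFF).astype(np.uint8)

      def merge_planes(msb, lsb):
          return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

      rng = np.random.default_rng(7)
      ir_frame = rng.integers(0, 2 ** 16, (480, 640), dtype=np.uint16)   # toy IR frame

      msb, lsb = split_planes(ir_frame)
      # msb and lsb would each be handed to an 8-bit image/video codec at this point
      assert np.array_equal(merge_planes(msb, lsb), ir_frame)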

  12. Nerve compression and pain in human volunteers with narrow vs wide tourniquets

    PubMed Central

    Kovar, Florian M; Jaindl, Manuela; Oberleitner, Gerhard; Endler, Georg; Breitenseher, Julia; Prayer, Daniela; Kasprian, Gregor; Kutscha-Lissberg, Florian

    2015-01-01

    AIM: To assess the clinical effects and the morphological grade of nerve compression. METHODS: In a prospective, single-center, randomized, open study we assessed the clinical effects and the morphological grade of nerve compression during 20 min of placement of either a silicon ring (group A) or a pneumatic tourniquet (group B) on the non-dominant upper limb of 14 healthy human volunteers. Before and during compression, the median and radial nerves were visualized in both groups by 3 Tesla MR imaging, using high-resolution (2.5 mm slice thickness) axial T2-weighted sequences. RESULTS: In group A, the visual analog pain scale score was 5.4 ± 2.2 compared to 2.9 ± 2.5 in group B, a significant difference (P = 0.028). FPS levels in group A were 2.6 ± 0.9 compared to 1.6 ± 1 in group B, a significant difference (P = 0.039). Results related to measurable effects on median and radial nerve function were equal in both groups. No undue pressure signs on the skin, redness or nerve damage occurred in either group. There was no significant difference in the diameters of the nerves without and under compression in either group on T2-weighted images. CONCLUSION: Based on our results, no differences between narrow and wide tourniquets were identified. Silicon ring tourniquets can be regarded as safe for short-time application.

  13. Onboard Processor for Compressing HSI Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)

    2002-01-01

    With EO-1 Hyperion and MightySat in orbit NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100, while retaining the necessary spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial-off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will also be employed as part of the overall compression processing sequence. Over the next year elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed while minimizing the loss of information.

  14. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  15. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
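
    The two on-board degradation steps can be illustrated with a few lines of NumPy; the block-averaging factor, the random spectral response matrix and the array shapes below are assumptions made for the sketch, not the authors' settings.

```python
import numpy as np

def spatial_degrade(cube, factor=4):
    """Block-average an (H, W, B) hyperspectral cube by `factor` along both spatial axes."""
    h, w, b = cube.shape
    h2, w2 = h // factor, w // factor
    c = cube[:h2 * factor, :w2 * factor, :]
    return c.reshape(h2, factor, w2, factor, b).mean(axis=(1, 3))

def spectral_degrade(cube, srf):
    """Project the B bands onto a few broad channels via a (B, M) spectral response matrix."""
    return cube @ srf

# hypothetical 128-band cube reduced to a low-res HSI plus a 4-channel multispectral image
cube = np.random.rand(256, 256, 128)
low_res_hsi = spatial_degrade(cube, factor=4)        # small spatially, full spectrum
srf = np.random.dirichlet(np.ones(128), size=4).T    # placeholder spectral responses
high_res_msi = spectral_degrade(cube, srf)           # full resolution, few bands
```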

  16. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ...resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved...using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
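
    For reference, a minimal encoder for one common Lempel-Ziv variant (LZW) is sketched below; it is illustrative only and omits the bit-packing of the output codes.

```python
def lzw_encode(data: bytes):
    """Minimal LZW encoder: grow a dictionary of byte strings and emit their codes."""
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # extend the current match
        else:
            out.append(table[w])                  # emit code for the longest match
            table[wc] = next_code                 # add the new string to the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```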

  17. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.

  18. QualComp: a new lossy compressor for quality scores based on rate distortion theory

    PubMed Central

    2013-01-01

    Background Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
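
    As a toy illustration of trading a fixed rate against MSE, the sketch below uniformly quantizes Phred scores to 2^bits reconstruction levels; QualComp itself derives its quantizers from the data statistics, so this is a stand-in for the idea, not the published algorithm.

```python
import numpy as np

def quantize_quality(scores, bits_per_score, qmax=41):
    """Uniformly quantize Phred quality scores to 2**bits_per_score levels and report
    the resulting MSE. Illustrative only; not the QualComp codebook."""
    levels = 2 ** bits_per_score
    step = (qmax + 1) / levels
    scores = np.asarray(scores, dtype=np.float64)
    idx = np.minimum((scores / step).astype(int), levels - 1)   # transmitted symbols
    reconstructed = (idx + 0.5) * step                          # decoder's estimate
    return idx, np.mean((scores - reconstructed) ** 2)

# e.g. at 2 bits per score, 42 possible Phred values collapse to 4 reconstruction levels
```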

  19. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure.

    PubMed

    Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi

    2008-03-31

    With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are two fold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules.

  20. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure

    PubMed Central

    Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi

    2008-01-01

    Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are two fold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules. PMID:18373878

  1. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
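
    The flavor of a low-complexity adaptive-filtering predictor can be conveyed with a single LMS-updated weight that predicts each pixel from the co-located pixel in the previous band; this toy sketch is not the NASA algorithm, and the step size is an arbitrary assumption.

```python
import numpy as np

def lms_band_residuals(prev_band, cur_band, mu=1e-9):
    """Predict each pixel of cur_band from the co-located pixel of prev_band using a
    single LMS-adapted weight and return the integer residuals an entropy coder would
    compress. Toy stand-in for a low-complexity adaptive-filtering predictor."""
    x = prev_band.ravel().astype(np.float64)
    d = cur_band.ravel().astype(np.float64)
    w = 1.0
    residuals = np.empty(d.size, dtype=np.int64)
    for i in range(d.size):
        prediction = w * x[i]
        residuals[i] = int(round(d[i] - prediction))
        w += mu * (d[i] - prediction) * x[i]   # LMS update keeps the predictor adaptive
    return residuals.reshape(cur_band.shape)
```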

  2. Absolute Position Encoders With Vertical Image Binning

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2005-01-01

    Improved optoelectronic pattern-recognition encoders that measure rotary and linear 1-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates <15 Hz -- too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by use of vertically compressible or binnable (as described below) scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.

  3. Agreement between T2 and haste sequences in the evaluation of thoracolumbar intervertebral disc disease in dogs.

    PubMed

    Mankin, Joseph M; Hecht, Silke; Thomas, William B

    2012-01-01

    The purpose of this study was to compare half-Fourier-acquisition single-shot turbo spin-echo (HASTE) and T2-weighted (T2-W) sequences in dogs with thoracolumbar disc extrusion. MRI studies in 60 dogs (767 individual intervertebral disc spaces) were evaluated. Agreement between T2-W and HASTE sequences was assessed for two criteria: presence of an extradural lesion and treatment recommendation. There was moderate agreement between T2-W and HASTE sequences as to the presence of an extradural lesion (kappa = 0.575). HASTE was in agreement in 96.1% of the sites where no extradural lesion was identified on T2-W images, but in only 58.1% of the sites where extradural lesions were identified on T2-W images. There was also moderate agreement between T2-W and HASTE sequences as to treatment recommendations (kappa = 0.476). HASTE was in agreement in 98.4% of the sites where a lesion was considered nonsurgical on T2, but in only 82.1% of the sites where a lesion was considered surgical on T2. In 1.0% of sites considered not surgical and in 9.8% of sites considered equivocal based on T2-W images, a surgical lesion was identified on HASTE. Acquisition of a HASTE sequence in addition to conventional sequences may be beneficial in determining the severity of spinal cord compression in some cases when evaluating the canine spine.

  4. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
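
    A minimal sketch of quantizing floating-point pixels to scaled integers with subtractive dithering, assuming NumPy arrays and a caller-chosen quantization step q (e.g. a fraction of the measured background noise); the Rice entropy-coding stage is omitted.

```python
import numpy as np

def quantize_with_dither(pixels, q, seed=0):
    """Quantize float pixels to scaled integers with subtractive dithering.
    The same seed reproduces the dither pattern on the decoding side."""
    rng = np.random.default_rng(seed)
    dither = rng.random(pixels.shape) - 0.5          # uniform in [-0.5, 0.5)
    ints = np.round(pixels / q + dither).astype(np.int32)   # values to entropy-code
    restored = (ints - dither) * q                   # decoder's reconstruction
    return ints, restored
```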

  5. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  6. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  7. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio region and the reconstructed fingerprint image yields proper classification.
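
    The front end of such a codec can be sketched with PyWavelets: a multiresolution decomposition followed by uniform scalar quantization and reconstruction. The wavelet, level count and step size below are placeholders, and the entropy/run-length stage and the K-means feature clustering are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_quantize(image, wavelet="db4", levels=3, step=8.0):
    """Multiresolution wavelet decomposition, uniform scalar quantization of the
    coefficients, then dequantization and reconstruction (entropy coding omitted)."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step)]
    for (ch, cv, cd) in coeffs[1:]:
        quantized.append(tuple(np.round(c / step) for c in (ch, cv, cd)))
    # decoder side: dequantize and invert the transform
    dequant = [quantized[0] * step] + [tuple(c * step for c in t) for t in quantized[1:]]
    return pywt.waverec2(dequant, wavelet)
```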

  8. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
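
    The core shrinkage step of such reconstructions is the soft-thresholding operator; the joint (cross-channel) variant below is a simplified sketch of the kind of operation l₁-SPIRiT applies in the wavelet domain, not the full iterative solver.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def joint_soft_threshold(coeffs, lam):
    """Joint (group) soft-thresholding across channels: shrink the cross-channel
    magnitude, which promotes joint sparsity. `coeffs` has shape (n_channels, ...)."""
    mag = np.sqrt(np.sum(np.abs(coeffs) ** 2, axis=0, keepdims=True))
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return coeffs * scale
```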

  9. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

    The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
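
    Average-intensity-projection reformatting itself is a simple averaging of consecutive thin sections; a NumPy sketch with assumed array shapes is shown below.

```python
import numpy as np

def average_intensity_projection(thin_sections, group):
    """Average every `group` consecutive thin sections (stacked as an (N, H, W) array)
    into one slab AIP image, e.g. group of about 7 to turn 0.67 mm sections into ~5 mm slabs."""
    n = (thin_sections.shape[0] // group) * group
    stack = thin_sections[:n].reshape(-1, group, *thin_sections.shape[1:])
    return stack.mean(axis=1)
```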

  10. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality of two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and their characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.

  11. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to reduce significantly the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method employing differential pulse code modulation (DPCM) is presented.
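
    A minimal 1-D DPCM sketch, assuming a previous-sample predictor; real schemes use 2-D predictors and entropy-code the residuals, but the encode/decode symmetry is the same.

```python
import numpy as np

def dpcm_encode(row):
    """1-D previous-sample DPCM: transmit the first sample plus successive differences."""
    row = np.asarray(row, dtype=np.int32)
    return np.concatenate(([row[0]], np.diff(row)))

def dpcm_decode(codes):
    """Invert the prediction by cumulative summation."""
    return np.cumsum(codes)

row = np.array([100, 102, 101, 105, 110], dtype=np.int32)
assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)   # lossless round trip
```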

  12. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  13. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
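
    The role of the quantization matrix can be sketched as follows, using SciPy's DCT on an 8x8 block; the flat placeholder matrix stands in for the image-adapted, visually weighted matrix the invention computes.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_block(block, qmatrix):
    """Quantize an 8x8 DCT block with a quantization matrix and return the
    reconstruction the decoder would produce."""
    coeffs = dct2(block.astype(np.float64) - 128.0)
    q = np.round(coeffs / qmatrix)           # larger matrix entries discard more detail
    return idct2(q * qmatrix) + 128.0

qmatrix = np.full((8, 8), 16.0)              # placeholder; a visually weighted matrix varies per frequency
block = np.random.randint(0, 256, (8, 8))
recon = quantize_block(block, qmatrix)
```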

  14. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  15. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provide, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks of data: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblock within the same P slice, and all the luma and chroma DC coefficients belonging to the all the macroblocks within the same slice, (4) a block containing all the ac coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
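
    A toy version of the start-code scan in SEH264Algorithm2 is sketched below; it operates on bytes rather than bits and uses a hash-derived XOR keystream purely as a placeholder cipher, so it is illustrative rather than a secure or standard-compliant design.

```python
import hashlib

def find_start_codes(stream: bytes):
    """Return byte offsets of H.264 Annex-B start codes (0x000001)."""
    positions, i = [], 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)
        if i < 0:
            return positions
        positions.append(i)
        i += 3

def selectively_encrypt(stream: bytes, key: bytes, n: int = 16):
    """XOR the n bytes following each start code with a hash-derived keystream.
    Applying the same function again with the same key restores the stream."""
    data = bytearray(stream)
    for pos in find_start_codes(stream):
        ks = hashlib.sha256(key + pos.to_bytes(8, "big")).digest()
        for j in range(n):
            k = pos + 3 + j
            if k < len(data):
                data[k] ^= ks[j % len(ks)]
    return bytes(data)
```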

  16. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as the video capture/compression, the acquisition/compression of each physiological signal, and the video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of signal's samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.

  17. Mache: No-Loss Trace Compaction

    DTIC Science & Technology

    1988-09-15

    Data Compression. IEEE Computer 17, 6 (June 1984), 8-19. 10. ZIV, J. AND LEMPEL, A. A Universal Algorithm for Sequential Data Compression. IEEE... compression scheme which takes advantage of repeating patterns in the sequence of bytes. I have used the Lempel-Ziv compression algorithm [9,10,11...Transactions on Information Theory 23 (1976), 75-81. 11. ZIV, J. AND LEMPEL, A. Compression of Individual Sequences via Variable-

  18. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
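
    The idea behind combinatorial (enumerative) coding can be sketched in a few lines: a binary block with k ones is represented by k plus the lexicographic rank of the pattern among the C(n, k) possibilities, which needs only integer arithmetic. The sketch below is a generic enumerative coder, not the C4 implementation.

```python
from math import comb

def enumerative_rank(bits):
    """Return (k, rank): the number of ones and the lexicographic rank of the binary
    pattern `bits` among all length-n patterns with k ones (0 sorts before 1)."""
    n, k = len(bits), sum(bits)
    rank, ones_seen = 0, 0
    for j, b in enumerate(bits):                 # j is 0-based
        if b:
            # count all patterns that agree so far but have a 0 in this position
            rank += comb(n - (j + 1), k - ones_seen)
            ones_seen += 1
    return k, rank

# e.g. enumerative_rank([0, 0, 1]) == (1, 0) and enumerative_rank([1, 0, 0]) == (1, 2)
```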

  19. Observations of disconnection of open coronal magnetic structures

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Phillips, J. L.; Hundhausen, A. J.; Burkepile, J. T.

    1991-01-01

    The solar maximum mission coronagraph/polarimeter observations are surveyed for evidence of magnetic disconnection of previously open magnetic structures and several sequences of images consistent with this interpretation are identified. Such disconnection occurs when open field lines above helmet streamers reconnect, in contrast to previously suggested disconnections of CMEs into closed plasmoids. In this paper a clear example of open field disconnection is shown in detail. The event, on June 27, 1988, is preceded by compression of a preexisting helmet streamer and the open coronal field around it. The compressed helmet streamer and surrounding open field region detach in a large U-shaped structure which subsequently accelerates outward from the sun. The observed sequence of events is consistent with reconnection across the heliospheric current sheet and the creation of a detached U-shaped magnetic structure. Unlike CMEs, which may open new magnetic flux into interplanetary space, this process could serve to close off previously open flux, perhaps helping to maintain the roughly constant amount of open magnetic flux observed in interplanetary space.

  20. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
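
    In the spirit of the proposed scheme, the sketch below stores the tissue region losslessly and the empty background lossily using Pillow; the two-file layout, file names and quality setting are illustrative assumptions, and the input is assumed to be an 8-bit image with a boolean tissue mask.

```python
import numpy as np
from PIL import Image

def roi_hybrid_compress(img, mask, roi_png="tissue_roi.png", bg_jpg="background.jpg"):
    """Save the tissue region losslessly (PNG) and the empty background lossily (JPEG).
    `img` is an 8-bit grayscale or RGB array/PIL image; `mask` is a boolean (H, W) array."""
    arr = np.asarray(img).copy()
    roi = arr.copy()
    roi[~mask] = 0                                   # blank out background in the lossless file
    Image.fromarray(roi).save(roi_png, optimize=True)
    bg = arr.copy()
    bg[mask] = 0                                     # blank out tissue in the lossy file
    Image.fromarray(bg).save(bg_jpg, quality=30)     # aggressive lossy setting for empty areas
    return roi_png, bg_jpg
```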

  1. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
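
    A full-reference metric such as PSNR between the encoder input and the reconstructed frame can be computed as below (NumPy, assuming 8-bit frames); this is the generic definition, not the authors' specific measurement pipeline.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Full-reference quality metric between the encoder input and the decoded frame."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```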

  2. Compression of regions in the global advanced very high resolution radiometer 1-km data set

    NASA Technical Reports Server (NTRS)

    Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.

    1994-01-01

    The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.

  3. Human movement analysis with image processing in real time

    NASA Astrophysics Data System (ADS)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport science and biomechanics associate them with the performance of the athlete or patient. Trainers or doctors can therefore correct the subject's gesture to obtain a better performance if they know the motion properties. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. A human operator manually analyses the digitized film sequence, a very expensive, long and imprecise operation. For these reasons, many human movement analysis systems have been implemented. They consist of markers fixed to anatomically interesting points on the subject in motion, and image compression, which is the art of coding picture data. Generally the compression is limited to calculating the centroid coordinates of each marker. These systems differ from one another in image acquisition and marker detection.

  4. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in that the binary values that the MPT produces as a result, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problem of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
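
    One classical closed-form solution of the moment-preserving equations (after Tsai's method, which this kind of scheme builds on) is sketched below for a single block; it assumes a non-constant block and NumPy input, and the watermark embedding in the root-sum-square of the representative values is omitted.

```python
import numpy as np

def moment_preserving_binarize(block):
    """Find two representative values z0 < z1 and a threshold such that the block's
    first three moments are preserved by the resulting two-level signal.
    Assumes the block is not constant (non-zero variance)."""
    x = np.asarray(block, dtype=np.float64).ravel()
    m1, m2, m3 = x.mean(), (x**2).mean(), (x**3).mean()
    cd = m2 - m1 * m1                                # variance; must be non-zero
    c0 = (m1 * m3 - m2 * m2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = np.sqrt(max(c1 * c1 - 4.0 * c0, 0.0))
    z0, z1 = 0.5 * (-c1 - disc), 0.5 * (-c1 + disc)  # roots of z^2 + c1*z + c0 = 0
    p0 = (z1 - m1) / (z1 - z0)                       # fraction of samples mapped to z0
    threshold = np.quantile(x, p0)                   # p0-tile of the block's histogram
    return z0, z1, threshold
```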

  5. Noise Gating Solar Images

    NASA Astrophysics Data System (ADS)

    DeForest, Craig; Seaton, Daniel B.; Darnell, John A.

    2017-08-01

    I present and demonstrate a new, general purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise-gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.

  6. Clinical Application of 3D-CISS MRI Sequences for Diagnosis and Surgical Planning of Spinal Arachnoid Diverticula and Adhesions in Dogs.

    PubMed

    Tauro, Anna; Jovanovik, Jelena; Driver, Colin John; Rusbridge, Clare

    2018-02-01

    Abnormalities within the spinal arachnoid space are often treated surgically, but they can be challenging to detect with conventional magnetic resonance imaging (MRI) sequences. 3D-CISS sequences are considered superior for evaluating structures surrounded by cerebrospinal fluid (CSF) due to their high signal-to-noise ratio, high contrast-to-noise ratio and intrinsic insensitivity to motion with minimal signal loss from CSF pulsations. Our objective was to describe the findings and advantages of adding 3D-CISS sequences to routine MRI in patients affected by spinal arachnoid diverticula (SAD) or arachnoid adhesions. This article is a retrospective review of the medical records of 19 dogs admitted to Fitzpatrick Referrals between 2013 and 2017 that were diagnosed with SAD and confirmed surgically. Inclusion criteria were the presence of clinical signs compatible with compressive myelopathy and an MRI diagnosis that included the 3D-CISS sequence. Our database was searched for an additional 19 dogs diagnosed with spinal lesions other than SAD that had the same MR sequences. All MR images were anonymized and evaluated by two assessors. The 3D-CISS sequence appears to improve confidence in diagnosis and surgical planning (Mann-Whitney U-test: p < 0.0005), delineating SAD from other changes associated with abnormal CSF hydrodynamics and providing more anatomical detail than conventional MRI sequences. The clinical data in combination with imaging findings would limit overinterpretation when concurrent pathology within the arachnoid space is present.

  7. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract its regions of interest (ROIs) and evaluate different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper by using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and quantitative and qualitative analysis in the paper show excellent performance in comparison with traditional color image compression approaches.
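
    The non-uniform coding step can be sketched with Pillow: the attended region is saved at high JPEG quality and the full frame at low quality, with the ROI crop pasted back over the background at display time. The bounding-box interface, quality values and file names are illustrative assumptions rather than the authors' implementation.

```python
from PIL import Image

def encode_with_roi(image_path, roi_box, roi_quality=90, background_quality=25):
    """Save the salient region (roi_box = (left, upper, right, lower), e.g. from a
    saliency model) at high JPEG quality and the whole frame at low quality."""
    img = Image.open(image_path).convert("RGB")
    img.save("background_lowq.jpg", quality=background_quality)
    img.crop(roi_box).save("roi_highq.jpg", quality=roi_quality)
    return "background_lowq.jpg", "roi_highq.jpg"
```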

  8. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observer image quality qualitative assessment and to compare with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using a readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P < or =.05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 >0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  9. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  10. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR.
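
    A minimal sketch of the real-SVD scheme, assuming one possible construction of the matrix C (the three color planes stacked side by side) and a NumPy uint8 RGB input; k is the number of retained singular triplets.

```python
import numpy as np

def svd_compress_rgb(img, k):
    """Stack the R, G, B planes into one real matrix, keep the k largest singular
    triplets, and rebuild the image from the truncated factors."""
    h, w, _ = img.shape
    C = np.hstack([img[..., c].astype(np.float64) for c in range(3)])   # h x 3w real matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]                                 # rank-k approximation
    out = np.stack([C_k[:, i * w:(i + 1) * w] for i in range(3)], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```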

  12. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
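
    The masked-region handling above relies on quadtree coding of a land/non-land mask. The sketch below shows the basic recursive idea under the simplifying assumption of a square, power-of-two-sized mask; it is not the USGS implementation.

    ```python
    # Toy quadtree coder for a binary mask: uniform blocks become single leaves.
    import numpy as np

    def quadtree(mask, x=0, y=0, size=None):
        """Return nested (x, y, size, value) leaves for a square binary mask."""
        if size is None:
            size = mask.shape[0]
        block = mask[y:y + size, x:x + size]
        if block.min() == block.max():          # uniform block -> one leaf
            return (x, y, size, int(block[0, 0]))
        half = size // 2
        return [quadtree(mask, x,        y,        half),
                quadtree(mask, x + half, y,        half),
                quadtree(mask, x,        y + half, half),
                quadtree(mask, x + half, y + half, half)]

    mask = np.zeros((16, 16), dtype=np.uint8)
    mask[4:12, 4:12] = 1                         # a small "land" square
    print(quadtree(mask))
    ```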

  13. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.

  14. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user-specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training, through the use of a variance classifier that controls a bank of neural networks, speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  15. Developing JSequitur to Study the Hierarchical Structure of Biological Sequences in a Grammatical Inference Framework of String Compression Algorithms.

    PubMed

    Galbadrakh, Bulgan; Lee, Kyung-Eun; Park, Hyun-Seok

    2012-12-01

    Grammatical inference methods are expected to find grammatical structures hidden in biological sequences. One hopes that studies of grammar serve as an appropriate tool for theory formation. Thus, we have developed JSequitur for automatically generating the grammatical structure of biological sequences in an inference framework of string compression algorithms. Our original motivation was to find grammatical traits of several cancer genes that can be detected by string compression algorithms. Through this research we have not yet found any meaningful unique traits of the cancer genes, but we did observe some interesting relationships among gene length, sequence similarity, the patterns of the generated grammar, and compression rate.
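
    To make the grammar-compression idea concrete, here is a toy digram-substitution routine in the spirit of grammar-based string compression. It is a Re-Pair-style simplification, not the Sequitur algorithm that JSequitur implements, and the input string is a made-up example.

    ```python
    # Repeatedly replace the most frequent adjacent pair with a new rule symbol.
    from collections import Counter

    def infer_grammar(seq, min_count=2):
        rules, next_id = {}, 0
        symbols = list(seq)
        while True:
            pairs = Counter(zip(symbols, symbols[1:]))
            pair, count = pairs.most_common(1)[0] if pairs else (None, 0)
            if count < min_count:
                break
            name = f"R{next_id}"
            next_id += 1
            rules[name] = pair
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                    out.append(name)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            symbols = out
        return symbols, rules

    compressed, grammar = infer_grammar("GATTACAGATTACAGATTACA")
    print(compressed, grammar)
    ```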

  16. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  17. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
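
    The joint-sparsity penalty mentioned above is typically enforced with a cross-channel (group) soft-thresholding step. The snippet below is only that proximal step under assumed array shapes, not the full SPIRiT iteration or its calibration.

    ```python
    # Group soft-thresholding: coefficients small across all coils shrink together.
    import numpy as np

    def joint_soft_threshold(coeffs, lam):
        """coeffs: complex array shaped (n_coils, ...); lam: threshold level."""
        magnitude = np.sqrt(np.sum(np.abs(coeffs) ** 2, axis=0, keepdims=True))
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(magnitude, 1e-12))
        return coeffs * scale

    rng = np.random.default_rng(1)
    wavelet_coeffs = rng.normal(size=(8, 64, 64)) + 1j * rng.normal(size=(8, 64, 64))
    shrunk = joint_soft_threshold(wavelet_coeffs, lam=2.0)
    ```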

  18. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
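
    The core DCT quantization step referred to above can be sketched as follows; the perceptual (luminance, contrast and error-pooling) adaptation of the quantization matrix is the patented part and is not reproduced here, so a flat placeholder matrix is used.

    ```python
    # 8x8 DCT quantize/dequantize round trip with a placeholder quantization matrix.
    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_block(block, qmatrix):
        coeffs = dctn(block, norm="ortho")
        quantized = np.round(coeffs / qmatrix)          # lossy step
        return idctn(quantized * qmatrix, norm="ortho")

    rng = np.random.default_rng(2)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)
    qmatrix = np.full((8, 8), 16.0)   # a real matrix would be frequency- and image-adaptive
    reconstructed = quantize_block(block, qmatrix)
    ```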

  19. High efficient optical remote sensing images acquisition for nano-satellite-framework

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi

    2017-09-01

    Implementing optical Earth observation missions on a Nano-satellite (NanoSat) is more challenging than on conventional satellites because of limitations on volume, weight and power consumption. In general, an onboard image compression unit is necessary to save data transmission bandwidth and disk space by removing redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so that the output data of the chip are already compressed and a separate image compression unit is no longer needed; the power, volume, and weight normally consumed by onboard image compression units can therefore be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of compressive sensing (CS)-based methods. The framework holds promise for wide use in the future.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucourt, Maximilian de, E-mail: mdb@charite.de; Streitparth, Florian, E-mail: florian.streitparth@charite.de; Collettini, Federico

    Purpose: To evaluate the feasibility of minimally invasive magnetic resonance imaging (MRI)-guided free-hand aspiration of symptomatic nerve root compressing lumbosacral cysts in a 1.0-Tesla (T) open MRI system using a tailored interactive sequence. Materials and Methods: Eleven patients with MRI-evident symptomatic cysts in the lumbosacral region and possible nerve root compressing character were referred to a 1.0-T open MRI system. For MRI interventional cyst aspiration, an interactive sequence was used, allowing for near real-time position validation of the needle in any desired three-dimensional plane. Results: Seven of 11 cysts in the lumbosacral region were successfully aspirated (average 10.1 mm [SD ± 1.9]). After successful cyst aspiration, each patient reported speedy relief of initial symptoms. Average cyst size was 9.6 mm (±2.6 mm). Four cysts (8.8 ± 3.8 mm) could not be aspirated. Conclusion: Open MRI systems with tailored interactive sequences have great potential for cyst aspiration in the lumbosacral region. The authors perceive major advantages of MR-guided cyst aspiration in its minimally invasive character compared with direct and open surgical options, along with less trauma, less stress, and fewer side effects for the patient.

  1. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared with JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which measures the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than that of uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected differences among them could be used for further analysis to estimate the unknown JPEG2000 compression rate.
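
    The first-digit test described above can be prototyped in a few lines: compute the leading-digit distribution of detail DWT coefficients and its deviation from Benford's Law. The sketch uses PyWavelets and a simple sum-of-squared-differences divergence, which is not necessarily the divergence factor used in the paper, and a synthetic image stands in for the test set.

    ```python
    # Leading-digit distribution of DWT detail coefficients vs. Benford's Law.
    import numpy as np
    import pywt

    def first_digit_histogram(values):
        v = np.abs(values)
        v = v[v > 0]
        digits = (v / 10 ** np.floor(np.log10(v))).astype(int)   # leading digits 1-9
        counts = np.bincount(digits, minlength=10)[1:10]
        return counts / counts.sum()

    benford = np.log10(1 + 1 / np.arange(1, 10))
    rng = np.random.default_rng(3)
    image = rng.random((256, 256)) * 255                          # placeholder image
    _, (ch, cv, cd) = pywt.dwt2(image, "db1")
    observed = first_digit_histogram(np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()]))
    divergence = float(np.sum((observed - benford) ** 2))
    print(observed.round(3), divergence)
    ```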

  2. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.
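
    As a minimal illustration of the "regularized least squares regression" ingredient only (the paper's specific regularizers, virtual-sample synthesis and hull-based matching are not modeled), a ridge-regression fit of a probe feature against gallery features might look like this; all feature data here are synthetic placeholders.

    ```python
    # Closed-form ridge regression: argmin_w ||Xw - y||^2 + lam * ||w||^2.
    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    rng = np.random.default_rng(4)
    gallery_features = rng.normal(size=(50, 128))   # 50 still-image gallery samples
    probe_feature = rng.normal(size=128)            # one video-frame feature
    weights = ridge_fit(gallery_features.T, probe_feature, lam=0.1)
    print(weights.shape)                            # one coefficient per gallery sample
    ```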

  3. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied initially to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we have identified the best filtering practice. Since the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Testing on the Middlebury image sequences established a correlation between the image intensity value and the standard deviation of the Gaussian function. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
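
    A hedged sketch of the pipeline discussed above, using OpenCV's pyramidal Lucas-Kanade tracker after Gaussian pre-filtering; the paper's intensity-dependent rule for choosing the standard deviation is not reproduced, so sigma is a fixed placeholder.

    ```python
    # Gaussian pre-filtering followed by pyramidal Lucas-Kanade feature tracking.
    import cv2
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lk_flow(frame0, frame1, sigma=1.5):
        """frame0, frame1: 8-bit grayscale frames; returns matched point pairs."""
        f0 = gaussian_filter(frame0.astype(np.float32), sigma).astype(np.uint8)
        f1 = gaussian_filter(frame1.astype(np.float32), sigma).astype(np.uint8)
        pts0 = cv2.goodFeaturesToTrack(f0, maxCorners=200, qualityLevel=0.01, minDistance=7)
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(f0, f1, pts0, None,
                                                   winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1
        return pts0[good], pts1[good]
    ```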

  4. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians' diagnoses, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and the rapid growth of patients' case reports, PACS requires image compression to accelerate image transmission and conserve disk space, thereby reducing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM). High compression ratios are considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis; that is, ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios achievable with JPEG and JPEG2000 for 3-D US images, and the results indicate the feasible bit rates for JPEG and JPEG2000 compression of 3-D breast US images.
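
    The ROC comparison described above reduces, in code, to computing the area under the curve for CAD scores obtained at each compression ratio. The sketch below uses scikit-learn with synthetic scores standing in for the CAD outputs; the numbers carry no clinical meaning.

    ```python
    # AUC per compression ratio, with synthetic CAD scores as placeholders.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    labels = rng.integers(0, 2, size=200)                  # 0 = benign, 1 = malignant
    for ratio, noise in [("lossless", 0.5), ("20:1", 0.6), ("40:1", 0.8)]:
        scores = labels + rng.normal(scale=noise, size=labels.size)
        print(ratio, round(roc_auc_score(labels, scores), 3))
    ```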

  5. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  6. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  7. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study is to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization stepsize. They were asked to select the image with the highest overall quality for supporting their visual evaluations of image content, and then rated both images on a scale from one to five according to their judged degree of usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject based upon results of a previously administered survey of their image preferences. Fourteen different images in seven image groups were studied. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to have acceptable compression ratios 4 to 5 times those of some clear surface satellite images.

  8. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  9. HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads

    PubMed Central

    Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila

    2014-01-01

    Background and objective: Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods: We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme by gradually shortening the read length. Regarding the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. Lossless compression can be produced by setting k (the number of clusters) to the number of different quality values. Results: The proposed method produced a compression ratio in the range 0.5-0.65, which corresponds to 35-50% storage savings on experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ with a General Public License (GPL) license. Limitation: Our method requires different reference genomes and prolongs the execution time for additional alignments. Conclusions: The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
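
    The lossy quality-value step described above can be sketched with an off-the-shelf k-means: each Phred quality is replaced by its cluster centroid, and choosing k equal to the number of distinct values recovers the lossless case. This is only an illustration of the idea, not HUGO's implementation.

    ```python
    # k-means quantization of base quality values.
    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_qualities(qualities, k=8):
        q = np.asarray(qualities, dtype=float).reshape(-1, 1)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(q)
        centroids = km.cluster_centers_.ravel()
        return np.rint(centroids[km.labels_]).astype(int)

    phred = np.random.default_rng(6).integers(2, 41, size=1000)   # synthetic qualities
    print(quantize_qualities(phred, k=8)[:20])
    ```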

  10. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
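
    One level of the H-transform is essentially a 2x2 Haar-like decomposition into a smoothed image plus horizontal, vertical and diagonal differences. The sketch below shows that single level with integer arithmetic; the published method additionally handles exact reversibility, multiple levels, quantization and entropy coding, none of which are shown here.

    ```python
    # Single-level 2x2 Haar-like (H-transform-style) decomposition, integer arithmetic.
    import numpy as np

    def h_transform_level(img):
        """img: 2D integer array with even dimensions."""
        a = img[0::2, 0::2].astype(np.int64)
        b = img[0::2, 1::2].astype(np.int64)
        c = img[1::2, 0::2].astype(np.int64)
        d = img[1::2, 1::2].astype(np.int64)
        return ((a + b + c + d) // 2,   # smoothed image
                (a + b - c - d) // 2,   # vertical difference
                (a - b + c - d) // 2,   # horizontal difference
                (a - b - c + d) // 2)   # diagonal difference

    rng = np.random.default_rng(7)
    image = rng.integers(0, 65536, size=(512, 512))
    smooth, dv, dh, dd = h_transform_level(image)
    ```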

  11. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rates and reconstruction quality can be obtained at bitrates above 15,000 kbps.

  12. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    PubMed

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis, provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 times improvement in compression rate and 1.26 times improvement in running time. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. Contact: fhach@cs.sfu.ca or cenk@cs.sfu.ca. Supplementary data are available at Bioinformatics online.

  13. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of halftoning by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it de-noises the image and enhances its contours.

  14. Neurovascular Study of the Trigeminal Nerve at 3 T MRI

    PubMed Central

    Gonzalez, Nadia; Muñoz, Alexandra; Bravo, Fernando; Sarroca, Daniel; Morales, Carlos

    2015-01-01

    This study aimed to show a novel visualization method to investigate neurovascular compression of the trigeminal nerve using a volume-rendering fusion imaging technique of 3D fast imaging employing steady-state acquisition (3D FIESTA) and coregistered 3D time-of-flight MR angiography (3D TOF MRA) sequences, which we called "neurovascular study of the trigeminal nerve". We prospectively studied 30 patients with unilateral trigeminal neuralgia (TN) and 50 subjects without symptoms of TN (control group) on a 3 Tesla scanner. All patients were assessed using 3D FIESTA and 3D TOF MRA sequences centered on the pons, as well as a standard brain protocol including axial T1, T2, FLAIR and GRE sequences to exclude other pathologies that could cause TN. Post-contrast T1-weighted sequences were also performed. All cases showing arterial imprinting on the trigeminal nerve (n = 11) were identified on the ipsilateral side of the pain. No significant relationship was found between the presence of an artery in contact with the trigeminal nerve and TN. Eight cases were found showing arterial contact on the ipsilateral side of the pain and five cases of arterial contact on the contralateral side. The fusion imaging technique of 3D FIESTA and 3D TOF MRA sequences, combining the high anatomical detail provided by the 3D FIESTA sequence with the capacity of the 3D TOF MRA sequence to depict arterial structures, results in a tool that enables quick and efficient visualization and assessment of the relationship between the trigeminal nerve and the neighboring vascular structures. PMID:25924169

  15. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, while still requiring sufficient image quality for clinical review. We have developed a PACS that can manage ultrasound and endoscopy cine images at the above resolution and frame rate, and investigated a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the latter was still judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable for reducing data volume and cost while maintaining quality for clinical review.
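
    The storage arithmetic behind those choices is straightforward and is sketched below for the cine parameters quoted in the abstract (640 x 480, 24-bit color, 30 frames/s) and the 1:10 and 1:20 Motion JPEG ratios.

    ```python
    # Back-of-the-envelope data rates for the cine stream and its compressed versions.
    frame_bytes = 640 * 480 * 3            # 24-bit color frame
    raw_rate = frame_bytes * 30            # bytes per second at 30 frames/s
    print(f"raw: {raw_rate / 1e6:.1f} MB/s")
    for ratio in (10, 20):
        print(f"1:{ratio} compressed: {raw_rate / ratio / 1e6:.2f} MB/s")
    ```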

  16. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  17. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. In that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye toward application on the WWW, a medium that would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists can tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  18. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    PubMed

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of these data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open-source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through this conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio exceeds 99.36%. The capacity of the resulting QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a perfect carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on a two-dimensional code.
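
    A hedged sketch of the overall pipeline follows: the ITS2 bases are packed two bits per base (our reading of the "GATC2Bytes" idea), bundled with fingerprint values, compressed with zlib, and rendered as a QR code via the third-party "qrcode" package. The sequence, fingerprint values and field layout are illustrative placeholders, not the paper's data format.

    ```python
    # DNA + fingerprint -> zlib -> QR code sketch.
    import base64
    import json
    import zlib
    import qrcode

    BASE_BITS = {"G": 0, "A": 1, "T": 2, "C": 3}

    def pack_bases(seq):
        """Pack four bases per byte, two bits per base."""
        out, acc, n = bytearray(), 0, 0
        for ch in seq:
            acc = (acc << 2) | BASE_BITS[ch]
            n += 1
            if n == 4:
                out.append(acc)
                acc, n = 0, 0
        if n:
            out.append(acc << (2 * (4 - n)))
        return bytes(out)

    its2 = "GATCGATCGGTTACGATC"             # placeholder sequence
    fingerprint = [12.4, 8.1, 30.2]         # placeholder HPLC peak values
    payload = pack_bases(its2) + b"|" + json.dumps(fingerprint).encode()
    compressed = zlib.compress(payload, 9)
    qrcode.make(base64.b64encode(compressed).decode()).save("ginseng_qr.png")
    ```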

  19. Compressed Sensing for Resolution Enhancement of Hyperpolarized 13C Flyback 3D-MRSI

    PubMed Central

    Hu, Simon; Lustig, Michael; Chen, Albert P.; Crane, Jason; Kerr, Adam; Kelley, Douglas A.C.; Hurd, Ralph; Kurhanewicz, John; Nelson, Sarah J.; Pauly, John M.; Vigneron, Daniel B.

    2008-01-01

    High polarization of nuclear spins in liquid state through dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at very high signal to noise, allowing for rapid assessment of tissue metabolism. The abundant SNR afforded by this hyperpolarization technique makes high resolution 13C 3D-MRSI feasible. However, the number of phase encodes that can be fit into the short acquisition time for hyperpolarized imaging limits spatial coverage and resolution. To take advantage of the high SNR available from hyperpolarization, we have applied compressed sensing to achieve a factor of 2 enhancement in spatial resolution without increasing acquisition time or decreasing coverage. In this paper, the design and testing of compressed sensing suited for a flyback 13C 3D-MRSI sequence are presented. The key to this design was the undersampling of spectral k-space using a novel blipped scheme, thus taking advantage of the considerable sparsity in typical hyperpolarized 13C spectra. Phantom tests validated the accuracy of the compressed sensing approach and initial mouse experiments demonstrated in vivo feasibility. PMID:18367420

  20. Diagnosing common bile duct obstruction: comparison of image quality and diagnostic performance of three-dimensional magnetic resonance cholangiopancreatography with and without compressed sensing.

    PubMed

    Kwon, Heejin; Reid, Scott; Kim, Dongeun; Lee, Sangyun; Cho, Jinhan; Oh, Jongyeong

    2018-01-04

    This study aimed to evaluate the image quality and diagnostic performance of a recently developed navigated three-dimensional magnetic resonance cholangiopancreatography (3D-MRCP) sequence with compressed sensing (CS) based on parallel imaging (PI), compared with conventional 3D-MRCP with PI only, in patients with abnormal bile duct dilatation. This institutional review board-approved study included 45 consecutive patients [non-malignant common bile duct lesions (n = 21) and malignant common bile duct lesions (n = 24)] who underwent MRCP of the abdomen to evaluate bile duct dilatation. All patients were imaged at 3T (MR 750, GE Healthcare, Waukesha, WI), including two kinds of 3D-MRCP using 352 × 288 matrices, with and without CS based on PI. Two radiologists independently and blindly assessed the randomized images. CS acceleration reduced the acquisition time from an average of 5 min and 6 s to a total of 2 min and 56 s. Image quality of the CS sequence was significantly higher than that of the standard sequence for all quantitative measurements. Diagnostic accuracy for benign and malignant lesions was statistically different between standard and CS 3D-MRCP. The overall image quality and diagnostic accuracy in biliary obstruction evaluation demonstrate that CS-accelerated 3D-MRCP sequences can provide superior diagnostic information in 42.5% less time, with the potential to reduce motion-related artifacts and improve diagnostic efficacy.

  1. FaStore - a space-saving solution for raw sequencing data.

    PubMed

    Roguski, Lukasz; Ochoa, Idoia; Hernaez, Mikel; Deorowicz, Sebastian

    2018-03-29

    The affordability of DNA sequencing has led to the generation of unprecedented volumes of raw sequencing data. These data must be stored, processed, and transmitted, which poses significant challenges. To facilitate this effort, we introduce FaStore, a specialized compressor for FASTQ files. FaStore does not use any reference sequences for compression, and permits the user to choose from several lossy modes to improve the overall compression ratio, depending on the specific needs. FaStore in the lossless mode achieves a significant improvement in compression ratio with respect to previously proposed algorithms. We perform an analysis on the effect that the different lossy modes have on variant calling, the most widely used application for clinical decision making, especially important in the era of precision medicine. We show that lossy compression can offer significant compression gains, while preserving the essential genomic information and without affecting the variant calling performance. FaStore can be downloaded from https://github.com/refresh-bio/FaStore. sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.

  2. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While the former involves no loss of information, the latter raises concerns because information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard addressing image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  3. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  4. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, is in progress. Three experienced thoracic radiologists will view images side by side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  5. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  6. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality with a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the resulting ROC curves. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  7. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same scene is imaged multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which can lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
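
    As a rough illustration of ANN-based compression (not the framework or training techniques of this paper), the sketch below trains a single-hidden-layer linear autoencoder on flattened image patches with plain gradient descent; the bottleneck activations serve as the compressed representation. Patch size, bottleneck width, learning rate, and the assumption that pixel values are scaled to [0, 1] are all illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def extract_patches(img, p=8):
        """Cut a 2-D image (values scaled to [0, 1]) into non-overlapping
        p x p patches, flattened to rows of a matrix."""
        h, w = img.shape[0] // p * p, img.shape[1] // p * p
        return (img[:h, :w].reshape(h // p, p, w // p, p)
                           .transpose(0, 2, 1, 3)
                           .reshape(-1, p * p))

    def train_autoencoder(X, hidden=16, epochs=500, lr=1e-3):
        """One-hidden-layer linear autoencoder trained by gradient descent on
        the mean squared reconstruction error; the hidden layer holds the
        compressed (bottleneck) code."""
        n, d = X.shape
        W_enc = rng.normal(0.0, 0.01, (d, hidden))
        W_dec = rng.normal(0.0, 0.01, (hidden, d))
        for _ in range(epochs):
            Z = X @ W_enc                      # n x hidden codes
            X_hat = Z @ W_dec                  # n x d reconstructions
            err = X_hat - X
            g_dec = (Z.T @ err) / n            # gradient w.r.t. decoder weights
            g_enc = (X.T @ (err @ W_dec.T)) / n  # gradient w.r.t. encoder weights
            W_dec -= lr * g_dec
            W_enc -= lr * g_enc
        return W_enc, W_dec

    # Compression amounts to storing Z = patches @ W_enc (hidden < d values per
    # patch); decompression is Z @ W_dec followed by reassembling the patches.
    ```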

  8. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
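
    A minimal sketch of the general preprocessing idea is shown below, under stated assumptions (segment length, greedy ordering by correlation with the most recently placed column): the 1-D signal is cut into fixed-length segments arranged as columns of a 2-D array, and the columns are then reordered so that neighbouring columns are highly correlated before the matrix is handed to an off-the-shelf image or intraframe video coder. This illustrates correlation sorting in general, not the authors' exact procedure.

    ```python
    import numpy as np

    def segment_to_matrix(signal, seg_len=512):
        """Arrange consecutive segments of a 1-D signal as columns of a 2-D array."""
        n_seg = len(signal) // seg_len
        return signal[:n_seg * seg_len].reshape(n_seg, seg_len).T  # seg_len x n_seg

    def sort_columns_by_correlation(M):
        """Greedy reordering: start from column 0, then repeatedly append the
        remaining column most correlated with the last column placed."""
        n = M.shape[1]
        corr = np.corrcoef(M.T)                  # n x n correlation between columns
        order = [0]
        remaining = set(range(1, n))
        while remaining:
            last = order[-1]
            nxt = max(remaining, key=lambda j: corr[last, j])
            order.append(nxt)
            remaining.remove(nxt)
        # Return the reordered matrix plus the permutation needed to invert it.
        return M[:, order], np.array(order)
    ```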

  9. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    PubMed

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is therefore highly desirable to develop methods for predicting PPIs based only on amino acid sequences. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space by taking the sparsity of the original signal into account. What makes compressed sensing even more attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than are usually considered necessary under traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, because of its lower complexity and better compression ratios compared with the lossless JPEG standard. However, it cannot prevent error diffusion, owing to the context dependence of the algorithm, and it has a low compression rate compared with lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI) and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  11. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is increasingly demanding in terms of image compression performance. The resolution, agility and swath of Earth observation satellite instruments are continuously increasing, multiplying the volume of imagery acquired in one orbit by a factor of ten. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very high speed compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.

  12. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in minimum perceptual error for any given bit rate, or minimum bit rate for any given perceptual error.
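
    A minimal sketch of the DCT-plus-quantization-matrix step described above is given below. The "perceptual" matrix here is purely illustrative (the quantization step simply grows with spatial frequency); it is not the patented luminance- and contrast-masking matrix.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_block(block, qmatrix):
        """Forward DCT of one 8x8 block followed by division by a quantization
        matrix; larger entries in qmatrix discard more of that frequency."""
        coeffs = dctn(block - 128.0, norm="ortho")
        return np.round(coeffs / qmatrix).astype(np.int32)

    def dequantize_block(qcoeffs, qmatrix):
        """Inverse of quantize_block, up to the information lost in rounding."""
        return idctn(qcoeffs * qmatrix, norm="ortho") + 128.0

    # Illustrative matrix only: the quantization step increases with distance
    # from the DC coefficient, mimicking lower visual sensitivity at high
    # spatial frequencies. Not the matrix described in the patent.
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    qmatrix = 8.0 + 4.0 * (u + v)
    ```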

  13. Compression of next-generation sequencing reads aided by highly efficient de novo assembly

    PubMed Central

    Jones, Daniel C.; Ruzzo, Walter L.; Peng, Xinxia

    2012-01-01

    We present Quip, a lossless compression algorithm for next-generation sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing reference-based compression, we have developed, to our knowledge, the first assembly-based compressor, using a novel de novo assembly algorithm. A probabilistic data structure is used to dramatically reduce the memory required by traditional de Bruijn graph assemblers, allowing millions of reads to be assembled very efficiently. Read sequences are then stored as positions within the assembled contigs. This is combined with statistical compression of read identifiers, quality scores, alignment information and sequences, effectively collapsing very large data sets to <15% of their original size with no loss of information. Availability: Quip is freely available under the 3-clause BSD license from http://cs.washington.edu/homes/dcjones/quip. PMID:22904078

  14. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging for large-sized images, that the amount of data transmitted is largely reduced because of the combination of compressive sensing and the FFT, and that the security level of ghost imaging is improved against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  15. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by alternating minimization. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.

  16. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  17. Acceleration techniques and their impact on arterial input function sampling: Non-accelerated versus view-sharing and compressed sensing sequences.

    PubMed

    Benz, Matthias R; Bongartz, Georg; Froehlich, Johannes M; Winkel, David; Boll, Daniel T; Heye, Tobias

    2018-07-01

    The aim was to investigate the variation of the arterial input function (AIF) within and between various DCE MRI sequences. A dynamic flow-phantom and steady signal reference were scanned on a 3T MRI using fast low angle shot (FLASH) 2d, FLASH3d (parallel imaging factor (P) = P0, P2, P4), volumetric interpolated breath-hold examination (VIBE) (P = P0, P3, P2 × 2, P2 × 3, P3 × 2), golden-angle radial sparse parallel imaging (GRASP), and time-resolved imaging with stochastic trajectories (TWIST). Signal over time curves were normalized and quantitatively analyzed by full width half maximum (FWHM) measurements to assess variation within and between sequences. The coefficient of variation (CV) for the steady signal reference ranged from 0.07-0.8%. The non-accelerated gradient echo FLASH2d, FLASH3d, and VIBE sequences showed low within sequence variation with 2.1%, 1.0%, and 1.6%. The maximum FWHM CV was 3.2% for parallel imaging acceleration (VIBE P2 × 3), 2.7% for GRASP and 9.1% for TWIST. The FWHM CV between sequences ranged from 8.5-14.4% for most non-accelerated/accelerated gradient echo sequences except 6.2% for FLASH3d P0 and 0.3% for FLASH3d P2; GRASP FWHM CV was 9.9% versus 28% for TWIST. MRI acceleration techniques vary in reproducibility and quantification of the AIF. Incomplete coverage of the k-space with TWIST as a representative of view-sharing techniques showed the highest variation within sequences and might be less suited for reproducible quantification of the AIF. Copyright © 2018 Elsevier B.V. All rights reserved.
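
    As a small illustration of the analysis described above (not the study's exact processing chain), the sketch below normalizes a single-peak signal-time curve, measures its full width at half maximum with linear interpolation at the half-maximum crossings, and computes the coefficient of variation of repeated FWHM measurements in percent.

    ```python
    import numpy as np

    def fwhm(t, s):
        """Full width at half maximum of a single-peak curve s(t); assumes the
        curve starts and ends below half of its (normalized) peak. Linear
        interpolation is used at the two half-maximum crossings."""
        s = (np.asarray(s, float) - np.min(s)) / (np.max(s) - np.min(s))
        t = np.asarray(t, float)
        half = 0.5
        peak = int(np.argmax(s))
        left = np.where(s[:peak] < half)[0][-1]          # last sample below half before the peak
        t_left = np.interp(half, s[left:left + 2], t[left:left + 2])
        right = peak + np.where(s[peak:] < half)[0][0]   # first sample below half after the peak
        t_right = np.interp(half, s[right:right - 2:-1], t[right:right - 2:-1])
        return t_right - t_left

    def coefficient_of_variation(values):
        """CV in percent, as used to compare FWHM repeatability across runs."""
        values = np.asarray(values, dtype=float)
        return values.std(ddof=1) / values.mean() * 100.0
    ```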

  18. GTRAC: fast retrieval from compressed collections of genomic variants

    PubMed Central

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-01-01

    Motivation: The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of databases lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. Results: We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. Availability and Implementation: The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC Contact: kedart@stanford.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587665

  19. GTRAC: fast retrieval from compressed collections of genomic variants.

    PubMed

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-09-01

    The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of databases lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC Contact: kedart@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that, at comparable compression ratios, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) are increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.

  1. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) showed small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  2. SCALCE: boosting sequence compression algorithms using locally consistent encoding

    PubMed Central

    Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk

    2012-01-01

    Motivation: High-throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general-purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to a 3.34-fold improvement in compression rate and a 1.26-fold improvement in running time. Availability: Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. Contact: fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047557

  3. A no-reference image and video visual quality metric based on machine learning

    NASA Astrophysics Data System (ADS)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for the quality assessment of lossy compressed video. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large set of video-sequence/subjective-quality-score pairs. We demonstrate how the predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.

  4. Flexible theta sequence compression mediated via phase precessing interneurons

    PubMed Central

    Chadwick, Angus; van Rossum, Mark CW; Nolan, Matthew F

    2016-01-01

    Encoding of behavioral episodes as spike sequences during hippocampal theta oscillations provides a neural substrate for computations on events extended across time and space. However, the mechanisms underlying the numerous and diverse experimentally observed properties of theta sequences remain poorly understood. Here we account for theta sequences using a novel model constrained by the septo-hippocampal circuitry. We show that when spontaneously active interneurons integrate spatial signals and theta frequency pacemaker inputs, they generate phase precessing action potentials that can coordinate theta sequences in place cell populations. We reveal novel constraints on sequence generation, predict cellular properties and neural dynamics that characterize sequence compression, identify circuit organization principles for high capacity sequential representation, and show that theta sequences can be used as substrates for association of conditioned stimuli with recent and upcoming events. Our results suggest mechanisms for flexible sequence compression that are suited to associative learning across an animal’s lifespan. DOI: http://dx.doi.org/10.7554/eLife.20349.001 PMID:27929374

  5. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto a Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do so, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
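
    The spectral decorrelation step can be illustrated as below: the per-pixel covariance of the color planes is diagonalized, and the planes are projected onto the resulting eigenvector (KL) basis, after which each decorrelated plane can be compressed independently. This is a generic KL transform sketch, not the paper's fast low-memory construction.

    ```python
    import numpy as np

    def kl_transform_planes(image):
        """Decorrelate the spectral planes of an H x W x C image with the
        Karhunen-Loeve (PCA) transform computed from the image itself."""
        h, w, c = image.shape
        X = image.reshape(-1, c).astype(float)        # one row per pixel
        mean = X.mean(axis=0)
        cov = np.cov(X - mean, rowvar=False)          # c x c covariance of the planes
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]             # strongest component first
        basis = eigvecs[:, order]
        Y = (X - mean) @ basis                        # decorrelated planes
        return Y.reshape(h, w, c), basis, mean

    def inverse_kl(Y, basis, mean):
        """Exact inverse of kl_transform_planes (basis is orthonormal)."""
        h, w, c = Y.shape
        return (Y.reshape(-1, c) @ basis.T + mean).reshape(h, w, c)
    ```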

  6. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
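
    The lossless-by-residual idea can be sketched as follows: any lossy coder produces a reconstruction, the pixel-wise residual between original and reconstruction is stored losslessly, and adding the residual back at the decoder restores the image exactly. In this sketch the lossy stage is an abstract encode/decode pair standing in for the wavelet-fractal coder, and zlib stands in for the paper's Huffman coder; both stand-ins are assumptions made for illustration.

    ```python
    import zlib
    import numpy as np

    def lossless_hybrid_encode(original, lossy_codec):
        """original: uint8 image array; lossy_codec: object with encode/decode
        methods (a stand-in for the wavelet-fractal coder). Returns the lossy
        bitstream plus a losslessly compressed residual."""
        bitstream = lossy_codec.encode(original)
        recon = lossy_codec.decode(bitstream).astype(np.int16)
        residual = original.astype(np.int16) - recon              # may be negative
        packed = zlib.compress(residual.tobytes())                # zlib in place of Huffman
        return bitstream, packed, residual.shape

    def lossless_hybrid_decode(bitstream, packed, shape, lossy_codec):
        """Adds the stored residual back to the lossy reconstruction, so the
        decoded image is bit-exact with the original."""
        recon = lossy_codec.decode(bitstream).astype(np.int16)
        residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16).reshape(shape)
        return (recon + residual).astype(np.uint8)
    ```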

  7. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  8. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
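
    A minimal sketch of the wavelet/scalar quantization idea, using PyWavelets, is given below: the image is decomposed into subbands and every coefficient is uniformly quantized. The wavelet, decomposition depth, and the single shared step size are illustrative assumptions; the actual FBI WSQ specification assigns a separate step size to each subband and adds Huffman entropy coding.

    ```python
    import numpy as np
    import pywt

    def wsq_like_quantize(image, wavelet="bior4.4", levels=4, step=8.0):
        """Discrete wavelet transform followed by uniform scalar quantization of
        every subband (one shared step size here, for brevity)."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        quantized = [np.round(coeffs[0] / step)]                 # approximation subband
        for (ch, cv, cd) in coeffs[1:]:                          # detail subbands
            quantized.append(tuple(np.round(c / step) for c in (ch, cv, cd)))
        return quantized

    def wsq_like_dequantize(quantized, wavelet="bior4.4", step=8.0):
        """Rescale the quantized coefficients and invert the wavelet transform."""
        coeffs = [quantized[0] * step]
        for (ch, cv, cd) in quantized[1:]:
            coeffs.append(tuple(c * step for c in (ch, cv, cd)))
        return pywt.waverec2(coeffs, wavelet)
    ```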

  9. Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.

    PubMed

    Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume

    2015-09-14

    Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools such as the widely used gzip. We present a novel reference-free method for compressing data issued from high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring kmer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq and metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is an open source software, distributed under the GNU Affero GPL License, available for download at http://gatb.inria.fr/software/leon/.
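
    A Bloom filter gives constant-memory, probabilistic set membership (false positives are possible, false negatives are not), which is what makes storing the de Bruijn graph as a set of k-mers so compact. Below is a minimal, generic k-mer Bloom filter sketch; the bit-array size, number of hash functions, and hashing scheme are illustrative assumptions, not LEON's implementation.

    ```python
    import hashlib

    class KmerBloomFilter:
        """Minimal Bloom filter for k-mer membership queries."""
        def __init__(self, n_bits=1 << 24, n_hashes=4):
            self.n_bits = n_bits
            self.n_hashes = n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, kmer):
            # Derive n_hashes bit positions from salted BLAKE2b digests.
            for i in range(self.n_hashes):
                digest = hashlib.blake2b(kmer.encode(), salt=str(i).encode()).digest()
                yield int.from_bytes(digest[:8], "little") % self.n_bits

        def add(self, kmer):
            for p in self._positions(kmer):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, kmer):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(kmer))

    def index_reads(reads, k=31):
        """Insert every k-mer of every read into the filter."""
        bf = KmerBloomFilter()
        for read in reads:
            for i in range(len(read) - k + 1):
                bf.add(read[i:i + k])
        return bf
    ```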

  10. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
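
    The filling step described above, solving Laplace's equation so that non-edge pixels become smooth interpolations of the edge values, can be illustrated with plain Jacobi relaxation (the patent uses a faster multi-grid solver). Treating the image borders periodically and fixing the iteration count are simplifying assumptions made here for brevity.

    ```python
    import numpy as np

    def laplace_fill(values, edge_mask, n_iter=500):
        """Fill non-edge pixels by repeatedly replacing each with the average of
        its four neighbours (Jacobi relaxation of Laplace's equation), while
        edge pixels keep their original values as boundary conditions.
        Borders wrap around (np.roll), a simplification for this sketch."""
        filled = np.where(edge_mask, values, values[edge_mask].mean()).astype(float)
        for _ in range(n_iter):
            avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                          np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
            filled = np.where(edge_mask, values, avg)   # keep edge pixels pinned
        return filled
    ```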

  11. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  12. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large set of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
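
    The prediction framework can be sketched as a per-band least-squares (Wiener-style) regression: each new band is modeled as a linear combination of already-encoded bands plus an offset, and only the prediction residual is passed to the lossless coder. The sketch below illustrates that framework only; the actual Z-Chrome weight estimation and coding details are not reproduced here.

    ```python
    import numpy as np

    def predict_band(prev_bands, target_band):
        """Least-squares prediction of one band from previously encoded bands.
        prev_bands: list of H x W arrays; target_band: H x W array.
        Returns the prediction, the residual to be entropy-coded, and the weights."""
        X = np.stack([b.ravel().astype(float) for b in prev_bands]
                     + [np.ones(target_band.size)], axis=1)       # regressors + bias
        y = target_band.ravel().astype(float)
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)
        prediction = (X @ weights).reshape(target_band.shape)
        residual = target_band - prediction
        return prediction, residual, weights
    ```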

  13. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute errors of uncompressed and compressed images (P <.05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R(2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.

  14. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
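
    The automatic-labeling signal described above, joint entropy between a compressed image and its uncompressed counterpart, can be computed from a 2-D histogram of corresponding pixel values as sketched below. The bin count is an assumption, and thresholding the resulting value into acceptable/unacceptable training labels follows the abstract's idea rather than a stated recipe.

    ```python
    import numpy as np

    def joint_entropy(img_a, img_b, bins=256):
        """Shannon joint entropy (in bits) of corresponding pixel values in two
        equally sized grayscale images."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # ignore empty bins
        return float(-(p * np.log2(p)).sum())

    # A threshold on joint_entropy(original, compressed) could then assign each
    # training image an automatic "acceptable" / "not acceptable" label, avoiding
    # hand labeling by domain experts.
    ```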

  15. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    PubMed Central

    2012-01-01

    Background: As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is, and will remain, a frequent problem encountered every day by researchers working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, the PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results: Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than commonly used compression methods, and that the data-loading process takes less time. Also, the C++ library provides direct-data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions: The SpeedGene algorithm enables the storage and analysis of next-generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016

  16. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
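
    The segmentation-by-entropy idea can be illustrated by a first-order entropy map computed per image block, as sketched below; the block size and any threshold used to route regions to different coders are assumptions made for illustration, not the paper's rule base.

    ```python
    import numpy as np

    def block_entropy_map(image, block=32):
        """First-order (histogram) entropy, in bits per pixel, of each
        block x block tile of an 8-bit grayscale image."""
        h, w = image.shape[0] // block, image.shape[1] // block
        ent = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
                counts = np.bincount(tile.ravel(), minlength=256).astype(float)
                p = counts[counts > 0] / counts.sum()
                ent[i, j] = -(p * np.log2(p)).sum()
        return ent

    # Regions whose entropy falls below a chosen threshold could be routed to a
    # cheap predictive coder, and high-entropy regions to arithmetic coding.
    ```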

  17. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy) it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images of various qualities of JPEG compression and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. The negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes and these may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  18. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways of encrypting images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into 4 blocks to be compressed and encrypted; the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm as well as its acceptable compression performance.
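
    The key-controlled measurement idea can be sketched as follows: a logistic-map sequence seeded by the secret key supplies the generating vector of a circulant matrix, and a keyed subset of its rows forms the measurement matrix, so only the scalar key needs to be shared. The map parameter, burn-in length, value mapping, and row-selection scheme below are illustrative assumptions, not the authors' exact construction.

    ```python
    import numpy as np
    from scipy.linalg import circulant

    def logistic_sequence(x0, n, r=3.99, burn_in=1000):
        """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k); the seed x0 in
        (0, 1) and the parameter r play the role of the secret key."""
        x = x0
        for _ in range(burn_in):              # discard the transient
            x = r * x * (1 - x)
        seq = np.empty(n)
        for i in range(n):
            x = r * x * (1 - x)
            seq[i] = x
        return seq

    def key_controlled_measurement_matrix(key, n, m):
        """m x n partial circulant matrix: the generating vector (first column)
        comes from the keyed logistic map, and m rows are kept (selection also
        derived from the key)."""
        gen = 2.0 * logistic_sequence(key, n) - 1.0       # map values to [-1, 1]
        C = circulant(gen)                                # n x n circulant matrix
        rng = np.random.default_rng(int(key * 1e9))       # keyed row selection
        rows = rng.choice(n, size=m, replace=False)
        return C[rows, :] / np.sqrt(m)
    ```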

  19. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, being able to achieve high quality image compression that is robust to noise and corruption.

  20. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to individual image planes, and coding fidelity is typically used to measure the performance of such intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a highly compact data representation and strong compression.

  1. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the Shannon rate, but it still requires considerable time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has another unique application: it enables real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments are carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.

  2. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  3. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  4. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  5. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

    Report documentation fragment; the abstract is not recoverable from this record. Only the report title survives: "Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique" (author fragment "Todd C..." truncated in the source).

  6. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  7. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  8. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  9. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    Project summary fragment; the full abstract is garbled in this record. Recoverable details: Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and machine intelligence, thereby providing higher data-throughput rates.

  10. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
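
    As a rough illustration of the decomposition stage described above (not of ICER-3D itself), the sketch below applies a 3D discrete wavelet transform to a hyperspectral cube with PyWavelets, subtracts the mean from each spatial plane of the low-pass subband, and converts the coefficients to sign-magnitude form; the wavelet choice, level, and function names are our assumptions.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def dwt3d_front_end(cube, wavelet="haar", level=2):
        """Illustrative front end of a 3D wavelet compressor for a
        (bands, rows, cols) cube: 3D DWT, mean subtraction on the spatial
        planes of the low-pass subband, then sign-magnitude separation."""
        coeffs = pywt.wavedecn(cube, wavelet=wavelet, level=level)
        lowpass = coeffs[0]
        plane_means = lowpass.mean(axis=(1, 2), keepdims=True)  # one mean per plane
        coeffs[0] = lowpass - plane_means
        arr, _ = pywt.coeffs_to_array(coeffs)
        return np.sign(arr), np.abs(arr), plane_means

    signs, magnitudes, means = dwt3d_front_end(np.random.default_rng(0).random((8, 64, 64)))
    ```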

  11. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing real functions, in particular the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  12. Disk-based compression of data from genome sequencing.

    PubMed

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of a conceptually simple and easily parallelizable idea of minimizers to obtain 0.317 bits per base as the compression ratio, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. http://sun.aei.polsl.pl/orcom under a free license. sebastian.deorowicz@polsl.pl Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
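
    The minimizer idea mentioned above can be sketched in a few lines: the minimizer of a read is its lexicographically smallest k-mer, and overlapping reads tend to share it, so grouping reads by minimizer places highly similar reads next to each other before compression. The snippet below is a toy illustration with an arbitrary k, not the binning used by the actual tool.

    ```python
    from collections import defaultdict

    def read_minimizer(read, k=8):
        """Lexicographically smallest k-mer of the read (its minimizer)."""
        return min(read[i:i + k] for i in range(len(read) - k + 1))

    def bin_reads_by_minimizer(reads, k=8):
        """Group reads sharing a minimizer so a downstream coder sees
        highly similar reads together."""
        bins = defaultdict(list)
        for r in reads:
            bins[read_minimizer(r, k)].append(r)
        return bins

    reads = ["TTACGTACGTGGACTT", "TACGTACGTGGACTTA", "GGGGTTTTCCCCAAAA"]
    print(dict(bin_reads_by_minimizer(reads)))
    ```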

  13. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. Here, to assess the capacity of the MIOCE method, we evaluate the influence of the number of target images. This analysis allows us to determine the performance limitation of this method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. Then, the different spectral areas are merged in a single spectrum plane. By choosing specific areas, we can compress together 38 images instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated by making use of the mean-square-error (MSE) criterion.

  14. A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment

    PubMed Central

    Freschi, Valerio; Bogliolo, Alessandro

    2012-01-01

    In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since they are not compliant with the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered to be an open problem and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations do not contain tandem duplications, while retaining enough information for making their comparison even more significant than the edit distance between the original sequences. This allows us to exploit traditional alignment algorithms directly on the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment. PMID:22518086
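
    A minimal sketch of the iterative repeat-collapsing idea follows, assuming that a tandem repeat of a unit of length L is replaced by a single copy of the unit and that units are processed in order of increasing length; the regular-expression implementation and parameter names are ours, not the paper's.

    ```python
    import re

    def collapse_repeats(seq, max_unit=3):
        """Iteratively collapse tandem repeats of units of length 1..max_unit
        to a single copy, producing a (lossy) representation free of tandem
        duplications that standard aligners can then work on directly."""
        for unit_len in range(1, max_unit + 1):
            pattern = re.compile(r"(.{%d})\1+" % unit_len)
            seq = pattern.sub(r"\1", seq)
        return seq

    print(collapse_repeats("ACACACGGGT"))  # -> "ACGT"
    ```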

  15. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; they optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchal inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.

  16. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
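
    The compression of a feature-point set into a handful of principal component points can be sketched as follows. The record does not spell out the exact construction, so the snippet below simply summarizes a 2D point cloud by its centroid plus the centroid displaced along each principal axis (an assumption on our part); all function and variable names are ours.

    ```python
    import numpy as np

    def principal_component_points(points):
        """Summarize a set of 2D feature points by a few representative points:
        the centroid, plus the centroid displaced along each principal axis
        by the standard deviation of the cloud along that axis."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        centered = pts - centroid
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        std = s / np.sqrt(len(pts))
        return np.vstack([centroid] + [centroid + std[i] * vt[i] for i in range(len(s))])

    rng = np.random.default_rng(0)
    summary = principal_component_points(rng.normal(size=(500, 2)) * [3.0, 1.0])
    print(summary)  # three points for a 2D cloud: the centroid and two axis points
    ```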

  17. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  18. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources, such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741

  19. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  20. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources, such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  1. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images which will be accessed by users across the computer networks. Accessing the image data at its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ compressed images is a table lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ: it involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
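
    The table-lookup nature of VQ decompression mentioned above is easy to see in code. The sketch below is a generic vector quantizer over flattened image blocks with a pre-trained codebook (randomly generated here purely for illustration); it does not include the planning/scheduling or codebook-selection machinery the record describes.

    ```python
    import numpy as np

    def vq_encode(blocks, codebook):
        """Map each flattened image block to the index of its nearest codeword."""
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def vq_decode(indices, codebook):
        """Decompression is a pure table lookup, cheap for the end user."""
        return codebook[indices]

    rng = np.random.default_rng(0)
    blocks = rng.random((100, 16))        # 4x4 blocks as 16-dimensional vectors
    codebook = rng.random((32, 16))       # in practice trained, e.g. with LBG/k-means
    reconstructed = vq_decode(vq_encode(blocks, codebook), codebook)
    ```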

  2. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.

  3. Light-weight reference-based compression of FASTQ data.

    PubMed

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundant information is identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.

  4. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  5. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
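
    Golomb-Rice coding, mentioned above as the entropy-coding stage, maps a non-negative prediction residual to a unary quotient plus a k-bit remainder; choosing k from recent residual magnitudes is what makes the coder adaptive. The snippet below is a generic illustration (assuming k >= 1 and non-negative inputs), not the flight implementation.

    ```python
    def golomb_rice_encode(value, k):
        """Encode a non-negative integer: quotient in unary, remainder in k bits."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, "0{}b".format(k))

    def pick_k(recent_residuals):
        """Crude adaptive choice of k from the mean of recent residual magnitudes."""
        mean = sum(recent_residuals) / max(len(recent_residuals), 1)
        return max(1, int(mean).bit_length() - 1)

    residuals = [0, 1, 3, 2, 7, 1]
    k = pick_k(residuals)
    print(k, [golomb_rice_encode(v, k) for v in residuals])
    ```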

  6. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.

  7. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAVs) for video surveillance. We address changes on a short time scale, using observations separated by a few hours. Each observation (previous and current) is a short video sequence acquired by a UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.

  8. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range. Also, the proposed algorithm can increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between a stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can also be effectively reconstructed using a reference image and disparity vectors. From experiments on the stereo sequences 'Pot Plant' and 'IVO', it is shown that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image to about 7.02 sec compared with conventional algorithms.

  9. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  10. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
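
    For orientation, the sketch below shows the generic DCT idea behind such schemes: transform an 8x8 block, keep only the largest coefficients, and invert. It uses floating-point SciPy transforms purely for illustration and is not the integer shift-and-add algorithm the record describes.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_block_approx(block, keep=10):
        """2D DCT of one 8x8 block, zeroing all but the `keep` largest-magnitude
        coefficients before inverting (a crude stand-in for quantization)."""
        c = dctn(block, norm="ortho")
        thresh = np.sort(np.abs(c).ravel())[-keep]
        c[np.abs(c) < thresh] = 0.0
        return idctn(c, norm="ortho")

    rng = np.random.default_rng(1)
    block = np.outer(np.arange(8.0), np.ones(8)) + rng.normal(0, 0.1, (8, 8))
    print(float(np.abs(dct_block_approx(block) - block).max()))
    ```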

  11. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Record fragment; the abstract is garbled in this record and only bibliographic details are recoverable: Psychophysical Comparisons in Image Compression Algorithms, Master's Thesis by Christopher J. Bodine, Naval Postgraduate School, Monterey, California, March 1999. The cited fragments reference work on lossy Lempel-Ziv algorithms for large-alphabet sources applied to image compression and an analysis of data compression algorithms used in the transmission of imagery.

  12. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study examines which morphometric parameters of immunohistochemically stained nuclei are altered by different levels of JPEG compression, what these alterations imply for automated nuclear counts, and further develops a method for correcting the resulting discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in the TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.

  13. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  14. High-resolution STIR for 3-T MRI of the posterior fossa: visualization of the lower cranial nerves and arteriovenous structures related to neurovascular compression.

    PubMed

    Hiwatashi, Akio; Yoshiura, Takashi; Yamashita, Koji; Kamano, Hironori; Honda, Hiroshi

    2012-09-01

    Preoperative evaluation of small vessels without contrast material is sometimes difficult in patients with neurovascular compression disease. The purpose of this retrospective study was to evaluate whether 3D STIR MRI could simultaneously depict the lower cranial nerves--fifth through twelfth--and the blood vessels in the posterior fossa. The posterior fossae of 47 adults (26 women, 21 men) without gross pathologic changes were imaged with 3D STIR and turbo spin-echo heavily T2-weighted MRI sequences and with contrast-enhanced turbo field-echo MR angiography (MRA). Visualization of the cranial nerves on STIR images was graded on a 4-point scale and compared with visualization on T2-weighted images. Visualization of the arteries on STIR images was evaluated according to the segments in each artery and compared with that on MRA images. Visualization of the veins on STIR images was also compared with that on MRA images. Statistical analysis was performed with the Mann-Whitney U test. There were no significant differences between STIR and T2-weighted images with respect to visualization of the cranial nerves (p > 0.05). Identified on STIR and MRA images were 94 superior cerebellar arteries, 81 anteroinferior cerebellar arteries, and 79 posteroinferior cerebellar arteries. All veins evaluated were seen on STIR and MRA images. There were no significant differences between STIR and MRA images with respect to visualization of arteries and veins (p > 0.05). High-resolution STIR is a feasible method for simultaneous evaluation of the lower cranial nerves and the vessels in the posterior fossa without the use of contrast material.

  15. Memory Efficient Sequence Analysis Using Compressed Data Structures (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Simpson, Jared

    2018-01-24

    Wellcome Trust Sanger Institute's Jared Simpson on Memory efficient sequence analysis using compressed data structures at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  16. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  17. Fast electron microscopy via compressive sensing

    DOEpatents

    Larson, Kurt W; Anderson, Hyrum S; Wheeler, Jason W

    2014-12-09

    Various technologies described herein pertain to compressive sensing electron microscopy. A compressive sensing electron microscope includes a multi-beam generator and a detector. The multi-beam generator emits a sequence of electron patterns over time. Each of the electron patterns can include a plurality of electron beams, where the plurality of electron beams is configured to impart a spatially varying electron density on a sample. Further, the spatially varying electron density varies between each of the electron patterns in the sequence. Moreover, the detector collects signals respectively corresponding to interactions between the sample and each of the electron patterns in the sequence.

  18. Locating Encrypted Data Hidden Among Non-Encrypted Data Using Statistical Tools

    DTIC Science & Technology

    2007-03-01

    Record fragment; the abstract is garbled in this record. The recoverable portion concerns randomness testing of bit sequences: if a bit sequence can be significantly compressed, then it is not random (the Lempel-Ziv compression test measures the length of a compressed sequence). The remaining fragments discuss software used for communication, targeting, and a host of other tasks that will contain classified data or algorithms requiring protection, while soldiers retain access to the common unclassified tasks as the program is executed.

  19. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
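
    The two quality metrics used in that comparison can be computed with scikit-image as in the sketch below; the data-range convention and function names are our choices, not those of the study.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def quality_metrics(original, compressed):
        """PSNR and mean structural similarity (MSSIM) between an original
        image and its lossy (e.g. JPEG2000-compressed) reconstruction."""
        data_range = float(original.max() - original.min())
        psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
        mssim = structural_similarity(original, compressed, data_range=data_range)
        return psnr, mssim

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    noisy = img + rng.normal(0, 0.02, img.shape)
    print(quality_metrics(img, noisy))
    ```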

  20. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  1. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems suffer from security risks and from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
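
    The cycle-shift stage can be illustrated with a toy example. In the sketch below a 1D logistic map stands in for the hyper-chaotic system (an assumption made purely to keep the example short), and its output drives row and column rolls of the already-measured image; the map parameters play the role of keys.

    ```python
    import numpy as np

    def chaotic_cycle_shift(img, x0=0.37, r=3.99):
        """Roll each row and then each column by offsets drawn from a
        logistic-map sequence (illustrative stand-in for a hyper-chaotic system)."""
        rows, cols = img.shape
        x, offsets = x0, []
        for _ in range(rows + cols):
            x = r * x * (1 - x)
            offsets.append(int(x * 256))
        out = img.copy()
        for i in range(rows):                      # cycle-shift rows
            out[i] = np.roll(out[i], offsets[i])
        for j in range(cols):                      # then columns
            out[:, j] = np.roll(out[:, j], offsets[rows + j])
        return out

    print(chaotic_cycle_shift(np.arange(64, dtype=np.uint8).reshape(8, 8)))
    ```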

  2. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  3. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  4. iDoComp: a compression scheme for assembled genomes

    PubMed Central

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2015-01-01

    Motivation: With the release of the latest next-generation sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing a human has dropped to a mere $4000. Thus we are approaching a milestone in sequencing history, known as the $1000 genome era, where the sequencing of individuals is affordable, opening the doors to effective personalized medicine. Massive generation of genomic data, including assembled genomes, is expected in the following years. There is a crucial need for compression of genomes that is guaranteed to perform well simultaneously on different species, from simple bacteria to humans, which will ease their transmission, dissemination and analysis. Further, most of the new genomes to be compressed will correspond to individuals of a species for which a reference already exists in a database. Thus, it is natural to propose compression schemes that assume and exploit the availability of such references. Results: We propose iDoComp, a compressor of assembled genomes presented in FASTA format that compresses an individual genome using a reference genome for both the compression and the decompression. In terms of compression efficiency, iDoComp outperforms previously proposed algorithms in most of the studied cases, with comparable or better running time. For example, we observe compression gains of up to 60% in several cases, including H. sapiens data, when comparing with the best compression performance among the previously proposed algorithms. Availability: iDoComp is written in C and can be downloaded from: http://www.stanford.edu/~iochoa/iDoComp.html (We also provide a full explanation on how to run the program and an example with all the necessary files to run it.). Contact: iochoa@stanford.edu Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:25344501
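
    The core idea of reference-based compression, storing only what differs from the reference, can be sketched in a few lines. The snippet below uses Python's difflib and zlib purely for illustration (iDoComp itself uses suffix-array based mapping and its own entropy coder); all names here are ours.

    ```python
    import difflib, json, zlib

    def reference_compress(target, reference):
        """Store only the opcodes that turn the reference into the target,
        then squeeze them with a general-purpose compressor."""
        ops = [[tag, i1, i2, target[j1:j2]]
               for tag, i1, i2, j1, j2
               in difflib.SequenceMatcher(None, reference, target).get_opcodes()
               if tag != "equal"]
        return zlib.compress(json.dumps(ops).encode())

    def reference_decompress(blob, reference):
        """Replay the stored operations against the reference."""
        out, pos = [], 0
        for tag, i1, i2, seg in json.loads(zlib.decompress(blob).decode()):
            out.append(reference[pos:i1])   # copy the unchanged stretch
            out.append(seg)                 # insert the target-side segment
            pos = i2
        out.append(reference[pos:])
        return "".join(out)

    ref = "ACGTACGTACGTTTGACCA"
    tgt = "ACGTACGAACGTTTGACCAGG"
    assert reference_decompress(reference_compress(tgt, ref), ref) == tgt
    ```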

  5. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  6. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by an image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors the image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, reversible or non-reversible. Compression ratio and mean square error are used as performance metrics.
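
    A minimal rank-k SVD compression sketch follows; the storage estimate assumes the factors are kept as k singular values plus k left and right singular vectors, and the function names are ours.

    ```python
    import numpy as np

    def svd_compress(image, k):
        """Keep only the k largest singular values/vectors of the image matrix."""
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        approx = u[:, :k] @ np.diag(s[:k]) @ vt[:k]
        mse = float(((image.astype(float) - approx) ** 2).mean())
        ratio = image.size / (k * (image.shape[0] + image.shape[1] + 1))  # m*n vs k*(m+n+1)
        return approx, mse, ratio

    img = np.add.outer(np.arange(64.0), np.arange(64.0))   # smooth toy "image"
    approx, mse, ratio = svd_compress(img, k=4)
    print(mse, ratio)
    ```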

  7. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    Abstract. The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses the spatial domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performances of these techniques were compared on the basis of bit reduction and compression ratio. LZW was found to be better than the others and was used in the development of the tamper detection and recovery watermarking of medical images (TDARWMI) scheme, to be used for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with that of other watermarking schemes and found to be better. PMID:26839914
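
    The embedding step can be sketched as follows for an 8-bit grayscale image. The snippet uses zlib in place of LZW and SHA-256 as the hash purely for illustration; the ROI/RONI split, function names, and parameters are our assumptions, not the published TDARWMI scheme.

    ```python
    import hashlib
    import zlib
    import numpy as np

    def embed_watermark(image, roi_rows, roi_cols):
        """Compress the ROI plus its hash and hide the bits in the LSBs of the
        RONI pixels of an 8-bit grayscale image (illustrative only)."""
        r0, r1 = roi_rows
        c0, c1 = roi_cols
        roi_bytes = image[r0:r1, c0:c1].tobytes()
        payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest())
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        roni_mask = np.ones(image.shape, dtype=bool)
        roni_mask[r0:r1, c0:c1] = False
        out = image.copy()
        flat = out.reshape(-1)
        targets = np.flatnonzero(roni_mask.reshape(-1))
        assert bits.size <= targets.size, "RONI too small for the payload"
        flat[targets[:bits.size]] = (flat[targets[:bits.size]] & 0xFE) | bits
        return out

    img = np.arange(256 * 256, dtype=np.uint8).reshape(256, 256)
    watermarked = embed_watermark(img, (100, 140), (100, 140))
    ```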

  8. Chunk formation in immediate memory and how it relates to data compression.

    PubMed

    Chekaf, Mustapha; Cowan, Nelson; Mathy, Fabien

    2016-10-01

    This paper attempts to evaluate the capacity of immediate memory to cope with new situations in relation to the compressibility of information likely to allow the formation of chunks. We constructed a task in which untrained participants had to immediately recall sequences of stimuli with possible associations between them. Compressibility of information was used to measure the chunkability of each sequence on a single trial. Compressibility refers to the recoding of information in a more compact representation. Although compressibility has almost exclusively been used to study long-term memory, our theory suggests that a compression process relying on redundancies within the structure of the list materials can occur very rapidly in immediate memory. The results indicated a span of about three items when the list had no structure, but increased linearly as structure was added. The amount of information retained in immediate memory was maximal for the most compressible sequences, particularly when information was ordered in a way that facilitated the compression process. We discuss the role of immediate memory in the rapid formation of chunks made up of new associations that did not already exist in long-term memory, and we conclude that immediate memory is the starting place for the reorganization of information. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need a huge storage unit, thereby necessitating the use of a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit them efficiently. In addition, recent studies raise concerns about the risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  10. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing as well as wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduce the information amount in the change frame. But if the number of objects in the change frames rises above a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We proposed to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC will never become worse than that of image coding. We concluded that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
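
    The selection logic of the BVC described above (encode the bi-level frame, the exclusive-OR change frame, and an ROI crop of the change frame, then keep the smallest bit stream) can be sketched as follows; zlib plus bit packing stands in for the underlying bi-level coder, and the ROI is taken as a simple bounding box of the changed pixels, both assumptions for illustration.

    ```python
    import zlib
    import numpy as np

    def encode_bilevel(mask):
        """Stand-in bi-level coder: pack the bits, then entropy-code with zlib."""
        return zlib.compress(np.packbits(mask).tobytes())

    def bvc_encode(prev_frame, frame):
        """Return (mode, payload), choosing the smallest of the three codings."""
        image_code = encode_bilevel(frame)
        change = np.bitwise_xor(prev_frame, frame)          # change-coding input
        change_code = encode_bilevel(change)
        ys, xs = np.nonzero(change)
        if ys.size:                                         # ROI = bounding box of changes
            roi = change[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            roi_code = encode_bilevel(roi)
        else:
            roi_code = b""                                  # nothing changed, nothing to send
        candidates = {"image": image_code, "change": change_code, "roi": roi_code}
        mode = min(candidates, key=lambda k: len(candidates[k]))
        return mode, candidates[mode]

    prev = np.zeros((64, 64), dtype=np.uint8)
    curr = prev.copy(); curr[10:14, 10:14] = 1              # small moving object
    print(bvc_encode(prev, curr)[0])                        # typically "roi" or "change"
    ```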

  11. Adequate performance of cardiopulmonary resuscitation techniques during simulated cardiac arrest over and under protective equipment in football.

    PubMed

    Waninger, Kevin N; Goodbred, Andrew; Vanic, Keith; Hauth, John; Onia, Joshua; Stoltzfus, Jill; Melanson, Scott

    2014-07-01

    To investigate (1) cardiopulmonary resuscitation (CPR) adequacy during simulated cardiac arrest of equipped football players and (2) whether protective football equipment impedes CPR performance measures. Exploratory crossover study performed on Laerdal SimMan 3 G interactive manikin simulator. Temple University/St Luke's University Health Network Regional Medical School Simulation Laboratory. Thirty BCLS-certified ATCs and 6 ACLS-certified emergency department technicians. Subjects were given standardized rescuer scenarios to perform three 2-minute sequences of compression-only CPR. Baseline CPR sequences were captured on each subject. Experimental conditions included 2-minute sequences of CPR either over protective football shoulder pads or under unlaced pads. Subjects were instructed to adhere to 2010 American Heart Association guidelines (initiation of compressions alone at 100/min to 51 mm). Dependent variables included average compression depth, average compression rate, percentage of time chest wall recoiled, and percentage of hands-on contact during compressions. Differences between subject groups were not found to be statistically significant, so groups were combined (n = 36) for analysis of CPR compression adequacy. Compression depth was deeper under shoulder pads than over (P = 0.02), with mean depths of 36.50 and 31.50 mm, respectively. No significant difference was found with compression rate or chest wall recoil. Chest compression depth is significantly decreased when performed over shoulder pads, while there is no apparent effect on rate or chest wall recoil. Although the clinical outcomes from our observed 15% difference in compression depth are uncertain, chest compression under the pads significantly increases the depth of compressions and more closely approaches American Heart Association guidelines for chest compression depth in cardiac arrest.

  12. DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique

    PubMed Central

    Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and data transferring of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve the compression performance. The proposed framework could handle genome compression with and without reference sequences, and demonstrated performance advantages over best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a comparable decompression cost with existing algorithms. DNAcompact is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536

  13. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set Redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their Set Redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
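
    Since the abstract does not give the details of MARS, MAZE, or MaxGBA, the following is only a conceptual sketch of set redundancy under assumed choices: a pixel-wise median of the set serves as the shared reference, and each image is stored as its losslessly compressed difference from that reference (zlib standing in for the entropy coder).

    ```python
    import zlib
    import numpy as np

    def src_encode(images):
        """Encode a set of similar images against their pixel-wise median."""
        stack = np.stack(images).astype(np.int16)
        reference = np.median(stack, axis=0).astype(np.int16)
        residuals = [zlib.compress((img - reference).astype(np.int16).tobytes())
                     for img in stack]
        return zlib.compress(reference.tobytes()), residuals

    base = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    # Similar images: the same scene plus small independent perturbations.
    images = [np.clip(base + np.random.randint(-2, 3, base.shape), 0, 255).astype(np.uint8)
              for _ in range(4)]
    ref_code, res_codes = src_encode(images)
    print(len(ref_code), [len(r) for r in res_codes])
    ```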

  14. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
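
    A minimal sketch of the factored-representation idea, assuming a synthetic hyperspectral cube, scikit-learn PCA as the factorization, and eight retained factors; all of these choices are illustrative, not the patented algorithm itself.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic multivariate image: 100 x 100 pixels, 64 spectral channels.
    cube = np.random.rand(100, 100, 64).astype(np.float32)
    pixels = cube.reshape(-1, cube.shape[-1])

    pca = PCA(n_components=8)               # keep only the most significant factors
    scores = pca.fit_transform(pixels)      # spatial part of the factored representation
    # Subsequent analysis (or reconstruction) works on the compact factors:
    reconstructed = pca.inverse_transform(scores).reshape(cube.shape)
    stored = scores.size + pca.components_.size + pca.mean_.size
    print(f"stored {stored} values instead of {cube.size}")
    ```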

  15. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  16. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirement of high-speed, pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and to support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of development.

  17. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.

  18. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
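
    To make the idea of linear inter-color decorrelation concrete, the sketch below uses the reversible color transform from lossless JPEG 2000 as a stand-in; it is not the IEP, KLT, or CALIC decorrelators actually compared in the paper.

    ```python
    import numpy as np

    def rct_forward(r, g, b):
        """Reversible color transform: integer arithmetic, exactly invertible."""
        r, g, b = (x.astype(np.int32) for x in (r, g, b))
        y = (r + 2 * g + b) >> 2
        cb = b - g
        cr = r - g
        return y, cb, cr

    def rct_inverse(y, cb, cr):
        g = y - ((cb + cr) >> 2)
        return cr + g, g, cb + g           # r, g, b

    rgb = np.random.randint(0, 256, (3, 64, 64), dtype=np.uint8)
    y, cb, cr = rct_forward(*rgb)
    r, g, b = rct_inverse(y, cb, cr)
    # Lossless round trip: the decorrelated channels compress better yet lose nothing.
    assert np.array_equal(np.stack([r, g, b]).astype(np.uint8), rgb)
    ```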

  19. Frequency of Magnetic Resonance Imaging patterns of tuberculous spondylitis in a public sector hospital.

    PubMed

    Tabassum, Sumera; Haider, Shahbaz

    2016-01-01

    To determine the frequencies of different MRI patterns of tuberculous spondylitis in a public sector hospital in Karachi. This descriptive multidisciplinary case series study was done from October 25, 2011 to May 28, 2012 in the Radiology Department and the Department of Medicine in the Jinnah Postgraduate Medical Center Karachi. MRI scans (dorsal/lumbosacral spine) of the patients presenting with backache in the Medical OPD were performed in the Radiology Department. Axial and sagittal images of T1-weighted, T2-weighted and STIR sequences of the affected region were taken. A total of 140 patients who were diagnosed as having tuberculous spondylitis were further evaluated and analyzed for different patterns of involvement of the spine and compared with similar studies. Among the frequencies of different MRI patterns of tuberculous spondylitis, contiguous vertebral involvement was seen in 100%, discal involvement in 98.6%, paravertebral abscess in 92.1%, epidural abscess in 91.4%, spinal cord / thecal sac compression in 89.3%, vertebral collapse in 72.9%, gibbus deformity in 42.9% and psoas abscess in 36.4% of cases. Contiguous vertebral involvement was the commonest MRI pattern, followed by disk involvement, paravertebral & epidural abscesses, thecal sac compression and vertebral collapse.

  20. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of JPEG compression Qfactor for JPEG compressed bitmap image, and the detection of double compressed JPEG image. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
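
    The following sketch illustrates the kind of check described: gather the first digits of the nonzero block-DCT AC coefficients of an image and compare their empirical distribution with a parametric logarithmic law of the form p(d) = N log10(1 + 1/(s + d^q)); the parameter values shown reduce it to the classical Benford law and are not the fitted values reported by the authors.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def first_digit_histogram(image, block=8):
        """First-digit frequencies of nonzero block-DCT AC coefficients."""
        h, w = (d - d % block for d in image.shape)
        digits = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                c = dctn(image[i:i + block, j:j + block].astype(float), norm="ortho")
                ac = np.abs(c.ravel()[1:])                  # drop the DC coefficient
                ac = ac[ac >= 1]
                digits.append((ac // 10 ** np.floor(np.log10(ac))).astype(int))
        counts = np.bincount(np.concatenate(digits), minlength=10)[1:10]
        return counts / counts.sum()

    def generalized_benford(N=1.0, s=0.0, q=1.0):           # N=1, s=0, q=1: classical Benford
        d = np.arange(1, 10)
        return N * np.log10(1 + 1 / (s + d ** q))

    img = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
    print(first_digit_histogram(img).round(3))
    print(generalized_benford().round(3))
    ```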

  1. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images.

    PubMed

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.

  2. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  3. LFQC: a lossless compression algorithm for FASTQ files

    PubMed Central

    Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state of the art big data compression algorithms namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012), DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148

  4. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for a long coded transmit waveform, this can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging significantly depends on the magnitude of the nonlinear responses of the bubbles and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
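
    For context on the linear case underlying the study, the sketch below compresses a long coded (linear FM) transmit waveform with a matched filter; the pulse parameters are arbitrary, and the nonlinear microbubble responses that cause the range side-lobe artifacts discussed above are not modeled.

    ```python
    import numpy as np
    from scipy.signal import chirp

    fs = 50e6                                  # sampling rate, Hz (illustrative)
    t = np.arange(0, 10e-6, 1 / fs)            # 10-microsecond coded pulse
    code = chirp(t, f0=2e6, t1=t[-1], f1=6e6)  # linear FM transmit waveform

    # Simulated echo: the code delayed by 20 microseconds in a noisy trace.
    trace = np.zeros(4096)
    delay = int(20e-6 * fs)
    trace[delay:delay + code.size] += code
    trace += 0.1 * np.random.randn(trace.size)

    compressed = np.correlate(trace, code, mode="same")    # matched (compression) filter
    print("peak at sample", np.argmax(np.abs(compressed)),
          "expected near", delay + code.size // 2)
    ```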

  5. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
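
    A minimal sketch of the vector-quantization step described above, with each vector formed from the co-located pixels of all channels and the codebook trained by k-means; the codebook size, cube dimensions, and use of scikit-learn are illustrative assumptions, and the subsequent difference-mapped shift-extended Huffman stage is omitted.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    channels, rows, cols = 7, 128, 128
    cube = np.random.randint(0, 256, (channels, rows, cols)).astype(np.float32)

    # One vector per pixel location: the 7 co-located channel values.
    vectors = cube.reshape(channels, -1).T                 # shape (rows*cols, channels)

    codebook_size = 256
    kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)
    indices = kmeans.labels_.astype(np.uint8)              # 1 index/pixel instead of 7 samples
    reconstructed = kmeans.cluster_centers_[indices].T.reshape(cube.shape)

    rms = np.sqrt(np.mean((cube - reconstructed) ** 2))
    print(f"codebook {codebook_size}, RMS error {rms:.1f}")
    ```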

  6. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  7. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01


  8. Benchmark Dataset for Whole Genome Sequence Compression.

    PubMed

    C L, Biji; S Nair, Achuthsankar

    2017-01-01

    The research in DNA data compression lacks a standard dataset for testing compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of a scientifically compiled whole genome sequence dataset and proposes a benchmark dataset built using a multistage sampling procedure. Considering the genome sequences of organisms available at the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only with a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.

  9. Avalanches in compressed Ti-Ni shape-memory porous alloys: An acoustic emission study.

    PubMed

    Soto-Parra, Daniel; Zhang, Xiaoxin; Cao, Shanshan; Vives, Eduard; Salje, Ekhard K H; Planes, Antoni

    2015-06-01

    Mechanical avalanches during compression of martensitic porous Ti-Ni have been characterized by high-frequency acoustic emission (AE). Two sequences of AE signals were found in the same sample. The first sequence is mainly generated by detwinning at the early stages of compression while fracture dominates the later stages. Fracture also determines the catastrophic failure (big crash). For high-porosity samples, the AE energies of both sequences display power-law distributions with exponents ɛ≃2 (twinning) and 1.7 (fracture). The two power laws confirm that twinning and fracture both lead to avalanche criticality during compression. As twinning precedes fracture, the observation of twinning allows us to predict incipient fracture of the porous shape memory material as an early warning sign (i.e., in bone implants) before the fracture collapse actually happens.

  10. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

    New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. CONCLUSIONS: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
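
    A sketch of the lossless path described (channels placed in image rows, optional differencing in time, then lossless coding); zlib stands in for the image codec actually used, and the array shapes, electrode ordering, and synthetic signal are illustrative assumptions.

    ```python
    import zlib
    import numpy as np

    def lossless_emg_compress(emg, use_time_diff=True):
        """emg: int16 array, one electrode channel per image row, samples in columns."""
        data = np.diff(emg, axis=1, prepend=emg[:, :1]) if use_time_diff else emg
        return zlib.compress(data.astype(np.int16).tobytes())

    # Synthetic HD EMG: 64 channels (8 x 8 matrix flattened to rows), 2048 samples.
    emg = (200 * np.sin(np.linspace(0, 40, 2048)) +
           10 * np.random.randn(64, 2048)).astype(np.int16)

    plain = lossless_emg_compress(emg, use_time_diff=False)
    diffed = lossless_emg_compress(emg, use_time_diff=True)
    print(f"FSR without diff: {1 - len(plain) / emg.nbytes:.1%}, "
          f"with diff: {1 - len(diffed) / emg.nbytes:.1%}")
    ```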

  11. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

    Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. Methods: HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  12. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases and by exploiting the universality of the Internet we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard JPEG2000 in managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.

  13. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

    Quantum remote sensing is proposed in response to the development of remote sensing science and technology and its application requirements. First, the background of quantum remote sensing, quantum remote sensing theory, its information mechanism, imaging experiments, the research status of the principle prototype, and related research at home and abroad are briefly introduced. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of the compressed quantum light field and optical imaging for the remote sensing image compression experiments, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is then put forward, mainly including the composition and working principle of the spaceborne active imaging system, the device for preparing and injecting compressed light for active imaging, and the quantum noise amplification device. Finally, the work of the past 15 years of quantum remote sensing research is summarized and future developments are outlined.

  14. Estimation of stress relaxation time for normal and abnormal breast phantoms using optical technique

    NASA Astrophysics Data System (ADS)

    Udayakumar, K.; Sujatha, N.

    2015-03-01

    Many of the micro-anomalies that occur early in the breast may transform into deadly cancerous tumors in the future. The probability of curing early-occurring breast abnormalities is higher if they are correctly identified. Even in mammography, considered the gold standard technique for breast imaging, it is hard to pick up early changes in the breast tissue due to the difference in mechanical behavior between normal and abnormal tissue when the breast is subjected to compression prior to x-ray or laser exposure. In this paper, an attempt has been made to estimate the stress relaxation time of normal and abnormal breast-mimicking phantoms using laser speckle image correlation. A phantom mimicking normal breast tissue is prepared and subjected to precise mechanical compression. The phantom is illuminated by a Helium-Neon laser and, using a CCD camera, a sequence of strained phantom speckle images is captured and correlated through the image mean intensity value at specific time intervals. From the relation between mean intensity and time, the tissue stress relaxation time is quantified. Experiments were repeated for phantoms with increased stiffness, mimicking abnormal tissue, over similar ranges of applied loading. The results show that the stiffer phantom, representing abnormal tissue, exhibits uniform relaxation for varying loads in the selected range, whereas the less stiff phantom, representing normal tissue, shows irregular behavior for varying loadings in the given range.

  15. Diffusion-weighted imaging and the skeletal system: a literature review.

    PubMed

    Yao, K; Troupis, J M

    2016-11-01

    Diffusion-weighted imaging (DWI) is a magnetic resonance imaging (MRI) sequence that has a well-established role in neuroimaging, and is increasingly being utilised in other clinical contexts, including the assessment of various skeletal disorders. It utilises the variability of Brownian motion of water molecules; the differing patterns of water molecular diffusion in various biological tissues help determine the contrast obtained in DWI. Although early research on the clinical role of DWI focused mainly on the field of neuroimaging, there are now more studies demonstrating the promising role DWI has in the diagnosis and monitoring of various osseous diseases. DWI has been shown to be useful in assessing a patient's skeletal tumour burden, monitoring the post-chemotherapy response of various bony malignancies, detecting hip ischaemia in patients with Legg-Calvé-Perthes disease, as well as determining the quality of repaired articular cartilage. Despite its relative successes, DWI has several limitations, including its limited clinical value in differentiating chondrosarcomas from benign bone lesions, as well as osteoporotic vertebral compression fractures from compression fractures due to malignancy. This literature review aims to provide an overview of the recent developments in the use of DWI in imaging the skeletal system, and to clarify the role of DWI in assessing various osseous diseases. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  16. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
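
    To make the IWT ingredient concrete, here is a one-level LeGall 5/3 integer lifting transform on a 1-D signal (a 2-D transform applies it along rows and then columns); this standard lifting scheme with periodic boundary handling is an assumed example of an integer wavelet transform, not the specific configuration or the SPIHT/SSPIHT stages of the proposed scheme.

    ```python
    import numpy as np

    def lift53_forward(x):
        """One level of the integer 5/3 lifting wavelet transform (even-length input)."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict step: high-pass (detail) coefficients.
        d = odd - ((even + np.roll(even, -1)) >> 1)
        # Update step: low-pass (approximation) coefficients.
        s = even + ((np.roll(d, 1) + d + 2) >> 2)
        return s, d

    def lift53_inverse(s, d):
        # Undo the update, then the predict, in reverse order.
        even = s - ((np.roll(d, 1) + d + 2) >> 2)
        odd = d + ((even + np.roll(even, -1)) >> 1)
        x = np.empty(s.size + d.size, dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x

    sig = np.random.randint(0, 4096, 64)
    s, d = lift53_forward(sig)
    assert np.array_equal(lift53_inverse(s, d), sig)   # perfect (lossless) reconstruction
    ```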

  17. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
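
    A minimal sketch of the two ends described above under simplifying assumptions: each block is measured with a fixed random Gaussian matrix rather than an adaptively sized one, and the linear decoder uses a ridge-regularized pseudo-inverse in place of the MMSE projection learned from training data.

    ```python
    import numpy as np

    block, m = 16, 64                         # 16x16 block, 64 measurements (ratio 0.25)
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((m, block * block)) / np.sqrt(m)   # measurement matrix

    def encode_block(x):
        # Low-complexity encoder: a single matrix-vector product per block.
        return Phi @ x.ravel()

    # Linear decoder: a fixed projection matrix applied to the measurements.
    lam = 1e-2
    P = Phi.T @ np.linalg.inv(Phi @ Phi.T + lam * np.eye(m))     # ridge pseudo-inverse

    def decode_block(y):
        return (P @ y).reshape(block, block)  # real-time linear reconstruction

    x = np.outer(np.hanning(block), np.hanning(block))           # smooth test block
    x_hat = decode_block(encode_block(x))
    print("per-block MSE:", np.mean((x - x_hat) ** 2))
    ```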

  18. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.

  19. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we will refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
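
    A sketch of the transform-and-threshold stage described (Daubechies-12 decomposition, hard thresholding of the detail coefficients); quantization and entropy coding, which the paper explicitly leaves out, are likewise omitted, and the image and threshold value are synthetic stand-ins.

    ```python
    import numpy as np
    import pywt

    img = np.random.poisson(50, (256, 256)).astype(float)    # noisy stand-in for a PET slice

    coeffs = pywt.wavedec2(img, wavelet="db12", level=3)
    threshold = 10.0
    # Keep the approximation band, hard-threshold every detail band.
    thresholded = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="hard") for band in detail)
        for detail in coeffs[1:]
    ]
    recon = pywt.waverec2(thresholded, wavelet="db12")

    kept = sum(np.count_nonzero(band) for detail in thresholded[1:] for band in detail)
    total = sum(band.size for detail in coeffs[1:] for band in detail)
    print(f"nonzero detail coefficients kept: {kept}/{total}")
    ```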

  20. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise at the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstruction image and the intensity ratio (namely, the average signal intensity to average noise intensity ratio) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote-sensing and security areas.

  1. Dynamic Liver Magnetic Resonance Imaging in Free-Breathing: Feasibility of a Cartesian T1-Weighted Acquisition Technique With Compressed Sensing and Additional Self-Navigation Signal for Hard-Gated and Motion-Resolved Reconstruction.

    PubMed

    Kaltenbach, Benjamin; Bucher, Andreas M; Wichmann, Julian L; Nickel, Dominik; Polkowski, Christoph; Hammerstingl, Renate; Vogl, Thomas J; Bodelle, Boris

    2017-11-01

    The aim of this study was to assess the feasibility of a free-breathing dynamic liver imaging technique using a prototype Cartesian T1-weighted volumetric interpolated breathhold examination (VIBE) sequence with compressed sensing and simultaneous acquisition of a navigation signal for hard-gated and motion state-resolved reconstruction. A total of 43 consecutive oncologic patients (mean age, 66 ± 11 years; 44% female) underwent free-breathing dynamic liver imaging for the evaluation of liver metastases from colorectal cancer using a prototype Cartesian VIBE sequence (field of view, 380 × 345 mm; image matrix, 320 × 218; echo time/repetition time, 1.8/3.76 milliseconds; flip angle, 10 degrees; slice thickness, 3.0 mm; acquisition time, 188 seconds) with continuous data sampling and additionally acquired self-navigation signal. Data were iteratively reconstructed using 2 different approaches: first, a hard-gated reconstruction only using data associated to the dominating motion state (CS VIBE, Compressed Sensing VIBE), and second, a motion-resolved reconstruction with 6 different motion states as additional image dimension (XD VIBE, eXtended dimension VIBE). Continuous acquired data were grouped in 16 subsequent time increments with 11.57 seconds each to resolve arterial and venous contrast phases. For image quality assessment, both CS VIBE and XD VIBE were compared with the patient's last staging dynamic liver magnetic resonance imaging including a breathhold (BH) VIBE as reference standard 4.5 ± 1.2 months before. Representative quality parameters including respiratory artifacts were evaluated for arterial and venous phase images independently, retrospectively and blindly by 3 experienced radiologists, with higher scores indicating better examination quality. To assess diagnostic accuracy, same readers evaluated the presence of metastatic lesions for XD VIBE and CS VIBE compared with reference BH examination in a second session. Compared with CS VIBE, XD VIBE showed significantly higher overall image quality for both arterial phase (4.2 ± 0.6 vs 3.8 ± 0.7, P = 0.008) and venous phase (4.7 ± 0.4 vs 4.3 ± 0.7, P < 0.001) imaging. There was no significant difference between XD VIBE and BH VIBE for overall image quality in the venous phase (4.7 ± 0.4 vs 4.8 ± 0.4, P = 0.834), whereas arterial phase images were scored slightly lower for XD VIBE (4.5 ± 0.6 vs 4.2 ± 0.6, P = 0.024). Both XD VIBE and BH VIBE were characterized by a very low level of respiratory artifacts with no significant difference between BH and motion-resolved free-breathing strategy (P = 0.505 for arterial phase; P = 0.496 for venous phase). Compared with CS VIBE, obvious quality improvement could be achieved for the extended XD VIBE reconstruction with significantly reduced motion artifacts for venous phase images (P = 0.007). Generally, arterial phase images were scored slightly lower compared with venous phase images when using the free-breathing protocol. Overall, 98% of all metastatic lesions were identified on XD VIBE images and 92% of all metastases were found on CS VIBE. Dynamic liver imaging using the proposed free-breathing Cartesian strategy is feasible in oncologic patients with excellent image quality, high respiratory motion robustness, and accurate lesion detection. Overall, XD VIBE was superior to CS VIBE in our study.

  2. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
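
    A minimal numerical sketch of such a block transform model: 8 x 8 DCT blocks, uniform quantization of the transform coefficients, and measurement of the resulting quantization distortion; the block size and step sizes are illustrative, not parameters from the report.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def block_transform_quantize(image, block=8, q=16):
        """DCT each block, quantize coefficients with step q, reconstruct, return MSE."""
        h, w = (d - d % block for d in image.shape)
        recon = np.empty((h, w))
        for i in range(0, h, block):
            for j in range(0, w, block):
                c = dctn(image[i:i + block, j:j + block].astype(float), norm="ortho")
                c_hat = q * np.round(c / q)                 # uniform scalar quantization
                recon[i:i + block, j:j + block] = idctn(c_hat, norm="ortho")
        return np.mean((image[:h, :w] - recon) ** 2)

    img = np.random.randint(0, 256, (128, 128)).astype(float)
    for q in (4, 16, 64):
        print(f"step {q:3d}: quantization MSE {block_transform_quantize(img, q=q):.2f}")
    ```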

  3. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.

  4. A model-based reconstruction for undersampled radial spin echo DTI with variational penalties on the diffusion tensor

    PubMed Central

    Knoll, Florian; Raya, José G; Halloran, Rafael O; Baete, Steven; Sigmund, Eric; Bammer, Roland; Block, Tobias; Otazo, Ricardo; Sodickson, Daniel K

    2015-01-01

    Radial spin echo diffusion imaging allows motion-robust imaging of tissues with very low T2 values like articular cartilage with high spatial resolution and signal-to-noise ratio (SNR). However, in vivo measurements are challenging due to the significantly slower data acquisition speed of spin-echo sequences and the less efficient k-space coverage of radial sampling, which raises the demand for accelerated protocols by means of undersampling. This work introduces a new reconstruction approach for undersampled DTI. A model-based reconstruction implicitly exploits redundancies in the diffusion weighted images by reducing the number of unknowns in the optimization problem and compressed sensing is performed directly in the target quantitative domain by imposing a Total Variation (TV) constraint on the elements of the diffusion tensor. Experiments were performed for an anisotropic phantom and the knee and brain of healthy volunteers (3 and 2 volunteers, respectively). Evaluation of the new approach was conducted by comparing the results to reconstructions performed with gridding, combined parallel imaging and compressed sensing, and a recently proposed model-based approach. The experiments demonstrated improvement in terms of reduction of noise and streaking artifacts in the quantitative parameter maps as well as a reduction of angular dispersion of the primary eigenvector when using the proposed method, without introducing systematic errors into the maps. This may enable an essential reduction of the acquisition time in radial spin echo diffusion tensor imaging without degrading parameter quantification and/or SNR. PMID:25594167

  5. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment experiments can be validated against each other. Main conclusions include: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  6. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification- map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.

  7. Comparison of Liver Tumor Motion With and Without Abdominal Compression Using Cine-Magnetic Resonance Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eccles, Cynthia L.; Patel, Ritesh; Simeonov, Anna K.

    2011-02-01

    Purpose: Abdominal compression (AC) can be used to reduce respiratory liver motion in patients undergoing liver stereotactic body radiotherapy. The purpose of the present study was to measure the changes in three-dimensional liver tumor motion with and without compression using cine-magnetic resonance imaging. Patients and Methods: A total of 60 patients treated as a part of an institutional research ethics board-approved liver stereotactic body radiotherapy protocol underwent cine T2-weighted magnetic resonance imaging through the tumor centroid in the coronal and sagittal planes. A total of 240 cine-magnetic resonance imaging sequences acquired at one to three images each second for 30-60 s were evaluated using an in-house-developed template matching tool (based on the correlation coefficient) to measure the magnitude of the tumor motion. The average tumor edge displacements were used to determine the magnitude of changes in the caudal-cranial (CC) and anteroposterior (AP) directions, with and without AC. Results: The mean tumor motion without AC of 11.7 mm (range, 4.8-23.3) in the CC direction was reduced to 9.4 mm (range, 1.6-23.4) with AC. The tumor motion was reduced in both directions (CC and AP) in 52% of the patients and in a single direction (CC or AP) in 90% of the patients. The mean decrease in tumor motion with AC was 2.3 and 0.6 mm in the CC and AP direction, respectively. Increased motion occurred in one or more directions in 28% of patients. Clinically significant (>3 mm) decreases were observed in 40% and increases in <2% of patients in the CC direction. Conclusion: AC can significantly reduce three-dimensional liver tumor motion in most patients, although the magnitude of the reduction was smaller than previously reported.

  8. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve the high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
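
    The hybrid idea can be sketched in a few lines: one DWT level isolates a low-frequency approximation band, a DCT is then applied to that band, and only the strongest coefficients are kept. This is an illustrative pipeline under assumed parameters (Haar wavelet, 10% coefficient retention, detail bands simply zeroed), not the authors' exact scheme, and it assumes the pywt, scipy and numpy packages are available.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(image, keep_fraction=0.1):
    """One DWT level, then a DCT on the approximation band; only the strongest
    DCT coefficients are kept. Detail bands are zeroed in this sketch."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    coeffs = dctn(cA, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    coeffs[np.abs(coeffs) < thresh] = 0.0          # discard weak DCT coefficients
    cA_rec = idctn(coeffs, norm="ortho")
    zeros = np.zeros_like(cH)
    return pywt.idwt2((cA_rec, (zeros, zeros, zeros)), "haar")

x = np.linspace(0.0, 1.0, 64)
img = 0.5 + 0.5 * np.outer(np.sin(2 * np.pi * 2 * x), np.cos(2 * np.pi * 3 * x))
rec = hybrid_dwt_dct(img)
print("PSNR (dB):", 10 * np.log10(1.0 / np.mean((img - rec) ** 2)))
```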

  9. Breast compression in mammography: how much is enough?

    PubMed

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  10. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
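
    A rate-control loop in the spirit of the study can be sketched as a bisection over bit rate. Here compress_at_rate and perceptual_metric are hypothetical placeholders for a JPEG 2000 codec and a visual discrimination model (any perceptual metric such as SSIM could stand in); neither name nor the threshold value comes from the paper.

```python
def visually_lossless_rate(image, compress_at_rate, perceptual_metric,
                           jnd_threshold=1.0, lo=0.05, hi=4.0, iters=12):
    """Bisection over bits per pixel: return the lowest rate whose perceptual
    distortion stays at or below the threshold. Assumes the highest rate `hi`
    is already free of visible artifacts."""
    best = hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        recon = compress_at_rate(image, bits_per_pixel=mid)   # hypothetical codec call
        if perceptual_metric(image, recon) <= jnd_threshold:  # hypothetical JND stand-in
            best, hi = mid, mid        # acceptable: try an even lower rate
        else:
            lo = mid                   # visible artifacts: allocate more bits
    return best
```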

  11. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  12. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
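
    A simplified one-pass adaptive vector quantizer conveying the LAVQ idea (a codebook grown and refined on the fly, with no prior source statistics) might look as follows; the distortion threshold and learning rate are illustrative choices, not values from the study.

```python
import numpy as np

def adaptive_vq(blocks, max_dist=0.05, lr=0.1):
    """One-pass adaptive VQ sketch: add a codeword when no existing one is close
    enough, otherwise nudge the matched codeword toward the new block."""
    codebook, indices = [], []
    for b in blocks:
        if codebook:
            dists = [np.mean((b - c) ** 2) for c in codebook]
            k = int(np.argmin(dists))
        if not codebook or dists[k] > max_dist:
            codebook.append(b.astype(float).copy())        # unseen feature: new codeword
            k = len(codebook) - 1
        else:
            codebook[k] += lr * (b - codebook[k])           # refine existing codeword
        indices.append(k)
    return codebook, indices

rng = np.random.default_rng(2)
blocks = [rng.random((4, 4)) for _ in range(100)]
cb, idx = adaptive_vq(blocks)
print(f"{len(blocks)} blocks -> {len(cb)} codewords")
```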

  13. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. A new complexity measure for time series analysis and classification

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth

    2013-07-01

    Complexity measures are used in a number of applications including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, such as Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov, and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure ETC and define it as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence to a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy even with relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
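
    The ETC measure as described above can be implemented directly: count the NSRPS passes needed to reduce a sequence to a constant one. The sketch below is written from the abstract's description; edge-case conventions (for example, overlapping pair counting) may differ from the authors' implementation.

```python
from collections import Counter

def etc(sequence):
    """Effort To Compress: number of NSRPS passes until the sequence is constant."""
    # map arbitrary symbols to integers so new pair-symbols can be appended
    alphabet = {s: i for i, s in enumerate(dict.fromkeys(sequence))}
    seq = [alphabet[s] for s in sequence]
    next_sym, iterations = len(alphabet), 0
    while len(seq) > 1 and len(set(seq)) > 1:
        # most frequent adjacent pair (overlap handling kept simple here)
        pair = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)   # substitute the pair with a new symbol
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_sym, iterations = out, next_sym + 1, iterations + 1
    return iterations

print(etc("aaaaaaa"), etc("ababababab"), etc("abcadbcadcba"))
```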

  15. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
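
    A bare-bones illustration of the two core operations, a wavelet decomposition followed by uniform scalar quantization, is given below; the actual WSQ standard additionally specifies the subband structure, per-subband bit allocation and Huffman coding, none of which is reproduced here. The example assumes the pywt and numpy packages.

```python
import numpy as np
import pywt

def wsq_like(image, levels=3, step=4.0):
    """Wavelet decomposition plus uniform scalar quantization with a single step
    size; the real WSQ standard uses per-subband steps and entropy coding."""
    coeffs = pywt.wavedec2(image.astype(float), "bior4.4", level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step)                      # quantize every coefficient
    rec_coeffs = pywt.array_to_coeffs(q * step, slices, output_format="wavedec2")
    return pywt.waverec2(rec_coeffs, "bior4.4"), q

img = np.random.default_rng(3).random((128, 128)) * 255
rec, q = wsq_like(img)
print("nonzero coefficients:", int(np.count_nonzero(q)), "of", q.size)
```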

  16. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurement obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). Force-standardized protocol led to widely variable compression parameters in Asian women. Based on phantom study, it is feasible to reduce compression force up to 32.5% with minimal effects on image quality and MGD.

  17. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  18. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
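
    The backward-adaptive idea can be illustrated with a toy predictor selector: each block is predicted either temporally (co-located block of the previous frame) or spatially (left neighbour), and the choice for the next block is made from whichever predictor did better on the current one, so no side information needs to be sent. This is a sketch of the principle only, not the authors' coder.

```python
import numpy as np

def predict_frame(cur, prev, block=8):
    """Return prediction residuals with backward-adaptive spatial/temporal selection."""
    h, w = cur.shape
    residual = np.zeros((h, w), dtype=np.int32)
    use_temporal = True                      # running decision, updated per block
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            temporal = prev[by:by + block, bx:bx + block].astype(np.int32)
            spatial = np.empty_like(blk)
            spatial[:, 1:] = blk[:, :-1]     # left-neighbour prediction inside the block
            spatial[:, 0] = temporal[:, 0]   # first column falls back to the temporal predictor
            pred = temporal if use_temporal else spatial
            residual[by:by + block, bx:bx + block] = blk - pred
            # backward adaptation: the better predictor on this block decides the next one
            use_temporal = np.abs(blk - temporal).sum() <= np.abs(blk - spatial).sum()
    return residual

rng = np.random.default_rng(4)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.clip(prev.astype(int) + rng.integers(-3, 4, (64, 64)), 0, 255).astype(np.uint8)
res = predict_frame(cur, prev)
print("mean |residual|:", float(np.abs(res).mean()))
```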

  19. Lossy compression of quality scores in genomic data.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2014-08-01

    Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insert/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. rcanovas@student.unimelb.edu.au Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
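
    A minimal example of the kind of lossy transform discussed above is uniform binning of Phred quality scores, which shrinks the symbol alphabet and makes the stream far more compressible; the bin layout here is illustrative and is not one of the paper's two proposed techniques.

```python
def bin_quality_scores(qualities, bin_width=8, max_q=41):
    """Map each Phred score to the midpoint of its bin (lossy, fewer distinct values)."""
    binned = []
    for q in qualities:
        q = min(q, max_q)
        lo = (q // bin_width) * bin_width
        binned.append(min(lo + bin_width // 2, max_q))
    return binned

print(bin_quality_scores([2, 11, 19, 27, 35, 40]))
# fewer distinct values -> long runs that a downstream entropy coder compresses well
```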

  20. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  1. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, allowing the resolution of the images to be improved and the interpretable content to be increased. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy through different sampling patterns. This work presents a compressed HS and MS image fusion approach, which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as the reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.

  2. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is reviewed. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method estimates image sparsity effectively and provides a practical basis for selecting the number of compressive measurements. They also show that, because the number of measurements is chosen from the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
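
    The sparsity-estimation step lends itself to a short sketch: take the 2D DCT, normalize and sort the coefficient energies, and count how many are needed to reach the preset energy threshold. The threshold value is illustrative; scipy and numpy are assumed.

```python
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image, energy_threshold=0.99):
    """Fraction of DCT coefficients needed to capture the given energy fraction."""
    coeffs = dctn(image.astype(float), norm="ortho")
    energy = coeffs.ravel() ** 2
    energy = np.sort(energy)[::-1] / energy.sum()      # normalize, sort descending
    k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
    return k / image.size

x = np.linspace(0, 1, 128)
img = np.outer(np.sin(2 * np.pi * 3 * x), np.cos(2 * np.pi * 2 * x))
print(f"estimated sparsity: {estimate_sparsity(img):.4f}")
# the estimate can then drive how many block-based CS measurements are taken
```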

  3. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage, transmission, and reconstruction of objects and data in 2D and 3D form. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression. However, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is examined, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on the reconstruction of images from the compressed holograms are performed, together with a comparative analysis of the applicability of various wavelets and of methods for additional compression of the wavelet coefficients. Optimum compression parameters for these methods can be estimated. The size of the holographic data was reduced by up to a factor of 190.

  4. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.

  5. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  6. New image compression scheme for digital angiocardiography application

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.

    1993-06-01

    The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two stages: the first removes the redundancy within a single frame, while the second removes the redundancy among sequential frames. Within these stages the data compression ratio can be easily adjusted according to the needs of angiocardiography applications, where still or moving (slow- or full-motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing and teleworking processes, where Unified Image Viewing facilities are required.

  7. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  8. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  9. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
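
    The outlier-screening idea can be sketched as follows: candidate displacements from a block-matching search are compared against a robust (median) motion estimate, and those whose squared residual greatly exceeds the median squared residual are discarded. This is a simplified stand-in for the paper's least-median-of-squares fitting and forward-search outlier detection, with an illustrative cutoff factor.

```python
import numpy as np

def lms_filter_matches(displacements, scale=2.5):
    """displacements: (N, 3) candidate voxel displacement vectors from block matching."""
    d = np.asarray(displacements, dtype=float)
    robust_center = np.median(d, axis=0)              # robust local motion estimate
    sq_res = np.sum((d - robust_center) ** 2, axis=1)
    cutoff = scale ** 2 * np.median(sq_res)           # median-of-squares style cutoff
    keep = sq_res <= cutoff
    return d[keep], keep

rng = np.random.default_rng(5)
good = rng.normal([2.0, -1.0, 0.5], 0.2, size=(40, 3))
bad = rng.uniform(-10, 10, size=(5, 3))               # spurious grid-search minima
filtered, mask = lms_filter_matches(np.vstack([good, bad]))
print(f"kept {mask.sum()} of {mask.size} matches")
```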

  10. Power strain imaging based on vibro-elastography techniques

    NASA Astrophysics Data System (ADS)

    Wen, Xu; Salcudean, S. E.

    2007-03-01

    This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Then tissue strain is determined by the least squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on patient data from the prostate region, and the results are encouraging.
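
    A simplified numerical sketch of the processing chain, assuming scipy and numpy: Welch's method estimates each location's displacement power spectrum, the square root of the band-averaged power serves as the motion amplitude, and strain is taken as the least-squares slope of that amplitude over a sliding depth window. All parameter values here are illustrative.

```python
import numpy as np
from scipy.signal import welch

def power_strain(displacements, fs=100.0, band=(0.0, 20.0), window=5):
    """displacements: (depth, time) array of tissue displacement estimates."""
    amplitudes = []
    for series in displacements:
        f, pxx = welch(series, fs=fs, nperseg=min(256, series.size))
        in_band = (f >= band[0]) & (f <= band[1])
        amplitudes.append(np.sqrt(pxx[in_band].mean()))    # motion amplitude proxy
    amplitudes = np.asarray(amplitudes)
    # least-squares slope of amplitude over a sliding depth window = strain estimate
    strain = np.array([np.polyfit(np.arange(window), amplitudes[i:i + window], 1)[0]
                       for i in range(len(amplitudes) - window + 1)])
    return amplitudes, strain

rng = np.random.default_rng(6)
disp = np.cumsum(rng.normal(size=(32, 1000)), axis=1) * np.linspace(1.0, 2.0, 32)[:, None]
amp, strain = power_strain(disp)
print(amp.shape, strain.shape)
```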

  11. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, save storage space, and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative threshold shrinkage (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. Then, a small amount of sparse high-frequency information is obtained in the frequency domain. The TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual quality and objective quality of degraded images while accurately restoring them.

  12. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  13. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  14. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high-resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important for small multi-rotorcraft UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are very long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed to refine the motion field estimated from the metadata alone. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize encoder performance. Experiments are performed on both simulated and real-world video sequences.

  15. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands; therefore each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne platforms. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not match the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images; then a wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  16. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.

  17. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589

  18. Subjective evaluations of integer cosine transform compressed Galileo solid state imagery

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry

    1994-01-01

    This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.

  19. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  20. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
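
    The "globally best merges first" idea can be illustrated with a toy merger: all adjacent-pixel merge candidates enter a priority queue keyed by intensity difference and are applied in ascending cost order through a union-find structure. A real segmenter would re-evaluate region statistics after each merge; this sketch keeps the original pixel differences for simplicity, and the merge cutoff is an illustrative value.

```python
import heapq
import numpy as np

def best_merge_first(image, max_diff=10.0):
    """Segment by applying the globally cheapest region merges first."""
    h, w = image.shape
    parent = list(range(h * w))
    def find(a):                              # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    heap = []
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    cost = abs(float(image[y, x]) - float(image[ny, nx]))
                    heapq.heappush(heap, (cost, y * w + x, ny * w + nx))
    while heap:
        cost, a, b = heapq.heappop(heap)      # cheapest remaining merge candidate
        if cost > max_diff:
            break
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra                   # merge the two regions
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

img = np.zeros((32, 32))
img[:, 16:] = 100
print("regions:", len(np.unique(best_merge_first(img))))
```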

  1. An ultra-low-power image compressor for capsule endoscope.

    PubMed

    Lin, Meng-Chun; Dung, Lan-Rong; Weng, Ping-Kuo

    2006-02-25

    Gastrointestinal (GI) endoscopy has been widely applied for the diagnosis of diseases of the alimentary canal, including Crohn's disease, celiac disease and other malabsorption disorders, benign and malignant tumors of the small intestine, vascular disorders, and medication-related small bowel injury. The wireless capsule endoscope has been successfully utilized to diagnose diseases of the small intestine and alleviate the discomfort and pain of patients. However, the resolution of the demosaicked image is still low, and spots of interest may be unintentionally missed. In particular, the images become severely distorted when physicians zoom in for detailed diagnosis. Increasing the resolution may cause significant power consumption in the RF transmitter; hence, image compression is necessary to reduce its power dissipation. To overcome this drawback, we have been developing a new capsule endoscope, called GICam. We developed an ultra-low-power image compression processor for capsule endoscopes and swallowable imaging capsules. In applications of capsule endoscopy, it is imperative to consider battery life/performance trade-offs. State-of-the-art video compression techniques can significantly reduce the image bit rate through their high compression ratios, but they require intensive computation and consume considerable battery power. There are many fast compression algorithms for reducing computation load; however, they may distort the original image, which is undesirable in medical care. Thus, this paper will first simplify traditional video compression algorithms and propose a scalable compression architecture. As a result, the developed video compressor costs only 31 K gates at 2 frames per second, consumes 14.92 mW, and reduces the video size by at least 75%.

  2. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner. Characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image under quite general assumptions concerning the transformations the image will undergo at later processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to super-high quality (SHQ) mode. However, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and shown to provide a more than two-fold increase in average CR compared to SHQ mode without introducing visible distortions with respect to the SHQ-compressed images.

  3. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  4. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder using a combination of Golomb-Rice and unary encoding. All components have been heavily optimized for low power and low cost, and the pipeline is lossless. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for a large buffer memory. The compression algorithm works with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single low-power 65-nm field-programmable gate array (FPGA). The prototype is developed using circular PCBs with a diameter of 16 mm. Several in-vivo and ex-vivo trials on pig intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with existing work, the proposed algorithm offers a lossless yet acceptable level of compression for wireless capsule endoscopy. PMID:25375753
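
    The variable-length coding stage can be illustrated with a minimal Golomb-Rice encoder for prediction residuals (signed residuals are first zig-zag mapped to non-negative integers). The YEF colour conversion and the predictor itself are not reproduced, and the Rice parameter k used here is an illustrative choice, not a value from the paper.

```python
def rice_encode(value, k):
    """Golomb-Rice codeword for a non-negative integer, as a bit string (divisor 2**k)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def zigzag(residual):
    """Map a signed residual to a non-negative integer (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return (residual << 1) if residual >= 0 else ((-residual << 1) - 1)

residuals = [0, -1, 2, -3, 5, 0, 1]
bitstream = "".join(rice_encode(zigzag(r), k=2) for r in residuals)
print(bitstream, "->", len(bitstream), "bits for", len(residuals), "residuals")
```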

  5. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines image sensors, on-board computation units, communication components and energy sources. Compared to the traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence will be effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the computational platforms used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.

  6. Quark enables semi-reference-based compression of RNA-seq data.

    PubMed

    Sarkar, Hirak; Patro, Rob

    2017-11-01

    The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data are expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate a perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  8. Low cost voice compression for mobile digital radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1985-01-01

    A new technique for low-cost, robust voice compression at 4800 bits per second was studied. The approach was based on using a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.
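
    A minimal sketch of the biquad-cascade structure mentioned above, with fixed illustrative coefficients; the actual coder adapts the filter coefficients and adds simplified multipulse excitation, neither of which is reproduced here.

        import numpy as np

        def biquad(x, b0, b1, b2, a1, a2):
            """Direct-form-I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
            y = np.zeros(len(x))
            for n in range(len(x)):
                y[n] = (b0 * x[n]
                        + (b1 * x[n - 1] if n >= 1 else 0.0)
                        + (b2 * x[n - 2] if n >= 2 else 0.0)
                        - (a1 * y[n - 1] if n >= 1 else 0.0)
                        - (a2 * y[n - 2] if n >= 2 else 0.0))
            return y

        def cascade(x, sections):
            """Run the signal through a cascade of biquad sections."""
            for coeffs in sections:
                x = biquad(x, *coeffs)
            return x

        if __name__ == "__main__":
            frame = np.random.default_rng(0).standard_normal(160)   # stand-in for a 20 ms frame at 8 kHz
            sections = [(1.0, -1.6, 0.8, -1.5, 0.7),
                        (1.0, -1.2, 0.5, -1.1, 0.4)]                # illustrative coefficients only
            print(cascade(frame, sections)[:5])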

  9. New procedures to evaluate visually lossless compression for display systems

    NASA Astrophysics Data System (ADS)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding, and reports the new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.

  10. Fat embolism syndrome following percutaneous vertebroplasty: a case report.

    PubMed

    Ahmadzai, Hasib; Campbell, Scott; Archis, Constantine; Clark, William A

    2014-04-01

    Vertebroplasty is commonly performed for management of pain associated with vertebral compression fractures. There have been two previous reports of fatal fat embolism following vertebroplasty. Here we describe a case of fat embolism syndrome following this procedure, and also provide fluoroscopic video evidence consistent with this occurrence. The purpose of this study was to review the literature and report a case of fat embolism syndrome in a patient who underwent percutaneous vertebroplasty for compression fracture. The study design was a clinical case report. A 68-year-old woman who developed sudden back pain with minimal trauma was found to have a T6 vertebral compression fracture on radiographs and bone scans. Percutaneous vertebroplasty of T5 and T6 was performed. Fluoroscopic imaging during the procedure demonstrated compression and rarefaction of the fractured vertebra associated with changes in intrathoracic pressure. Immediately after the procedure, the patient's back pain resolved and she was discharged home. Two days later, she developed increasing respiratory distress, confusion, and chest pain. A petechial rash on her upper arms also appeared. No evidence of bone cement leakage or pulmonary filling defects was seen on computed tomography-pulmonary angiography. Brain magnetic resonance imaging demonstrated hyperintensities in the periventricular and subcortical white matter on T2/fluid-attenuated inversion recovery sequences. A diagnosis of fat embolism syndrome was made, and the patient recovered with conservative management. Percutaneous vertebroplasty is a relatively safe and simple procedure, reducing pain and improving functional limitations in patients with vertebral fractures. This case demonstrates an uncommon yet serious complication of fat embolism syndrome. Clinicians must be aware of this complication when explaining the procedure to patients and provide prompt supportive care when it does occur. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint delivers superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  12. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% of the quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  13. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
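
    As a sketch of the threshold-unit conversion and pooling steps, the Python fragment below divides DCT coefficients by per-coefficient visual thresholds and pools the coefficient differences with a Minkowski exponent; the threshold values, the exponent, and the omission of the contrast-masking stage are all simplifying assumptions.

        import numpy as np

        def to_threshold_units(dct_coeffs, thresholds):
            """Express DCT coefficients in units of their visual detection thresholds."""
            return dct_coeffs / thresholds

        def pooled_error(ref_tu, test_tu, beta=4.0):
            """Minkowski pooling of per-coefficient errors into a single quality number."""
            err = np.abs(ref_tu - test_tu)
            return float(np.mean(err ** beta) ** (1.0 / beta))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            thresholds = rng.uniform(0.5, 4.0, size=(8, 8))   # placeholder visual thresholds
            ref = 10.0 * rng.standard_normal((8, 8))          # reference-block DCT coefficients
            test = ref + rng.standard_normal((8, 8))          # processed-block DCT coefficients
            print(pooled_error(to_threshold_units(ref, thresholds),
                               to_threshold_units(test, thresholds)))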

  14. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  15. Nonlinear Multiscale Transformations: From Synchronization to Error Control

    DTIC Science & Technology

    2001-07-01

    transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close, however the visual quality of the reconstructed image is significantly better for the EC compression algorithm... used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an

  16. The Polygon-Ellipse Method of Data Compression of Weather Maps

    DTIC Science & Technology

    1994-03-28

    Report No. DOT/FAA/RD-9416, Project Report AD-A278 958, ATC-213, The Polygon-Ellipse Method of Data Compression of Weather Maps, J.L. Gertz, 28... a means must be found to compress this image. The Polygon-Ellipse (PE) encoding algorithm developed in this report represents weather regions... severely compress the image. For example, Mode S would require approximately a 10-fold compression. In addition, the algorithms used to perform the

  17. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) or telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.

  18. Compressive passive millimeter wave imager

    DOEpatents

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
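
    A toy model of this measurement scheme, assuming a flattened scene and SciPy's Hadamard matrix: a random subset of rows stands in for the sampled acquisitions, and a plain least-squares inversion stands in for whatever reconstruction the imager actually uses.

        import numpy as np
        from scipy.linalg import hadamard

        n, m = 64, 32                            # scene pixels, number of sampled Hadamard patterns
        rng = np.random.default_rng(0)

        H = hadamard(n)                          # full Hadamard matrix, entries +/-1
        A = H[rng.choice(n, size=m, replace=False)]   # the sampled subset of masks

        x_true = rng.random(n)                   # toy scene
        y = A @ x_true                           # compressive measurements

        # Minimum-norm least-squares reconstruction from the subset of acquisitions.
        x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.linalg.norm(x_hat - x_true))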

  19. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  20. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, a DC-Matrix and an AC-Matrix, i.e. a low-frequency and a high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values and the decoded AC coefficients are combined in one matrix, followed by an inverse two-level DCT with a two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
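
    A sketch of step (1) under stated assumptions: two PyWavelets DWT levels followed by a block DCT of the remaining low-frequency band, which is then split into a per-block DC matrix and the AC blocks. The exact DC-/AC-Matrix construction, the Minimize-Matrix-Size step, and the arithmetic coder are not reproduced here.

        import numpy as np
        import pywt                      # PyWavelets
        from scipy.fft import dctn       # DCT-II applied over the block axes

        def blockwise_dct(band, bs=8):
            """8x8 block DCT of a sub-band; returns per-block DC values and the AC blocks."""
            h, w = (band.shape[0] // bs) * bs, (band.shape[1] // bs) * bs
            blocks = band[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
            coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
            dc_matrix = coeffs[..., 0, 0]                 # one DC value per block
            ac_blocks = coeffs.copy()
            ac_blocks[..., 0, 0] = 0.0
            return dc_matrix, ac_blocks

        if __name__ == "__main__":
            img = np.random.default_rng(0).random((256, 256))
            ll1, _ = pywt.dwt2(img, "db2")                # first DWT level
            ll2, _ = pywt.dwt2(ll1, "db2")                # second DWT level
            dc, ac = blockwise_dct(ll2)
            print(dc.shape, ac.shape)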

  1. Finite-element modeling of compression and gravity on a population of breast phantoms for multimodality imaging simulation.

    PubMed

    Sturgeon, Gregory M; Kiarashi, Nooshin; Lo, Joseph Y; Samei, E; Segars, W P

    2016-05-01

    The authors are developing a series of computational breast phantoms based on breast CT data for imaging research. In this work, the authors develop a program that will allow a user to alter the phantoms to simulate the effect of gravity and compression of the breast (craniocaudal or mediolateral oblique) making the phantoms applicable to multimodality imaging. This application utilizes a template finite-element (FE) breast model that can be applied to their presegmented voxelized breast phantoms. The FE model is automatically fit to the geometry of a given breast phantom, and the material properties of each element are set based on the segmented voxels contained within the element. The loading and boundary conditions, which include gravity, are then assigned based on a user-defined position and compression. The effect of applying these loads to the breast is computed using a multistage contact analysis in FEBio, a freely available and well-validated FE software package specifically designed for biomedical applications. The resulting deformation of the breast is then applied to a boundary mesh representation of the phantom that can be used for simulating medical images. An efficient script performs the above actions seamlessly. The user only needs to specify which voxelized breast phantom to use, the compressed thickness, and orientation of the breast. The authors utilized their FE application to simulate compressed states of the breast indicative of mammography and tomosynthesis. Gravity and compression were simulated on example phantoms and used to generate mammograms in the craniocaudal or mediolateral oblique views. The simulated mammograms show a high degree of realism illustrating the utility of the FE method in simulating imaging data of repositioned and compressed breasts. The breast phantoms and the compression software can become a useful resource to the breast imaging research community. These phantoms can then be used to evaluate and compare imaging modalities that involve different positioning and compression of the breast.

  2. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
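
    A minimal sketch of the predictive idea under simplifying assumptions: here each band is predicted from the previous one by a single least-squares gain, and the empirical entropy of the rounded residuals serves as a rough stand-in for what the adaptive arithmetic coder could achieve. The second, count-based correction stage of the actual algorithm is omitted.

        import numpy as np

        def predict_band(prev_band, cur_band):
            """First-stage linear prediction: scale the previous spectral band by a least-squares gain."""
            gain = np.vdot(prev_band, cur_band) / np.vdot(prev_band, prev_band)
            return gain * prev_band

        def residual_entropy(residuals):
            """Empirical entropy (bits/sample) of the rounded prediction residuals."""
            _, counts = np.unique(np.round(residuals).astype(int), return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            band0 = 1000.0 * rng.random((64, 64))
            band1 = 0.9 * band0 + rng.normal(0, 5, size=band0.shape)   # correlated toy bands
            res = band1 - predict_band(band0, band1)
            print(f"{residual_entropy(res):.2f} bits/sample")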

  3. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    NASA Astrophysics Data System (ADS)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it; while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged; and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potentiality of accessing complementary information from scientific studies of laser-driven shock compression.

  4. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
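
    A minimal sketch of a projected Landweber iteration for a ghost-imaging-style linear model, assuming random illumination patterns and a non-negativity projection; the guided-filter denoising step from the paper is only indicated by a comment.

        import numpy as np

        def projected_landweber(A, y, n_iter=200):
            """Gradient step on ||Ax - y||^2 followed by projection onto the non-negative orthant."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2       # step size small enough for convergence
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + step * (A.T @ (y - A @ x))       # Landweber (gradient) update
                x = np.maximum(x, 0.0)                   # projection step
                # A denoising step (e.g. a guided filter) could be inserted here.
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n, m = 256, 100                              # pixels, speckle patterns (undersampled)
            A = rng.standard_normal((m, n))              # random illumination patterns
            x_true = np.zeros(n)
            x_true[60:90] = 1.0                          # simple non-negative object
            x_hat = projected_landweber(A, A @ x_true)
            print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))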

  5. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.

  6. Sliding window prior data assisted compressed sensing for MRI tracking of lung tumors.

    PubMed

    Yip, Eugene; Yun, Jihyun; Wachowicz, Keith; Gabos, Zsolt; Rathee, Satyapal; Fallone, B G

    2017-01-01

    Hybrid magnetic resonance imaging and radiation therapy devices are capable of imaging in real-time to track intrafractional lung tumor motion during radiotherapy. Highly accelerated magnetic resonance (MR) imaging methods can potentially reduce system delay time and/or improve imaging spatial resolution, and provide flexibility in imaging parameters. Prior Data Assisted Compressed Sensing (PDACS) has previously been proposed as an acceleration method that combines the advantages of 2D compressed sensing and the KEYHOLE view-sharing technique. However, as PDACS relies on prior data acquired at the beginning of a dynamic imaging sequence, decline in image quality occurs for longer duration scans due to drifts in MR signal. Novel sliding window-based techniques for refreshing prior data are proposed as a solution to this problem. MR acceleration is performed by retrospective removal of data from the fully sampled sets. Six patients with lung tumors are scanned with a clinical 3 T MRI using a balanced steady-state free precession (bSSFP) sequence for 3 min at approximately 4 frames per second, for a total of 650 dynamics. A series of distinct pseudo-random patterns of partial k-space acquisition is generated such that, when combined with other dynamics within a sliding window of 100 dynamics, it covers the entire k-space. The prior data in the sliding window are continuously refreshed to reduce the impact of MR signal drifts. Two different ways to utilize the sliding window data are demonstrated: a simple averaging method and a navigator-based method. These two sliding window methods are quantitatively compared against the original PDACS method using three metrics: artifact power, centroid displacement error, and Dice's coefficient. The study is repeated with pseudo 0.5 T images by adding complex, normally distributed noise with a standard deviation that reduces image SNR, relative to the original 3 T images, by a factor of 6. Without a sliding window implemented, PDACS-reconstructed dynamic datasets showed progressive increases in image artifact power as the 3 min scan progresses. With sliding windows implemented, this increase in artifact power is eliminated. Near the end of a 3 min scan at 3 T SNR and 5× acceleration, implementation of an averaging (navigator) sliding window method improves our metrics in the following ways: artifact power decreases from 0.065 without sliding window to 0.030 (0.031), centroid error decreases from 2.64 to 1.41 mm (1.28 mm), and Dice coefficient agreement increases from 0.860 to 0.912 (0.915). At pseudo 0.5 T SNR, the improvements in metrics are as follows: artifact power decreases from 0.110 without sliding window to 0.0897 (0.0985), centroid error decreases from 2.92 mm to 1.36 mm (1.32 mm), and Dice coefficient agreement increases from 0.851 to 0.894 (0.896). In this work we demonstrated the negative impact of slow changes in MR signal for longer duration PDACS dynamic scans, namely increases in image artifact power and reductions of tumor tracking accuracy. We have also demonstrated that sliding window implementations (i.e., refreshing of prior data) of PDACS are effective solutions to this problem for both 3 T and simulated 0.5 T bSSFP images. © 2016 American Association of Physicists in Medicine.
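
    Two of the three metrics used above are easy to sketch; the Python fragment below computes the Dice coefficient and the centroid displacement error between a reference and a reconstructed tumour mask. The pixel size is a placeholder, and the artifact-power metric is not shown.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice overlap between two binary tumour masks."""
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        def centroid_error_mm(mask_a, mask_b, pixel_mm=1.0):
            """Distance between the two mask centroids, in millimetres."""
            ca = np.array(np.nonzero(mask_a)).mean(axis=1)
            cb = np.array(np.nonzero(mask_b)).mean(axis=1)
            return float(np.linalg.norm(ca - cb) * pixel_mm)

        if __name__ == "__main__":
            ref = np.zeros((64, 64), bool); ref[20:30, 20:30] = True   # fully sampled contour
            rec = np.zeros((64, 64), bool); rec[21:31, 22:32] = True   # reconstructed contour
            print(dice_coefficient(ref, rec), centroid_error_mm(ref, rec, pixel_mm=2.0))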

  7. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  8. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 x 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
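
    The square-root idea can be sketched as quantizing each pixel on a square-root scale with a step small enough that the quantization error stays below the Poisson shot noise; this is a generic illustration of that principle, not the exact Bernstein et al. (2010) codec.

        import numpy as np

        def sqrt_compress(counts, step=0.5):
            """Quantize pixels on a square-root scale (shot noise grows like sqrt(counts))."""
            return np.round(np.sqrt(np.maximum(counts, 0.0)) / step).astype(np.uint16)

        def sqrt_decompress(codes, step=0.5):
            return (codes * step) ** 2

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            image = rng.poisson(lam=500.0, size=(4, 4)).astype(float)   # photon counts
            recon = sqrt_decompress(sqrt_compress(image))
            # Maximum quantization error vs. the smallest per-pixel shot noise:
            print(np.abs(recon - image).max(), np.sqrt(image).min())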

  9. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
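
    A minimal sketch of least-square-based prediction for one region, assuming a four-pixel causal neighbourhood (left, top-left, top, top-right); the geometry-adaptive partitioning and the entropy coding of the residuals are omitted.

        import numpy as np

        def ls_predictor_weights(region):
            """Fit least-squares prediction weights from four causal neighbours over one region."""
            h, w = region.shape
            rows, targets = [], []
            for i in range(1, h):
                for j in range(1, w - 1):
                    rows.append([region[i, j - 1], region[i - 1, j - 1],
                                 region[i - 1, j], region[i - 1, j + 1]])
                    targets.append(region[i, j])
            weights, *_ = np.linalg.lstsq(np.array(rows, float), np.array(targets, float), rcond=None)
            return weights

        if __name__ == "__main__":
            block = np.random.default_rng(0).integers(0, 256, size=(16, 16)).astype(float)
            w = ls_predictor_weights(block)   # residuals formed with these weights would be entropy coded
            print(w)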

  10. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.

  11. Frequency of Magnetic Resonance Imaging patterns of tuberculous spondylitis in a public sector hospital

    PubMed Central

    Tabassum, Sumera; Haider, Shahbaz

    2016-01-01

    Objective: To determine the frequencies of different MRI patterns of tuberculous spondylitis in a public sector hospital in Karachi. Methods: This descriptive multidisciplinary case series study was done from October 25, 2011 to May 28, 2012 in the Radiology Department and the Department of Medicine of the Jinnah Postgraduate Medical Center, Karachi. MRI scans (dorsal/lumbosacral spine) of patients presenting with backache in the medical OPD were performed in the Radiology Department. Axial and sagittal images of T1-weighted, T2-weighted and STIR sequences of the affected region were taken. A total of 140 patients who were diagnosed as having tuberculous spondylitis were further evaluated and analyzed for different patterns of involvement of the spine, and the results were compared with similar studies. Results: Among the different MRI patterns of tuberculous spondylitis, contiguous vertebral involvement was seen in 100% of cases, discal involvement in 98.6%, paravertebral abscess in 92.1%, epidural abscess in 91.4%, spinal cord/thecal sac compression in 89.3%, vertebral collapse in 72.9%, gibbus deformity in 42.9%, and psoas abscess in 36.4%. Conclusion: Contiguous vertebral involvement was the commonest MRI pattern, followed by disk involvement, paravertebral and epidural abscesses, thecal sac compression and vertebral collapse. PMID:27022369

  12. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  13. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for compressing quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the case of lossless compression, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because some data are lost in lossy compression, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters can still be within the range of reported values.

  14. A hierarchical storage management (HSM) scheme for cost-effective on-line archival using lossy compression.

    PubMed

    Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L

    2001-03-01

    A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.

  15. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames will be obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
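
    The forward model of temporal compressive imaging with T=8 is easy to sketch: each high-speed frame is modulated by its own binary mask and the camera integrates the sum into a single snapshot. TwIST- or GMM-based reconstruction of the eight frames is not reproduced here.

        import numpy as np

        T, H, W = 8, 64, 64                                 # temporal compression ratio and frame size
        rng = np.random.default_rng(0)

        video = rng.random((T, H, W))                       # eight high-speed frames
        masks = rng.integers(0, 2, size=(T, H, W)).astype(float)   # per-frame binary coded masks

        # One compressive measurement: the detector integrates the mask-modulated frames.
        measurement = (masks * video).sum(axis=0)

        # A solver such as TwIST or a GMM-based method would then recover the 8 frames
        # from this single snapshot, typically patch by patch (e.g. 8x8) to save memory.
        print(measurement.shape)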

  16. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
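
    The Gray-code mapping itself is a one-liner; the sketch below converts quantized values to Gray code (and back), the property the paper exploits to raise the bit-plane correlation between the source and its side information. The surrounding correlation estimation and rate control are not shown.

        import numpy as np

        def binary_to_gray(x):
            """Adjacent values differ in exactly one bit after this mapping."""
            return x ^ (x >> 1)

        def gray_to_binary(g, bits=8):
            x = g.copy()
            shift = 1
            while shift < bits:
                x ^= x >> shift
                shift <<= 1
            return x

        if __name__ == "__main__":
            vals = np.arange(8, dtype=np.uint8)
            gray = binary_to_gray(vals)
            print(gray)                                   # [0 1 3 2 6 7 5 4]
            assert np.array_equal(gray_to_binary(gray), vals)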

  17. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Technical Reports Server (NTRS)

    Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering that allows it to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. As a result, the DES complement of a given spacecraft generates 6.5 Mb/s of electron data while the DIS generates 1.1 Mb/s of ion data, yielding an FPI total data rate of 6.6 MB/s. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb/s. Consequently, the FPI IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. The compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include a review of the compression algorithm; data quality; data formatting/organization; and implications for data/matrix pruning. To conclude, a presentation of the baselined FPI data compression approach is provided.

  18. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented on the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  19. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented on the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  20. Diffusion tensor imaging with quantitative evaluation and fiber tractography of lumbar nerve roots in sciatica.

    PubMed

    Shi, Yin; Zong, Min; Xu, Xiaoquan; Zou, Yuefen; Feng, Yang; Liu, Wei; Wang, Chuanbing; Wang, Dehang

    2015-04-01

    To quantitatively evaluate nerve roots by measuring fractional anisotropy (FA) values in healthy volunteers and sciatica patients, visualize nerve roots by tractography, and compare the diagnostic efficacy between conventional magnetic resonance imaging (MRI) and DTI. Seventy-five sciatica patients and thirty-six healthy volunteers underwent MR imaging using DTI. FA values for the L5-S1 lumbar nerve roots were calculated at three levels from the DTI images. Tractography was performed on the L3-S1 nerve roots. ROC analysis was performed for the FA values. The lumbar nerve roots were visualized and FA values were calculated in all subjects. FA values decreased in compressed nerve roots and declined from proximal to distal along the compressed nerve tracts. Mean FA values were more sensitive and specific than MR imaging for differentiating compressed nerve roots, especially in the far lateral zone at distal nerves. DTI can quantitatively evaluate compressed nerve roots, and diffusion tensor tractography (DTT) enables visualization of abnormal nerve tracts, providing vivid anatomic information and localization of probable nerve compression. DTI has great potential utility for evaluating lumbar nerve compression in sciatica. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.

  2. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.

  3. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD not only allows a sufficient number of measurements to be collected for spectrally rich or spatially detailed scenes, but also allows the spatial structure of the coded apertures to be designed to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in reconstruction quality obtained by the use of side information.

  4. Assessment of low-contrast detectability for compressed digital chest images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.

    1994-04-01

    The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer. From these data, simulated lesions were introduced into 100 normal CR chest films, and forced-choice observer performance studies were conducted. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, in which the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for SF and uncompressed CR systems were statistically equivalent. The slope of the CD curve for each was -1.0 as predicted by the Rose model. There was a significant degradation in detection found for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs. Detectability was reduced for all compressed CR radiographs.

  5. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    NASA Astrophysics Data System (ADS)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    The high resolution of modern satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps that follow acquisition, file sizes increase even further, making the data harder to store and more time-consuming to transmit from one location to another; file compression of the raw data and of the various levels of processed data is therefore a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms considered were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms over the sample datasets in terms of how much the image data can be compressed while ensuring lossless compression.
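
    A minimal sketch of this kind of lossless-codec comparison, using Python's standard-library codecs (zlib for Deflate, lzma for LZMA, bz2 for BWT-based compression) as stand-ins for the open source tools in the study, and a synthetic 16-bit band in place of an actual SPOT 6/7 GeoTIFF:

      import bz2
      import lzma
      import zlib
      import numpy as np

      # Stand-in for one band of a processed scene: smooth terrain plus sensor noise.
      rng = np.random.default_rng(0)
      band = (1000 + 50 * np.sin(np.linspace(0, 20, 1024 * 1024)).reshape(1024, 1024)
              + rng.normal(0, 2, (1024, 1024))).astype(np.uint16)
      raw = band.tobytes()

      codecs = {
          "Deflate (zlib)": lambda b: zlib.compress(b, 9),
          "LZMA (lzma)": lambda b: lzma.compress(b, preset=6),
          "BWT-based (bz2)": lambda b: bz2.compress(b, 9),
      }
      for name, compress in codecs.items():
          out = compress(raw)
          print(name, "compression ratio:", round(len(raw) / len(out), 2))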

  6. Development of self-compressing BLSOM for comprehensive analysis of big sequence data.

    PubMed

    Kikuchi, Akihito; Ikemura, Toshimichi; Abe, Takashi

    2015-01-01

    With the remarkable increase in genomic sequence data from various organisms, novel tools are needed for comprehensive analyses of available big sequence data. We previously developed a Batch-Learning Self-Organizing Map (BLSOM), which can cluster genomic fragment sequences according to phylotype based solely on oligonucleotide composition, and have applied it to genome and metagenomic studies. BLSOM is suitable for high-performance parallel computing and can analyze big data simultaneously, but a large-scale BLSOM requires substantial computational resources. We have developed the Self-Compressing BLSOM (SC-BLSOM) to reduce computation time, allowing comprehensive analysis of big sequence data without the use of high-performance supercomputers. The strategy of SC-BLSOM is to hierarchically construct BLSOMs according to data class, such as phylotype. The first-layer BLSOMs were constructed with each of the divided input data pieces representing a data subclass, such as a phylotype division, resulting in compression of the number of data pieces. The second-layer BLSOM was then constructed from the weight vectors obtained in the first-layer BLSOMs. We compared SC-BLSOM with the conventional BLSOM by analyzing bacterial genome sequences. SC-BLSOM could be constructed faster than BLSOM and clustered the sequences according to phylotype with high accuracy, showing the method's suitability for efficient knowledge discovery from big sequence data.

  7. Cost-effective handling of digital medical images in the telemedicine environment.

    PubMed

    Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel

    2007-09-01

    This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth, and transmission time, the acceptable degree of compression with no diagnostically important loss of data was studied through randomized double-blind tests of subjective image quality when the compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with an imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.
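
    A small numpy sketch of how an imaging-noise-to-compression-noise ratio (ICR) of the kind used above might be computed; the noise estimator, the toy image, and the stand-in codec are assumptions for illustration, not the authors' procedure.

      import numpy as np

      def compression_noise_rms(original, reconstructed):
          """RMS difference introduced by lossy compression."""
          diff = original.astype(float) - reconstructed.astype(float)
          return np.sqrt(np.mean(diff ** 2))

      def icr(original, reconstructed, imaging_noise_rms):
          """Imaging-noise-to-compression-noise ratio."""
          return imaging_noise_rms / compression_noise_rms(original, reconstructed)

      # Toy example: an image with known sensor noise and a crudely requantized
      # "compressed" version standing in for a real lossy codec.
      rng = np.random.default_rng(1)
      sensor_noise_rms = 5.0
      img = rng.normal(128, 20, (256, 256)) + rng.normal(0, sensor_noise_rms, (256, 256))
      recon = np.round(img / 2) * 2
      print("ICR =", round(icr(img, recon, sensor_noise_rms), 2))  # the study keeps this above ~4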

  8. Toward an MRI-based method to measure non-uniform cartilage deformation: an MRI-cyclic loading apparatus system and steady-state cyclic displacement of articular cartilage under compressive loading.

    PubMed

    Neu, C P; Hull, M L

    2003-04-01

    Recent magnetic resonance imaging (MRI) techniques have shown potential for measuring non-uniform deformations throughout the volume (i.e. three-dimensional (3D) deformations) in small orthopedic tissues such as articular cartilage. However, to analyze cartilage deformation using MRI techniques, a system is required which can construct images from multiple acquisitions of MRI signals from the cartilage in both the undeformed and deformed states. The objectives of the work reported in this article were to (1) design an apparatus that could apply highly repeatable cyclic compressive loads of 400 N and operate in the bore of an MRI scanner, (2) demonstrate that the apparatus and MRI scanner can be successfully integrated to observe 3D deformations in a phantom material, and (3) use the apparatus to determine the load cycle necessary to achieve a steady-state deformation response in normal bovine articular cartilage samples using a flat-surfaced and nonporous indentor in unconfined compression. Composed of electronic and pneumatic components, the apparatus regulated pressure to a double-acting pneumatic cylinder so that (1) load-controlled compression cycles were applied to cartilage samples immersed in a saline bath, (2) loading and recovery periods within a cycle varied in time duration, and (3) load magnitude varied so that the stress applied to cartilage samples was within typical physiological ranges. In addition, the apparatus allowed gating for MR image acquisition, and operation within the bore of an MRI scanner without creating image artifacts. The apparatus demonstrated high repeatability in load application with a standard deviation of 1.8% of the mean 400 N load applied. When the apparatus was integrated with an MRI scanner programmed with appropriate pulse sequences, images of a phantom material in both the undeformed and deformed states were constructed by assembling data acquired through multiple signal acquisitions. Additionally, the number of cycles to reach a steady-state response in normal bovine articular cartilage was 49 for a total cycle duration of 5 seconds, but decreased to 33 and 27 for increasing total cycle durations of 10 and 15 seconds, respectively. Once the steady-state response was achieved, 95% of all displacements were within +/- 7.42 microns of the mean displacement, indicating that the displacement response to the cyclic loads was highly repeatable. With this performance, the MRI-loading apparatus system meets the requirements to create images of articular cartilage from which 3D deformation can be determined.

  9. Joint reconstruction of multiview compressed images.

    PubMed

    Thirumalai, Vijayaraghavan; Frossard, Pascal

    2013-05-01

    Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.

  10. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values but does not shuffle the pixel locations. After down-sampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section, where it is recovered using local statistics of the image. The decoder receives only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.

  11. Towards an acoustic model-based poroelastic imaging method: II. experimental investigation.

    PubMed

    Berry, Gearóid P; Bamber, Jeffrey C; Miller, Naomi R; Barbone, Paul E; Bush, Nigel L; Armstrong, Cecil G

    2006-12-01

    Soft biological tissue contains mobile fluid. The volume fraction of this fluid and the ease with which it may be displaced through the tissue could be of diagnostic significance and may also have consequences for the validity with which strain images can be interpreted according to the traditional idealizations of elastography. In a previous paper, under the assumption of frictionless boundary conditions, the spatio-temporal behavior of the strain field inside a compressed cylindrical poroelastic sample was predicted (Berry et al. 2006). In the current paper, experimental evidence is provided to confirm these predictions. Finite element modeling was first used to extend the previous predictions to allow for the existence of contact friction between the sample and the compressor plates. Elastographic techniques were then applied to image the time-evolution of the strain inside cylindrical samples of tofu (a suitable poroelastic material) during sustained unconfined compression. The observed experimental strain behavior was found to be consistent with the theoretical predictions. In particular, every sample studied confirmed that reduced values of radial strain advance with time from the curved cylindrical surface inwards towards the axis of symmetry. Furthermore, by fitting the predictions of an analytical model to a time sequence of strain images, parametric images of two quantities, each related to one or more of three poroelastic material constants, were produced. The two parametric images depicted the Poisson's ratio (ν_s) of the solid matrix and the product of the aggregate modulus (H_A) of the solid matrix with the permeability (k) of the solid matrix to the pore fluid. The means of the pixel values in these images, ν_s = 0.088 (standard deviation 0.023) and H_A k = 1.449 (standard deviation 0.269) × 10^-7 m^2 s^-1, were in agreement with values derived from previously published data for tofu (Righetti et al. 2005). The results provide the first experimental detection of the fluid-flow-induced characteristic diffusion-like behavior of the strain in a compressed poroelastic material and allow parameters related to the above material constants to be determined. We conclude that it may eventually be possible to use strain data to detect and measure characteristics of diffusely distributed mobile fluid in tissue spaces that are too small to be imaged directly.

  12. GDC 2: Compression of large collections of genomes

    PubMed Central

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-01-01

    The falling price of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows complete genomic collections to be stored at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about. PMID:26108279

  13. GDC 2: Compression of large collections of genomes.

    PubMed

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-06-25

    The falling price of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows complete genomic collections to be stored at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about.

  14. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
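
    A small sketch of the predictive-coding idea underlying several of the surveyed techniques: predict each pixel from a neighbour and estimate the achievable bit rate from the zeroth-order entropy of the prediction residual. The previous-pixel predictor and the synthetic image are illustrative simplifications, not any specific algorithm from the survey.

      import numpy as np

      def entropy_bits(values):
          """Zeroth-order entropy in bits per sample of an integer array."""
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # Synthetic 8-bit image with strong horizontal correlation.
      rng = np.random.default_rng(2)
      img = np.clip(np.cumsum(rng.integers(-3, 4, (256, 256)), axis=1) + 128, 0, 255)

      # Previous-pixel (DPCM-style) prediction along each row; column 0 is kept as is.
      residual = img.copy()
      residual[:, 1:] = img[:, 1:] - img[:, :-1]

      print("raw entropy     :", round(entropy_bits(img), 2), "bits/pel")
      print("residual entropy:", round(entropy_bits(residual), 2), "bits/pel")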

  15. MR imaging of peripheral nervous system involvement: Parsonage-Turner syndrome.

    PubMed

    Zara, Gabriella; Gasparotti, Roberto; Manara, Renzo

    2012-04-15

    A 55-year-old woman complained of burning right scapular pain radiating down her right arm, with numbness in the first three fingers of the hand. Neurologic examination showed a slight deficit of the right brachial triceps muscle. Neurophysiological assessment showed a mild involvement of the seventh right spinal root (C7). Conventional MR imaging of the cervical spine showed mild disc protrusion at level C5-C6 without spinal root compression. High resolution MR neurography with multiplanar reconstruction along the course of the right brachial plexus showed a mild increase in signal intensity and thickening of the C7 root, middle trunk and posterior cord, consistent with Parsonage-Turner syndrome. STIR images showed increased signal intensity in the right infraspinatus muscle innervated by the suprascapular nerve. In our case, the sensitivity and specificity of the new MR sequences were higher than those of the clinical and neurophysiological evaluations.

  16. European Union RACE program contributions to digital audiovisual communications and services

    NASA Astrophysics Data System (ADS)

    de Albuquerque, Augusto; van Noorden, Leon; Badiqué, Eric

    1995-02-01

    The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object-oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near-interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.

  17. National Information Systems Security Conference (19th) held in Baltimore, Maryland on October 22-25, 1996. Volume 1

    DTIC Science & Technology

    1996-10-25

    ...been demonstrated that steganography is ineffective when images are stored using this compression algorithm [2]. Difficulty in designing a general... Despite the relative ease of employing steganography to covertly transport data in an uncompressed 24-bit image, lossy compression algorithms based on... image, the security threat that steganography poses cannot be completely eliminated by application of a transform-based lossy compression algorithm.

  18. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
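
    A sketch of the second of the three approaches described above: decorrelate the spectral bands (here with a Karhunen-Loeve transform along the spectral axis as an illustrative decorrelator) before compressing the resulting component images independently; the synthetic cube and all names are assumptions.

      import numpy as np

      # Synthetic hyperspectral cube flattened to (bands x pixels), bands highly correlated.
      rng = np.random.default_rng(3)
      bands, npix = 64, 128 * 128
      base = rng.normal(0, 1, npix)
      cube = np.stack([base * (1 + 0.01 * b) + 0.05 * rng.normal(0, 1, npix)
                       for b in range(bands)])

      # Karhunen-Loeve transform along the spectral axis: eigenvectors of the
      # band-to-band covariance decorrelate the bands into component images.
      centered = cube - cube.mean(axis=1, keepdims=True)
      w, v = np.linalg.eigh(np.cov(centered))
      components = v.T @ centered

      var = components.var(axis=1)[::-1]      # sorted from largest component down
      print("variance captured by first 3 components:", round(var[:3].sum() / var.sum(), 4))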

  19. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
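
    A minimal sketch of how the residual compression artifacts described above can be isolated as 3-D spatio-temporal noise: blur each uncompressed frame with an equivalent Gaussian and subtract the corresponding compressed frame. The blur width used here is a fixed stand-in for the value the paper derives from the SSIM fit, and the toy video and quantizer are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def compression_noise_volume(uncompressed, compressed, equiv_sigma):
          """3-D residual noise: blurred uncompressed frames minus compressed frames.

          equiv_sigma stands in for the equivalent Gaussian blur that the model
          derives from the SSIM-measured quality loss; here it is just a fixed number.
          """
          noise = np.empty(compressed.shape, dtype=float)
          for t in range(compressed.shape[0]):
              blurred = gaussian_filter(uncompressed[t].astype(float), sigma=equiv_sigma)
              noise[t] = blurred - compressed[t].astype(float)
          return noise

      # Toy video: 10 frames of 64x64, "compressed" by coarse quantization.
      rng = np.random.default_rng(4)
      video = rng.normal(128, 30, (10, 64, 64))
      coded = np.round(video / 8) * 8
      noise = compression_noise_volume(video, coded, equiv_sigma=1.0)
      print("spatio-temporal noise std:", round(noise.std(), 3))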

  20. Clinical evaluation of JPEG2000 compression for digital mammography

    NASA Astrophysics Data System (ADS)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

    Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communications system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt a JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t-test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis. ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t-test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t-test suggested that the possible compression ratios using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss or with preserving significant medical information at a confidence level of 99%, although both PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
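
    A quick sketch of the PSNR figure of merit referred to above, for 12-bit image data held in NumPy arrays; the toy image and the crude requantization stand in for a real mammogram and a JPEG2000 codec.

      import numpy as np

      def psnr(original, reconstructed, peak=4095):
          """Peak signal-to-noise ratio in dB (peak=4095 for 12-bit data)."""
          mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

      # Synthetic 12-bit image and a coarsely requantized "reconstruction".
      rng = np.random.default_rng(5)
      img = rng.integers(0, 4096, (512, 512))
      recon = (img // 16) * 16
      print("PSNR:", round(psnr(img, recon), 2), "dB")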

  1. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture, which is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compressive rate than the independent reconstruction algorithm.
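
    A minimal sketch of the per-camera measurement step in such a distributed scheme: each node projects its vectorized view onto a shared random measurement matrix and transmits only the measurements. The matrix size, the correlation model, and the variable names are illustrative assumptions, and the joint reconstruction itself is not reproduced here.

      import numpy as np

      def measure(image, phi):
          """Compressive measurements y = Phi x for one camera node."""
          return phi @ image.ravel()

      rng = np.random.default_rng(6)
      n_side = 32                        # 32x32 sub-image per node
      n = n_side * n_side
      m = n // 4                         # 4:1 compressive rate
      phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # measurement matrix shared by the nodes

      # Two overlapping views of the same scene differ only slightly.
      scene = rng.normal(0, 1, (n_side, n_side))
      view_a = scene
      view_b = scene + 0.05 * rng.normal(0, 1, scene.shape)

      y_a, y_b = measure(view_a, phi), measure(view_b, phi)
      # The cluster head can exploit the strong correlation between the two
      # measurement vectors when it jointly reconstructs both views.
      print("measurement correlation:", round(np.corrcoef(y_a, y_b)[0, 1], 3))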

  2. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. The use of compressive sampling theory in remote sensing imaging systems allows images to be compressed while they are sampled, which improves efficiency. In this paper, a method based on two-dimensional principal component analysis (2DPCA) is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; it retains the useful information in the image while suppressing noise. The factors influencing remote sensing image quality are then analyzed, and parameters for quantitative evaluation are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing useful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results agree with human visual perception and that the proposed method has good application value in the field of remote sensing image processing.

  3. Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Su, Yi

    2010-05-01

    This paper presents a system model and method for 2-D imaging via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. Furthermore, the imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatial sequential procedure that samples the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method utilizes a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter banks can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided for testing the proposed method.

  4. X-ray Computed Tomography Imaging of the Microstructure of Sand Particles Subjected to High Pressure One-Dimensional Compression

    PubMed Central

    al Mahbub, Asheque; Haque, Asadul

    2016-01-01

    This paper presents the results of X-ray CT imaging of the microstructure of sand particles subjected to high pressure one-dimensional compression leading to particle crushing. A high resolution X-ray CT machine capable of in situ imaging was employed to capture images of the whole volume of a sand sample subjected to compressive stresses up to 79.3 MPa. Images of the whole sample obtained at different load stages were analysed using a commercial image processing software (Avizo) to reveal various microstructural properties, such as pore and particle volume distributions, spatial distribution of void ratios, relative breakage, and anisotropy of particles. PMID:28774011

  5. X-ray Computed Tomography Imaging of the Microstructure of Sand Particles Subjected to High Pressure One-Dimensional Compression.

    PubMed

    Al Mahbub, Asheque; Haque, Asadul

    2016-11-03

    This paper presents the results of X-ray CT imaging of the microstructure of sand particles subjected to high pressure one-dimensional compression leading to particle crushing. A high resolution X-ray CT machine capable of in situ imaging was employed to capture images of the whole volume of a sand sample subjected to compressive stresses up to 79.3 MPa. Images of the whole sample obtained at different load stages were analysed using a commercial image processing software (Avizo) to reveal various microstructural properties, such as pore and particle volume distributions, spatial distribution of void ratios, relative breakage, and anisotropy of particles.

  6. Recent Mastcam and MAHLI Visible/Near-Infrared Spectrophotometric Observations: Pahrump Hills to Marias Pass

    NASA Astrophysics Data System (ADS)

    Johnson, J. R.; Bell, J. F., III; Hayes, A.; Deen, R. G.; Godber, A.; Arvidson, R. E.; Lemmon, M. T.

    2015-12-01

    The Mastcam imaging system on the Curiosity rover continued acquisition of multispectral images of the same terrain at multiple times of day at three new rover locations between sols 872 and 1003. These data sets will be used to investigate the light scattering properties of rocks and soils along the Curiosity traverse using radiative transfer models. Images were acquired by the Mastcam-34 (M-34) camera on Sols 872-892 at 8 times of day (Mojave drill location), Sols 914-917 (Telegraph Peak drill location) at 9 times of day, and Sols 1000-1003 at 8 times of day (Stimson-Murray Formation contact near Marias Pass). Data sets were acquired using filters centered at 445, 527, 751, and 1012 nm, and the images were JPEG-compressed. Data sets typically were pointed ~east and ~west to provide phase angle coverage from near 0° to 125-140° for a variety of rocks and soils. Also acquired on Sols 917-918 at the Telegraph Peak site was a multiple time-of-day Mastcam sequence pointed southeast using only the broadband Bayer filters that provided losslessly compressed images with phase angles ~55-129°. Navcam stereo images were also acquired with each data set to provide broadband photometry and terrain measurements for computing surface normals and local incidence and emission angles used in photometric modeling. On Sol 1028, the MAHLI camera was used as a goniometer to acquire images at 20 arm positions, all centered at the same location within the work volume from a near-constant distance of 85 cm from the surface. Although this experiment was run at only one time of day (~15:30 LTST), it provided phase angle coverage from ~30° to ~111°. The terrain included the contact between the uppermost portion of the Murray Formation and the Stimson sandstones, and was the first acquisition of both Mastcam and MAHLI photometry images at the same rover location. The MAHLI images also allowed construction of a 3D shape model of the Stimson-Murray contact region. The attached figure shows a phase color composite of the western Stimson area, created using phase angles of 8°, 78°, and 130° at 751 nm. The red areas correspond to highly backscattering materials that appear to concentrate along linear fractures throughout this area. The blue areas correspond to more forward scattering materials dispersed through the stratigraphic sequence.

  7. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  8. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.

  9. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.

  10. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out sampling and compression of image signals simultaneously. In imaging procedures that use compressed sensing theory, both the storage space and the required detector resolution can be greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and preserve edge information better. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, and the stability of the algorithm is verified. The paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is found by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithm, the proposed reconstruction algorithm has significant advantages and can quickly and accurately recover the target image at low measurement rates.
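
    A toy sketch of TV-regularized recovery under simplifying assumptions (a Gaussian measurement matrix, a smoothed anisotropic TV penalty, and plain gradient descent rather than the augmented Lagrangian / alternating direction solver discussed above); all names, sizes, and step settings are illustrative.

      import numpy as np

      def smoothed_tv_grad(x, eps=1e-3):
          """Gradient of a smoothed anisotropic total-variation penalty."""
          dx = np.diff(x, axis=1, append=x[:, -1:])
          dy = np.diff(x, axis=0, append=x[-1:, :])
          gx = dx / np.sqrt(dx ** 2 + eps)
          gy = dy / np.sqrt(dy ** 2 + eps)
          grad = np.zeros_like(x)
          grad -= gx
          grad[:, 1:] += gx[:, :-1]
          grad -= gy
          grad[1:, :] += gy[:-1, :]
          return grad

      def tv_reconstruct(y, phi, shape, lam=0.02, step=0.1, iters=600):
          """Minimize 0.5*||Phi x - y||^2 + lam*TV(x) by plain gradient descent."""
          x = np.zeros(shape)
          for _ in range(iters):
              grad_fit = (phi.T @ (phi @ x.ravel() - y)).reshape(shape)
              x -= step * (grad_fit + lam * smoothed_tv_grad(x))
          return x

      # Piecewise-constant test image, compressively measured at a 3:1 rate.
      rng = np.random.default_rng(7)
      img = np.zeros((32, 32))
      img[8:24, 8:24] = 1.0
      n = img.size
      m = n // 3
      phi = rng.normal(0, 1 / np.sqrt(m), (m, n))
      y = phi @ img.ravel()
      rec = tv_reconstruct(y, phi, img.shape)
      print("relative reconstruction error:",
            round(np.linalg.norm(rec - img) / np.linalg.norm(img), 3))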

  11. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.

    2008-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s^-1 of electron data while the DIS generates 1.1 Mb s^-1 of ion data, yielding an FPI total data rate of 7.6 Mb s^-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s^-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: (i) Review of the compression algorithm; (ii) Data quality; (iii) Data formatting/organization; (iv) Compression optimization; and (v) Implications for data/matrix pruning. We conclude with a presentation of the baselined FPI data compression approach.

  12. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
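
    The sketch below illustrates only the first two steps described above (per-block DCT followed by reduction of the AC coefficients); the look-up table, concurrent binary search, and arithmetic coding stages are not reproduced, and zeroing a low-frequency corner mask is a crude stand-in for the paper's high-frequency minimization.

      import numpy as np
      from scipy.fft import dctn, idctn

      def block_dct_reduce(img, block=8, keep=2):
          """Per-block DCT keeping the DC term and a small low-frequency corner of AC terms.

          Zeroing every AC coefficient outside a (keep x keep) corner is a crude
          stand-in for the paper's high-frequency minimization step.
          """
          h, w = (d - d % block for d in img.shape)
          out = np.zeros((h, w))
          for i in range(0, h, block):
              for j in range(0, w, block):
                  c = dctn(img[i:i + block, j:j + block], norm="ortho")
                  mask = np.zeros_like(c)
                  mask[:keep, :keep] = 1
                  out[i:i + block, j:j + block] = idctn(c * mask, norm="ortho")
          return out

      rng = np.random.default_rng(8)
      xg, yg = np.meshgrid(np.linspace(0, 6, 256), np.linspace(0, 6, 256))
      img = 128 + 60 * np.sin(xg) * np.cos(yg) + rng.normal(0, 2, xg.shape)
      approx = block_dct_reduce(img)
      rmse = np.sqrt(np.mean((img[:approx.shape[0], :approx.shape[1]] - approx) ** 2))
      print("RMSE after discarding high-frequency AC terms:", round(rmse, 2))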

  13. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

  14. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    PubMed

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, craving more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and more cheaply, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud, by choice. We evaluated the performance of GTZ on several diverse FASTQ benchmarks. Results show that in most cases it outperforms many other tools in terms of compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at https://github.com/Genetalks/gtz .
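
    GTZ's context models and arithmetic coder are not reproduced here; the sketch below only illustrates the idea of treating the identifier, sequence, and quality lines of a FASTQ file as separate streams, with the standard-library LZMA codec as an assumed stand-in back end.

      import lzma

      def split_fastq(text):
          """Split FASTQ records into separate identifier, sequence and quality streams."""
          lines = text.strip().split("\n")
          ids = "\n".join(lines[0::4])      # @-identifier lines
          seqs = "\n".join(lines[1::4])     # base sequences
          quals = "\n".join(lines[3::4])    # quality strings (the '+' separators are dropped)
          return ids, seqs, quals

      fastq = (
          "@read1\nACGTACGTACGTACGT\n+\nIIIIIIIIHHHHGGGG\n"
          "@read2\nACGTACGTACGAACGT\n+\nIIIIIIIIHHHHGGGF\n"
      )

      whole = lzma.compress(fastq.encode())
      parts = sum(len(lzma.compress(s.encode())) for s in split_fastq(fastq))
      # On a toy input the per-stream container overhead dominates; the benefit of
      # compressing homogeneous streams separately shows up on realistic file sizes.
      print("one stream   :", len(whole), "bytes")
      print("three streams:", parts, "bytes")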

  15. Hyper-spectral image compression algorithm based on mixing transform of wave band grouping to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the differing correlation between images in different spectral bands, and it still works well when the number of bands is not a power of 2. Using a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding, the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the JPL laboratory as the test data set for lossless compression, when the number of bands is not a power of 2, the lossless compression result of this algorithm is much better than the results obtained with JPEG-LS, WinZip, ARJ, DPCM, the approach of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds those of the above algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the number of bands is a power of 2, for 128 frames of the image Canal, groupings of 8, 16, and 32 bands per group were compared; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and ease of hardware implementation.
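
    For illustration, a one-level, 1-D lifting implementation of the CDF(2,2) (LeGall 5/3) wavelet is sketched below; it uses simple edge replication rather than the paper's non-boundary-extension variant, and the band grouping, subtraction mixing transform, and SPIHT+CABAC coding stages are not reproduced.

      import numpy as np

      def cdf22_forward_1d(x):
          """One lifting level of the CDF(2,2) (LeGall 5/3) wavelet on a 1-D signal.

          Assumes an even-length input and replicates edge samples at the borders
          (the paper instead uses a non-boundary-extension variant).
          """
          s = x[0::2].astype(float).copy()           # even samples -> approximation
          d = x[1::2].astype(float).copy()           # odd samples  -> detail
          s_right = np.append(s[1:], s[-1])          # replicate the last even sample
          d -= 0.5 * (s + s_right)                   # predict step
          d_left = np.concatenate(([d[0]], d[:-1]))  # replicate the first detail sample
          s += 0.25 * (d_left + d)                   # update step
          return s, d

      # A smooth ramp yields a near-zero detail band (apart from the replicated right edge).
      approx, detail = cdf22_forward_1d(np.arange(16, dtype=float))
      print("approximation:", approx)
      print("detail       :", np.round(detail, 3))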

  16. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one and two dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bits-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
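
    As a simple illustration of context-dependent 2-D prediction for DPCM, the sketch below switches between the left and upper neighbours (using the MED rule familiar from LOCO-I/JPEG-LS as an assumed stand-in for the paper's predictors) and reports how much the residual spread shrinks relative to the raw image.

      import numpy as np

      def context_dpcm_residuals(img):
          """DPCM residuals with a context-dependent 2-D predictor (MED rule).

          For each pixel the predictor chooses min(a, b), max(a, b) or the planar
          estimate a + b - c from the left (a), upper (b) and upper-left (c)
          neighbours, i.e. the prediction switches on the local context.
          """
          img = img.astype(np.int32)
          res = img.copy()
          for i in range(1, img.shape[0]):
              for j in range(1, img.shape[1]):
                  a, b, c = img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]
                  if c >= max(a, b):
                      pred = min(a, b)
                  elif c <= min(a, b):
                      pred = max(a, b)
                  else:
                      pred = a + b - c
                  res[i, j] = img[i, j] - pred
          return res

      rng = np.random.default_rng(9)
      img = np.clip(np.cumsum(rng.integers(-2, 3, (128, 128)), axis=1) + 128, 0, 255)
      res = context_dpcm_residuals(img)
      print("residual std vs. image std:", round(res[1:, 1:].std(), 2), "vs.", round(img.std(), 2))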

  17. Information content exploitation of imaging spectrometer's images for lossless compression

    NASA Astrophysics Data System (ADS)

    Wang, Jianyu; Zhu, Zhenyu; Lin, Kan

    1996-11-01

    An imaging spectrometer such as MAIS produces a tremendous volume of image data, with a raw data rate of up to 5.12 Mbps, which urgently needs a real-time, efficient, and reversible compression implementation. The choice between a lossy scheme with a high compression ratio and a lossless scheme with high fidelity must be based on an analysis of the information content of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression for the MAIS imaging spectrometer, based on an entropy and autocorrelation study of its hyperspectral images. First, the statistical information in an actual MAIS image, captured at Marble Bar, Australia, is measured through its entropy, conditional entropy, mutual information, and autocorrelation coefficients in both the spatial dimensions and the spectral dimension. These analyses show that there is high redundancy in the spatial dimensions, but the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is attributed to the instrument's discrepancies in detector response and channel amplification across spectral bands. To restore the natural spectral correlation, the signal is preprocessed in advance. Two methods can accomplish this: onboard radiation calibration and normalization, with the former giving the better result. After preprocessing, the spectral correlation increases to the point that it contributes substantial redundancy in addition to the spatial correlation. Finally, an onboard hardware implementation of the lossless compression is presented with an ideal result.
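
    Illustrative numpy routines for the statistics discussed above (zeroth-order entropy, mutual information between adjacent bands, and a lag-1 spatial autocorrelation), applied to synthetic bands rather than MAIS data; the binning choices and names are assumptions.

      import numpy as np

      def entropy_bits(a, bins=256):
          """Zeroth-order entropy in bits of a (binned) image."""
          counts, _ = np.histogram(a, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def mutual_information(a, b, bins=64):
          """Mutual information between two bands from a joint histogram."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pj = joint / joint.sum()
          pa, pb = pj.sum(axis=1), pj.sum(axis=0)
          nz = pj > 0
          return float((pj[nz] * np.log2(pj[nz] / np.outer(pa, pb)[nz])).sum())

      def lag1_autocorrelation(a, axis=1):
          """Normalized lag-1 autocorrelation along one spatial axis."""
          x = a.astype(float) - a.mean()
          lead = np.take(x, range(1, x.shape[axis]), axis=axis)
          lag = np.take(x, range(0, x.shape[axis] - 1), axis=axis)
          return float((lead * lag).sum() / (x ** 2).sum())

      rng = np.random.default_rng(10)
      band1 = np.cumsum(rng.normal(0, 1, (128, 128)), axis=1)    # spatially correlated band
      band2 = 0.9 * band1 + 0.1 * rng.normal(0, 1, band1.shape)  # correlated neighbouring band
      print("H(band1)           :", round(entropy_bits(band1), 2), "bits")
      print("I(band1; band2)    :", round(mutual_information(band1, band2), 2), "bits")
      print("lag-1 spatial corr :", round(lag1_autocorrelation(band1), 3))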

  18. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Astrophysics Data System (ADS)

    Postman, Marc

    1996-03-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  19. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Technical Reports Server (NTRS)

    Postman, Marc

    1996-01-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  20. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
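
    The published functions are MATLAB code and are not reproduced here; the following sketch illustrates the same idea in Python with hypothetical helper names: a 2x2 Walsh-Hadamard block transform whose coefficients are regrouped into four spatial-frequency subbands, the low-frequency subband being a reduced-resolution version of the image.

```python
import numpy as np

H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])

def hadamard_2x2_subbands(img):
    """Apply a 2x2 Walsh-Hadamard transform to each block and regroup
    the four coefficients per block into four subbands."""
    rows, cols = img.shape
    assert rows % 2 == 0 and cols % 2 == 0
    # View the image as (rows/2, 2, cols/2, 2) blocks.
    blocks = img.reshape(rows // 2, 2, cols // 2, 2).astype(float)
    # coef = H * block * H^T / 2 for every 2x2 block.
    coef = np.einsum('ij,rjck,kl->ricl', H2, blocks, H2.T) / 2.0
    ll = coef[:, 0, :, 0]   # low-frequency subband: scaled low-resolution image
    hl = coef[:, 0, :, 1]   # detail subband (column differences)
    lh = coef[:, 1, :, 0]   # detail subband (row differences)
    hh = coef[:, 1, :, 1]   # diagonal detail subband
    return ll, hl, lh, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, hl, lh, hh = hadamard_2x2_subbands(img)
print(ll.shape)  # (4, 4) -- one quarter the number of pixels
```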

  1. A zero-error operational video data compression system

    NASA Technical Reports Server (NTRS)

    Kutz, R. L.

    1973-01-01

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.

  2. Efficient image compression algorithm for computer-animated images

    NASA Astrophysics Data System (ADS)

    Yfantis, Evangelos A.; Au, Matthew Y.; Miel, G.

    1992-10-01

    An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression algorithms and with Lempel-Ziv (LZ) coding. The Lempel-Ziv algorithm is available as the compress/uncompress utility in the UNIX operating system. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and it is intended for use on computer-generated animated images. Comparisons made with the LZ algorithm indicate that the decompression time using our algorithm is faster than that using the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
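
    The abstract does not spell out the paper's specific extension of run-length coding; for reference, a minimal plain run-length encoder/decoder for a row of pixel values (illustrative only, not the authors' algorithm) might look like this:

```python
def rle_encode(row):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    prev, count = row[0], 1
    for v in row[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [7, 7, 7, 7, 0, 0, 255, 255, 255]
encoded = rle_encode(row)
assert rle_decode(encoded) == row        # lossless round trip
print(encoded)                           # [(7, 4), (0, 2), (255, 3)]
```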

  3. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, chosen according to the MSE incurred when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
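
    The AMBTC step referenced above is standard: each block is reduced to a bitmap plus two reconstruction levels. The sketch below shows only that baseline step, not the quadtree partition or the adaptive bit-plane selection proposed in the paper.

```python
import numpy as np

def ambtc_block(block):
    """Absolute moment BTC for one block: bitmap plus low/high means."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, lo, hi

def ambtc_reconstruct(bitmap, lo, hi):
    """Decode: high mean where the bitmap is set, low mean elsewhere."""
    return np.where(bitmap, hi, lo)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
bitmap, lo, hi = ambtc_block(block)
recon = ambtc_reconstruct(bitmap, lo, hi)
mse = np.mean((block - recon) ** 2)
print(f"low={lo:.1f}, high={hi:.1f}, block MSE={mse:.2f}")
```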

  4. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Weston, Tyler

    2016-02-01

    We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.

  5. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    Some form-recognition applications that require multiple images (facial identification or sign language) need many images to be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature, but for optical correlation they cannot be deployed independently and in a cascade manner. Otherwise, the system suffers from two major problems: first, these techniques cannot simply be cascaded without considering the impact of one technique on the other; second, a standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach is to multiplex the spectra of the different images after a discrete cosine transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations were conducted, and the results corroborate the good performance of our approach. The recording of the multiplexed and encrypted spectra is also optimized using an adapted quantization technique to improve the overall compression rate.

  6. Force balancing in mammographic compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branderhorst, W., E-mail: w.branderhorst@amc.nl; Groot, J. E. de; Lier, M. G. J. T. B. van

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast compression, even small changes in the image receptor height can lead to a severe imbalance of the applied forces. This may make the procedure more painful than necessary and, in case the image receptor is set too low, may lead to image quality issues and increased radiation dose due to undercompression. In practice, these effects can be reduced by monitoring the force imbalance and actively adjusting the position of the image receptor throughout the compression.

  7. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because the application involves voluminous medical image data and image streams generated at interactive frame rates, deploying adjustable lossy-to-lossless compression techniques is essential to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results were obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomical volume data (e.g., CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
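
    The 30 dB threshold quoted above refers to the usual peak signal-to-noise ratio; a straightforward computation for 8-bit data (the peak value of 255 is an assumption here, since CT data may use a larger dynamic range) is:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two arrays."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
volume_slice = rng.integers(0, 256, size=(128, 128))          # synthetic slice
degraded = np.clip(volume_slice + rng.normal(0, 5, volume_slice.shape), 0, 255)
print(f"PSNR = {psnr(volume_slice, degraded):.1f} dB")
```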

  8. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
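
    Rice coding, the lossless algorithm named in the patent, represents each non-negative value as a unary quotient followed by k remainder bits. A minimal software encoder/decoder (illustrative only, not the patented GPU implementation) is shown below.

```python
def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer with parameter k > 0."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')   # unary quotient, then k bits

def rice_decode(bits, k):
    q = 0
    i = 0
    while bits[i] == '1':        # count the unary prefix
        q += 1
        i += 1
    i += 1                       # skip the terminating '0'
    r = int(bits[i:i + k], 2)
    return (q << k) | r, i + k   # decoded value and bits consumed

for v in (0, 3, 9, 20):
    code = rice_encode(v, k=2)
    decoded, _ = rice_decode(code, k=2)
    assert decoded == v
    print(v, '->', code)
```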

  9. Application of alkaliphilic biofilm-forming bacteria to improve compressive strength of cement-sand mortar.

    PubMed

    Park, Sung-Jin; Chun, Woo-Young; Kim, Wha-Jung; Ghim, Sa-Youl

    2012-03-01

    The application of microorganisms in the field of construction materials is increasing rapidly worldwide; however, almost all studies to date have investigated bacterial sources with mineral-producing activity rather than organic substances. The advantage of using bacteria as an organic agent is that it could improve the durability of cement material. This study aimed to assess the use of biofilm-forming microorganisms as binding agents to increase the compressive strength of cement-sand material. We isolated 13 alkaliphilic biofilm-forming bacteria (ABB) from a cement tetrapod block in the West Sea, Korea. Using 16S rRNA sequence analysis, the ABB were partially identified as Bacillus algicola KNUC501 and Exiguobacterium marinum KNUC513. KNUC513 was selected for further study following analysis of pH and biofilm formation. Cement-sand mortar cubes containing KNUC513 exhibited greater compressive strength than those containing mineral-forming bacteria (Sporosarcina pasteurii and Arthrobacter crystallopoietes KNUC403). To determine the biofilm effect, DNase I was used to suppress the biofilm formation of KNUC513. Field emission scanning electron microscopy images revealed the direct involvement of organic-inorganic substances in the cement-sand mortar.

  10. Shaping the spectrum of random-phase radar waveforms

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2017-05-09

    The various technologies presented herein relate to generation of a desired waveform profile in the form of a spectrum of apparently random noise (e.g., white noise or colored noise), but with precise spectral characteristics. Hence, a waveform profile that could be readily determined (e.g., by a spoofing system) is effectively obscured. Obscuration is achieved by dividing the waveform into a series of chips, each with an assigned frequency, wherein the sequence of chips are subsequently randomized. Randomization can be a function of the application of a key to the chip sequence. During processing of the echo pulse, a copy of the randomized transmitted pulse is recovered or regenerated against which the received echo is correlated. Hence, with the echo energy range-compressed in this manner, it is possible to generate a radar image with precise impulse response.
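
    A toy numerical illustration of the key-driven chip shuffling and correlation-based range compression described above follows; the chip frequencies, chip length, and key are arbitrary stand-ins, and no spectral shaping or hardware detail from the patent is modeled.

```python
import numpy as np

def build_pulse(chip_freqs, chip_len, fs, key):
    """Concatenate frequency chips, then shuffle the chip order with a key."""
    rng = np.random.default_rng(key)
    order = rng.permutation(len(chip_freqs))
    t = np.arange(chip_len) / fs
    chips = [np.cos(2 * np.pi * f * t) for f in chip_freqs]
    return np.concatenate([chips[i] for i in order])

fs = 1e6                                    # sample rate (Hz), illustrative
chip_freqs = np.linspace(50e3, 250e3, 8)    # one frequency per chip
pulse = build_pulse(chip_freqs, chip_len=64, fs=fs, key=1234)

# Echo: the transmitted pulse delayed inside a noisy receive window.
delay = 300
echo = np.zeros(2048)
echo[delay:delay + pulse.size] += pulse
echo += np.random.default_rng(0).normal(0, 0.5, echo.size)

# Range compression: correlate the echo against a regenerated copy of the
# key-dependent transmitted pulse.
reference = build_pulse(chip_freqs, chip_len=64, fs=fs, key=1234)
corr = np.correlate(echo, reference, mode='valid')
print("estimated delay:", int(np.argmax(np.abs(corr))))  # close to 300
```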

  11. Compression fractures of the back

    MedlinePlus

    ... treatments. Surgery can include: balloon kyphoplasty, vertebroplasty, or spinal fusion. Other surgery may be done to remove bone ... Alternative names: vertebral compression fractures; osteoporosis - compression fracture.

  12. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.

  13. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASICs (application specific integrated circuits) and can be integrated as intellectual property (IP) for part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx Virtex IV LX25 device and ported to a Xilinx prototype board. The current implementation has a critical path of 29.5 ns, which dictated a clock speed of 33 MHz. The critical path delay is the end-to-end measurement between the uncompressed input data and the output compressed data stream. The implementation compresses one sample every clock cycle, which results in a throughput of 33 Msample/s. The implementation has relatively low device utilization on the Xilinx Virtex IV LX25, making the total power consumption of the implementation about 1.27 W.
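
    The abstract summarizes the FL compressor's core as adaptive linear prediction with sign-algorithm weight updates, after which residuals are entropy coded. The sketch below shows only that predictive core on a 1-D stream of synthetic samples; it is a simplification, not the NPO-42517 algorithm, which predicts across spectral bands and neighboring samples.

```python
import numpy as np

def sign_lms_residuals(samples, order=3, step=1e-4):
    """Predict each sample from the previous `order` samples with an
    adaptive linear filter updated by the sign algorithm; return residuals."""
    w = np.zeros(order)
    residuals = []
    for n in range(order, len(samples)):
        context = samples[n - order:n][::-1]        # most recent sample first
        prediction = float(np.dot(w, context))
        e = samples[n] - prediction
        residuals.append(e)
        # Sign algorithm: update weights using only the sign of the error.
        # (A lossless coder would round the prediction so the decoder can
        #  reproduce it exactly and then entropy-code the residuals.)
        w += step * np.sign(e) * context
    return np.array(residuals)

rng = np.random.default_rng(3)
t = np.arange(500)
signal = 100 + 20 * np.sin(t / 15) + rng.normal(0, 1, t.size)  # synthetic data
res = sign_lms_residuals(signal)
print(f"signal std {signal.std():.2f} -> residual std {res.std():.2f}")
```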

  14. Distributed Compressive Sensing vs. Dynamic Compressive Sensing: Improving the Compressive Line Sensing Imaging System through Their Integration

    DTIC Science & Technology

    2015-01-01


  15. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  16. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  17. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  18. Adaptive temporal compressive sensing for video with motion estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Tang, Chaoying; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2018-04-01

    In this paper, we present an adaptive reconstruction method for temporal compressive imaging with pixel-wise exposure. The motion of objects is first estimated from interpolated images obtained with a designed coding mask. With the help of the motion estimation, image blocks are classified according to their degree of motion and reconstructed with the corresponding dictionary, which was trained beforehand. Both simulation and experimental results show that the proposed method can obtain accurate motion information before reconstruction and efficiently reconstruct the compressive video.

  19. Simulation of breast compression in mammography using finite element analysis: A preliminary study

    NASA Astrophysics Data System (ADS)

    Liu, Yan-Lin; Liu, Pei-Yuan; Huang, Mei-Lan; Hsu, Jui-Ting; Han, Ruo-Ping; Wu, Jay

    2017-11-01

    Adequate compression during mammography lowers the absorbed dose in the breast and improves the image quality. The compressed breast thickness (CBT) is affected by various factors, such as breast volume, glandularity, and compression force. In this study, we used the finite element analysis to simulate breast compression and deformation and validated the simulated CBT with clinical mammography results. Image data from ten subjects who had undergone mammography screening and breast magnetic resonance imaging (MRI) were collected, and their breast models were created according to the MR images. The non-linear tissue deformation under 10-16 daN in the cranial-caudal direction was simulated. When the clinical compression force was used, the simulated CBT ranged from 2.34 to 5.90 cm. The absolute difference between the simulated CBT and the clinically measured CBT ranged from 0.5 to 7.1 mm. The simulated CBT had a strong positive linear relationship to breast volume and a weak negative correlation to glandularity. The average simulated CBT under 10, 12, 14, and 16 daN was 5.68, 5.12, 4.67, and 4.25 cm, respectively. Through this study, the relationships between CBT, breast volume, glandularity, and compression force are provided for use in clinical mammography.

  20. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed pushbroom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no custom code table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
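
    Bit-plane encoding is what makes the bit string embedded: coefficient bits are emitted most-significant plane first, so truncating the stream at any point gives the best reconstruction available at that rate. A toy illustration (magnitudes only, with no lapped transform or entropy coding) follows.

```python
import numpy as np

def bitplane_stream(coeffs, num_planes=8):
    """Emit coefficient magnitude bits, most significant plane first."""
    mags = np.abs(coeffs).astype(int).ravel()
    stream = []
    for plane in range(num_planes - 1, -1, -1):
        stream.extend(((mags >> plane) & 1).tolist())
    return stream

def reconstruct(stream, shape, num_planes=8):
    """Rebuild magnitudes from however many bits of the stream were kept."""
    n = int(np.prod(shape))
    mags = np.zeros(n, dtype=int)
    for i, bit in enumerate(stream):
        plane = num_planes - 1 - (i // n)
        mags[i % n] |= bit << plane
    return mags.reshape(shape)

coeffs = np.array([[200, 37], [5, 1]])      # stand-in transform coefficients
full = bitplane_stream(coeffs)
for keep in (8, 16, 32):                    # truncate the embedded stream
    approx = reconstruct(full[:keep], coeffs.shape)
    print(f"{keep:2d} bits kept -> {approx.ravel()}")
```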

  1. TU-CD-207-09: Analysis of the 3-D Shape of Patients’ Breast for Breast Imaging and Surgery Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agasthya, G; Sechopoulos, I

    2015-06-15

    Purpose: Develop a method to accurately capture the 3-D shape of patients’ external breast surface before and during breast compression for mammography/tomosynthesis. Methods: During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3-D breast surface imaging during breast compression and imaging for the cranio-caudal (CC) view on a digital mammography/breast tomosynthesis system. Digital projectors and cameras mounted on tripods were used to acquire 3-D surface images of the breast in three conditions: (a) positioned on the support paddle before compression, (b) during compression by the compression paddle, and (c) the anterior-posterior view with the breast in its natural, unsupported position. The breast was compressed to standard full compression with the compression paddle and a tomosynthesis image was acquired simultaneously with the 3-D surface. The 3-D surface curvature and deformation with respect to the uncompressed surface was analyzed using contours. The 3-D surfaces were voxelized to capture breast shape in a format that can be manipulated for further analysis. Results: A protocol was developed to accurately capture the 3-D shape of patients’ breasts before and during compression for mammography. Using a pair of 3-D scanners, the 50 patient breasts were scanned in three conditions, resulting in accurate representations of the breast surfaces. The surfaces were post-processed, analyzed using contours, and voxelized with 1 mm³ voxels, converting the breast shape into a format that can be easily modified as required. Conclusion: Accurate characterization of the breast curvature and shape for the generation of 3-D models is possible. These models can be used for various applications such as improving breast dosimetry, accurate scatter estimation, conducting virtual clinical trials, and validating compression algorithms. Ioannis Sechopoulos is a consultant for Fuji Medical Systems USA.

  2. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.

  3. Research on the principle and experimentation of optical compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2013-12-01

    Optical compressive spectral imaging is a novel spectral imaging technique inspired by compressed sensing, which offers advantages such as a reduced volume of acquired data, snapshot imaging, and an increased signal-to-noise ratio. Because the sampling quality affects the final imaging quality, previously reported systems match the sampling interval with the modulation interval, but the reduced sampling rate causes a loss of the original spectral resolution. To overcome this defect, the requirement that the sampling interval match the modulation interval is dropped, and the number of spectral channels of the designed experimental device increases more than threefold compared to the previous method. An imaging experiment was carried out with the experimental setup, and the spectral data cube of the target was reconstructed from the acquired compressed image using the two-step iterative shrinkage/thresholding (TwIST) algorithm. The experimental results indicate that the number of spectral channels increases effectively and the reconstructed data remain high-fidelity. The images and spectral curves accurately reflect the spatial and spectral character of the target.
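
    The TwIST reconstruction cited above belongs to the family of iterative shrinkage/thresholding solvers for sparse recovery. The sketch below implements the plain one-step ISTA variant on a random sensing matrix purely to show the structure of such a solver; it is not the authors' instrument model, and all sizes are arbitrary.

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, iters=500):
    """Iterative shrinkage/thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                   # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
n, m, k = 128, 48, 5                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1, (m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```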

  4. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements, and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memories and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, small memories, low complexity, low power, and push-broom processing are important requirements.
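
    As a reminder of what a reversible wavelet stage computes, here is one level of the LeGall 5/3 lifting transform on a single row, the filter pair used for lossless coding in JPEG2000 (boundary handling simplified to edge replication); the block-based scheduling that distinguishes the LWT architecture is not shown.

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform on a 1-D
    signal of even length, with simple edge replication at the boundaries."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: high-pass (detail) coefficients.
    right = np.append(even[1:], even[-1])
    d = odd - ((even + right) >> 1)
    # Update step: low-pass (approximation) coefficients.
    left = np.insert(d[:-1], 0, d[0])
    s = even + ((left + d + 2) >> 2)
    return s, d

def legall53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)      # undo the update step
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)       # undo the predict step
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

row = np.array([10, 12, 14, 13, 9, 7, 8, 20])
s, d = legall53_forward(row)
assert np.array_equal(legall53_inverse(s, d), row)   # perfectly reversible
print("low-pass:", s, "high-pass:", d)
```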

  5. Overview of the JPEG XS objective evaluation procedures

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit

    2017-09-01

    JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally known as the ISO/IEC SC29 WG1 group, that aims at standardizing a low-latency, lightweight, and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure XSM performance using the HDR-VDP metric.

  6. Confirmation of translatability and functionality certifies the dual endothelin1/VEGFsp receptor (DEspR) protein.

    PubMed

    Herrera, Victoria L M; Steffen, Martin; Moran, Ann Marie; Tan, Glaiza A; Pasion, Khristine A; Rivera, Keith; Pappin, Darryl J; Ruiz-Opazo, Nelson

    2016-06-14

    In contrast to rat and mouse databases, the NCBI gene database lists the human dual-endothelin1/VEGFsp receptor (DEspR, formerly Dear) as a unitary transcribed pseudogene due to a stop [TGA]-codon at codon#14 in automated DNA and RNA sequences. However, re-analysis is needed given prior single gene studies detected a tryptophan [TGG]-codon#14 by manual Sanger sequencing, demonstrated DEspR translatability and functionality, and since the demonstration of actual non-translatability through expression studies, the standard-of-excellence for pseudogene designation, has not been performed. Re-analysis must meet UNIPROT criteria for demonstration of a protein's existence at the highest (protein) level, which a priori, would override DNA- or RNA-based deductions. To dissect the nucleotide sequence discrepancy, we performed Maxam-Gilbert sequencing and reviewed 727 RNA-seq entries. To comply with the highest level multiple UNIPROT criteria for determining DEspR's existence, we performed various experiments using multiple anti-DEspR monoclonal antibodies (mAbs) targeting distinct DEspR epitopes with one spanning the contested tryptophan [TGG]-codon#14, assessing: (a) DEspR protein expression, (b) predicted full-length protein size, (c) sequence-predicted protein-specific properties beyond codon#14: receptor glycosylation and internalization, (d) protein-partner interactions, and (e) DEspR functionality via DEspR-inhibition effects. Maxam-Gilbert sequencing and some RNA-seq entries demonstrate two guanines, hence a tryptophan [TGG]-codon#14 within a compression site spanning an error-prone compression sequence motif. Western blot analysis using anti-DEspR mAbs targeting distinct DEspR epitopes detect the identical glycosylated 17.5 kDa pull-down protein. Decrease in DEspR-protein size after PNGase-F digest demonstrates post-translational glycosylation, concordant with the consensus-glycosylation site beyond codon#14. Like other small single-transmembrane proteins, mass spectrometry analysis of anti-DEspR mAb pull-down proteins do not detect DEspR, but detect DEspR-protein interactions with proteins implicated in intracellular trafficking and cancer. FACS analyses also detect DEspR-protein in different human cancer stem-like cells (CSCs). DEspR-inhibition studies identify DEspR-roles in CSC survival and growth. Live cell imaging detects fluorescently-labeled anti-DEspR mAb targeted-receptor internalization, concordant with the single internalization-recognition sequence also located beyond codon#14. Data confirm translatability of DEspR, the full-length DEspR protein beyond codon#14, and elucidate DEspR-specific functionality. Along with detection of the tryptophan [TGG]-codon#14 within an error-prone compression site, cumulative data demonstrating DEspR protein existence fulfill multiple UNIPROT criteria, thus refuting its pseudogene designation.

  7. Entropy reduction via simplified image contourization

    NASA Technical Reports Server (NTRS)

    Turner, Martin J.

    1993-01-01

    The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.

  8. Design of light-small high-speed image data processing system

    NASA Astrophysics Data System (ADS)

    Yang, Jinbao; Feng, Xue; Li, Fei

    2015-10-01

    A light, small, high-speed image data processing system was designed to meet the image data processing requirements of aerospace applications. The system is built around an FPGA, a DSP, and an MCU (microcontroller), implementing video compression of 3-million-pixel images at 15 frames per second and real-time return of the compressed images to the host system. The programmability of the FPGA, a high-performance image compression IC, and a configurable MCU are fully exploited to improve integration. In addition, a combined hardware/software board design was introduced and the PCB layout was optimized. As a result, the system achieves miniaturization, light weight, and fast heat dissipation. Experiments show that the system's functions were designed correctly and operate stably. The system can be widely used in the area of light, small imaging.

  9. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    ... contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: OutGuess [13], F5 [16], model-based ... also compressed these images again, since some of the steganography methods double-compress the images. Stego-images are generated by embedding ... randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined ...

  10. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that omits the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.

  11. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh-definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
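
    Backward compatibility hinges on pairing a tone-mapped LDR base layer with information that lets an HDR-aware decoder recover the original range. As a minimal illustration of the first half of that idea, the global Reinhard operator below maps HDR luminance into [0, 1) for a base layer; it is an example operator, not the one evaluated in the paper.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale by the log-average luminance,
    then compress with L/(1+L) into the range [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)

rng = np.random.default_rng(5)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(4, 4))   # synthetic HDR luminance
ldr = reinhard_tonemap(hdr)
print("HDR range:", hdr.min().round(3), "-", hdr.max().round(3))
print("LDR range:", ldr.min().round(3), "-", ldr.max().round(3))
```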

  12. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.

  13. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  14. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  15. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  16. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.

    2009-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half-top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb/s of electron data while the DIS generates 1.1 Mb/s of ion data, yielding an FPI total data rate of 6.6 Mb/s. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb/s. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. The compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements and Cluster/CIS ion measurements. Topics to be discussed include: (i) review of the compression algorithm; (ii) data quality; (iii) data formatting/organization; (iv) compression optimization; (v) investigation of pseudo-log precompression; and (vi) analysis of compression effectiveness for burst mode as well as fast survey mode data packets for both electron and ion data. We conclude with a presentation of the currently base-lined FPI data compression approach.

  17. Clinical validation of different echocardiographic motion pictures expert group-4 algorithms and compression levels for telemedicine.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D

    2004-01-01

    Tele-echocardiography is not widely used because of lengthy transmission times when using standard Moving Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high-bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction of echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65%, and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File dimensions decreased from an uncompressed range of 12-83 MB to an MPEG-4 range of 0.03-2.3 MB. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through use of low-bandwidth lines.

  18. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system comprising: an image compressor; an image decompressor correlative to the image compressor, having an input connected to the output of the image compressor; a feedback summing node having one input connected to the output of the image decompressor; a picture memory having an input connected to the output of the feedback summing node; apparatus for comparing an image stored in the picture memory with a received input image, deducing the pixels at which the stored and received images differ, retrieving from the picture memory a partial image consisting of only those pixels, and applying the partial image to another input of the feedback summing node, thereby producing at the output of the feedback summing node an updated decompressed image; and a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image, thereby producing a compressed difference image at the output of the image compressor.
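
    Stripped of the patent language, this describes the classic closed-loop predictive structure of video coders: compress the difference between the incoming frame and the decoder's copy of the previous frame, then update that copy from the compressed data so encoder and decoder stay in sync. A bare-bones sketch follows, with uniform quantization standing in for the image compressor and no winner-take-all neural network stage.

```python
import numpy as np

def compress(residual, q=8):
    """Stand-in 'image compressor': coarse uniform quantization."""
    return np.round(residual / q).astype(int)

def decompress(code, q=8):
    return code * q

def encode_sequence(frames, q=8):
    """Closed-loop frame-difference coding: the encoder keeps the same
    picture memory the decoder will have, so errors do not accumulate."""
    picture_memory = np.zeros_like(frames[0], dtype=float)
    codes, reconstructions = [], []
    for frame in frames:
        residual = frame - picture_memory                        # subtraction node
        code = compress(residual, q)                             # image compressor
        picture_memory = picture_memory + decompress(code, q)    # feedback summing node
        codes.append(code)
        reconstructions.append(picture_memory.copy())
    return codes, reconstructions

rng = np.random.default_rng(6)
base = rng.integers(0, 256, size=(8, 8)).astype(float)
frames = [base + 2.0 * t for t in range(5)]        # slowly brightening scene
codes, recons = encode_sequence(frames)
errors = [np.abs(f - r).max() for f, r in zip(frames, recons)]
print("max reconstruction error per frame:", errors)
```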

  19. Performance of a Space-Based Wavelet Compressor for Plasma Count Data on the MMS Fast Plasma Investigation

    NASA Technical Reports Server (NTRS)

    Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.

    2017-01-01

    Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Browning, Nigel D.

    Traditionally, microscopists have worked with the Nyquist-Shannon theory of sampling, which states that to be able to reconstruct the image fully it needs to be sampled at a rate of at least twice the highest spatial frequency in the image. This sampling rate assumes that the image is sampled at regular intervals and that every pixel contains information that is crucial for the image (it even assumes that noise is important). Images in general, and especially low dose S/TEM images, contain significantly less information than can be encoded by a grid of pixels (which is why image compression works). Mathematically speaking, the image data has a low dimensional or sparse representation. Through the application of compressive sensing methods [1,2,3] this representation can be found using pre-designed measurements that are usually random for implementation simplicity. These measurements and the compressive sensing reconstruction algorithms have the added benefit of reducing noise. This reconstruction approach can be extended into higher dimensions, whereby the random sampling in each 2-D image can be extended into: a sequence of tomographic projections (i.e. tilt images); a sequence of video frames (i.e. incorporating temporal resolution and dynamics); spectral resolution (i.e. energy filtering an image to see the distribution of elements); and ptychography (i.e. sampling a full diffraction image at each location in a 2-D grid across the sample). This approach has been employed experimentally for materials science samples requiring low-dose imaging [2], and can be readily applied to biological samples. Figure 1 shows the resolution possible in a complex biological system, mouse pancreatic islet beta cells [4], when tomogram slices are reconstructed using subsampling. Reducing the number of pixels (1/6 pix and 1/3*1/3) shows minimal degradation compared to the reconstructions using all pixels (all data and 1/3 tilt). However, subsampling 1/6 of the tilts (1/6 of overall dose) degrades the reconstruction to the point that the cellular structures cannot be identified. Using 1/3 of both the pixels and the tilts provides a high quality image at 1/9 the overall dose even for this most basic and rapid demonstration of the CS methods. Figure 2 demonstrates the theoretical tomogram reconstruction quality (vertical axis) as undersampling (horizontal axis) is increased; we examined subsampling pixels and tilt-angles individually and a combined approach in which both pixels and tilts are sub-sampled. Note that subsampling pixels maintains high quality reconstructions (solid lines). Using the inpainting algorithm to obtain tomograms can automatically reduce the dose applied to the system by an order of magnitude. Perhaps the best way to understand the impact is to consider that by using inpainting (and with minimal hardware changes), a sample that can normally withstand a dose of ~10 e/Å² can potentially be imaged with an “equivalent quality” to a dose level of 10³ e/Å². To put this in perspective, this is approaching the dose level used for the most advanced images, in terms of spatial resolution, for inorganic systems. While there are issues for biological specimens beyond dose (structural complexity being the most important one), this sampling approach allows the methods that are traditionally used for materials science to be applied to biological systems [5]. References: [1] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), pp. 41. [2] L Kovarik, A Stevens, A Liyu et al. Appl. Phys. Lett. 109, 164102 (2016). [3] A Stevens, L Kovarik, P Abellan et al. Adv. Structural and Chemical Imaging 1(10), (2015), pp. 1. [4] MD Guay, W Czaja, MA Aronova et al. Scientific Reports 6, 27614 (2016). [5] Supported by the Chemical Imaging, Signature Discovery, and Analytics in Motion Initiatives at PNNL. PNNL is operated by Battelle Memorial Inst. for the US DOE; contract DE-AC05-76RL01830.
