Sample records for compression technique based

  1. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
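
    As a concrete illustration of the spectral decorrelation step, the sketch below applies a Karhunen-Loève (principal-component) transform across the band dimension of a NumPy cube. This is only one of the decorrelators the abstract mentions; the band-major (bands, rows, cols) layout is an assumption, and the paper's wavelet coder and distortion metrics are not reproduced.

      import numpy as np

      def spectral_decorrelate(cube):
          # cube: (bands, rows, cols); project each pixel's band vector onto
          # the eigenvectors of the band-to-band covariance matrix.
          bands, rows, cols = cube.shape
          x = cube.reshape(bands, -1).astype(np.float64)
          mean = x.mean(axis=1, keepdims=True)
          basis = np.linalg.eigh(np.cov(x))[1][:, ::-1]  # strongest band first
          y = basis.T @ (x - mean)                       # decorrelated bands
          return y.reshape(bands, rows, cols), basis, mean

      # Each decorrelated band image would then go to the 2-D wavelet coder;
      # the inverse is x = basis @ y + mean.
      decorr, basis, mean = spectral_decorrelate(np.random.rand(6, 64, 64))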

  2. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
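
    The remap-then-segment idea can be pictured in a few lines. The following sketch (not the authors' code) remaps an image with horizontal DPCM and tags each block by its zeroth-order entropy; the 32-pixel block size and 4 bits/pixel threshold are invented for the example, and the paper's rule base and high-order arithmetic coder are omitted.

      import numpy as np

      def entropy(block):
          # Zeroth-order entropy in bits/pixel.
          _, counts = np.unique(block, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def dpcm_remap(image):
          # Replace each pixel with its left-neighbor residual (lower entropy).
          res = image.astype(np.int32).copy()
          res[:, 1:] -= image[:, :-1].astype(np.int32)
          return res

      def tag_blocks(image, block=32, threshold=4.0):
          # Stand-in rule base: label each block with a coding decision.
          res = dpcm_remap(image)
          return {(r, c): "arithmetic"
                  if entropy(res[r:r+block, c:c+block]) < threshold else "raw"
                  for r in range(0, image.shape[0], block)
                  for c in range(0, image.shape[1], block)}

      img = (np.random.rand(128, 128) * 255).astype(np.uint8)
      print(tag_blocks(img)[(0, 0)])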

  3. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  4. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
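
    The kind of rate-versus-throughput measurement described here is easy to reproduce in miniature. The sketch below benchmarks two lossless codecs from the Python standard library on synthetic float32 particle data; zlib and lzma merely stand in for the Blosc and XZ Utils codecs of the study, and the lossy FPZIP and ZFP methods need external libraries and are omitted.

      import lzma, time, zlib
      import numpy as np

      def benchmark(name, compress, data):
          t0 = time.perf_counter()
          packed = compress(data)
          dt = time.perf_counter() - t0
          print(f"{name:5s} ratio={len(data)/len(packed):5.2f} "
                f"throughput={len(data)/dt/1e6:7.1f} MB/s")

      # Stand-in for one time step of unstructured particle data (x, y, z, v).
      particles = np.random.rand(1_000_000, 4).astype(np.float32).tobytes()

      benchmark("zlib", lambda d: zlib.compress(d, 6), particles)
      benchmark("lzma", lambda d: lzma.compress(d, preset=6), particles)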

  5. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  6. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.

  7. An image compression survey and algorithm switching based on scene activity

    NASA Technical Reports Server (NTRS)

    Hart, M. M.

    1985-01-01

    Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.

  8. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
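
    The inter-plane decorrelation can be pictured with a simplified spatial-domain sketch (the paper performs the equivalent operations in the DCT domain, and its exact block-adaptive operator is not reproduced): each block of the plane being compressed is predicted from the co-located block of an already-printed plane by a least-squares gain, and only the residual goes to the lossy coder.

      import numpy as np

      def decorrelate_plane(target, reference, block=8):
          # Per-block least-squares prediction of one color plane from another;
          # the residual is what the spatial lossy (JPEG) stage would compress.
          # A real coder would also have to store the per-block gains.
          t = target.astype(np.float64)
          x = reference.astype(np.float64)
          residual = np.empty_like(t)
          for r in range(0, t.shape[0], block):
              for c in range(0, t.shape[1], block):
                  tb = t[r:r+block, c:c+block]
                  xb = x[r:r+block, c:c+block]
                  a = (xb * tb).sum() / max((xb * xb).sum(), 1e-12)  # LS gain
                  residual[r:r+block, c:c+block] = tb - a * xb
          return residual

      cyan = np.random.rand(64, 64)
      magenta = 0.7 * cyan + 0.1 * np.random.rand(64, 64)
      print(decorrelate_plane(magenta, cyan).var(), magenta.var())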

  9. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
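
    For reference, a minimal NumPy version of the NMSE figure of merit used above might look as follows; the energy-normalized definition shown is one common convention, and the dissertation's exact normalization is not spelled out in the abstract.

      import numpy as np

      def nmse(original, reconstructed):
          # Normalized mean-square error of the difference image.
          o = original.astype(np.float64)
          d = o - reconstructed.astype(np.float64)
          return float((d ** 2).sum() / (o ** 2).sum())

      a = np.random.rand(512, 512)
      print(f"NMSE = {nmse(a, a + 0.01 * np.random.randn(512, 512)):.6f}")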

  10. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content-based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  11. Lossless compression techniques for maskless lithography data

    NASA Astrophysics Data System (ADS)

    Dai, Vito; Zakhor, Avideh

    2002-07-01

    Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
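
    The decoder-writer chip's core loop is easiest to see in the classic triple-token formulation of LZ77 decompression, sketched below; the actual ZIP/DEFLATE bitstream adds Huffman-coded length/distance pairs, and the hardware design parameters of the paper are not reproduced.

      def lz77_decode(tokens):
          # Each token is (offset, length, literal): copy `length` bytes from
          # `offset` bytes back in the output, then append the literal.
          out = bytearray()
          for offset, length, literal in tokens:
              for _ in range(length):        # byte-at-a-time handles overlaps
                  out.append(out[-offset])
              out.append(literal)
          return bytes(out)

      # 'abcabcabcx' via a self-overlapping match (offset 3, length 6).
      tokens = [(0, 0, ord('a')), (0, 0, ord('b')), (0, 0, ord('c')),
                (3, 6, ord('x'))]
      print(lz77_decode(tokens))             # b'abcabcabcx'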

  12. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  13. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  14. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that demand high fidelity, because of its lower complexity and better compression ratios compared with the lossless JPEG standard. But it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to decrease the range of error diffusion. We provide JPEG-LS lossless compression for the image blocks which include the whole or part of the region of interest (ROI) and JPEG-LS near-lossless compression for the image blocks which are included in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  15. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user-specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  16. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier transform and Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
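
    Orthogonal matching pursuit itself is compact enough to sketch; the version below is the textbook greedy solver and assumes a generic random sensing matrix, whereas the paper builds its projection matrix from the radiographic geometry and sparsifies with Fourier and Daubechies wavelet bases.

      import numpy as np

      def omp(A, y, sparsity):
          # Greedy sparse solve of y ~ A x: pick the best-matching column,
          # re-fit by least squares on the support, repeat.
          residual = y.copy()
          support = []
          x = np.zeros(A.shape[1])
          for _ in range(sparsity):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              sub = A[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
              residual = y - sub @ coef
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 128)) / np.sqrt(60)
      x_true = np.zeros(128)
      x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
      x_hat = omp(A, A @ x_true, 5)
      print(np.linalg.norm(x_hat - x_true))  # near zero on success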

  17. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  18. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is challenging when both the size and the quality of the transmitted multimedia matter. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
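
    Of the two encryption ingredients, the chaotic logistic map is simple enough to sketch. The fragment below (an illustration, not the authors' cipher) turns the map's orbit into a keystream and XORs it over a byte buffer; the A5 half of the scheme is omitted, and all parameter values are invented.

      import numpy as np

      def logistic_keystream(n, x0, r=3.99):
          # Iterate x <- r*x*(1-x) and quantize each state to a byte.
          ks = np.empty(n, dtype=np.uint8)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              ks[i] = int(x * 256) & 0xFF
          return ks

      def xor_cipher(data, x0):
          # XOR is its own inverse, so this both encrypts and decrypts.
          buf = np.frombuffer(data, dtype=np.uint8)
          return (buf ^ logistic_keystream(buf.size, x0)).tobytes()

      coeffs = b"significant wavelet coefficients and parameters"
      cipher = xor_cipher(coeffs, 0.31415)
      assert xor_cipher(cipher, 0.31415) == coeffs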

  19. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.

  20. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  1. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression. Clustering along with regression ensures more accurate curve fitting between the dependent and independent variables. In this work, a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures fewer prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression gives minimum errors for predicting the compressive strength of concrete; the fuzzy C-means clustering algorithm also performs better than the K-means algorithm.
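
    A minimal two-stage sketch of the cluster-then-regress pipeline, using scikit-learn stand-ins, might look as follows; K-means plus linear regression is shown because scikit-learn has no fuzzy C-means, and the feature set and cluster count are invented for the example.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      def fit_cluster_regression(X, y, n_clusters=3):
          # Stage 1: group similar mixtures; stage 2: one regressor per group.
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
          models = {k: LinearRegression().fit(X[km.labels_ == k],
                                              y[km.labels_ == k])
                    for k in range(n_clusters)}
          return km, models

      def predict_strength(km, models, X):
          return np.array([models[k].predict(x[None, :])[0]
                           for k, x in zip(km.predict(X), X)])

      # Hypothetical mix features (cement, water, aggregate, age) -> strength.
      rng = np.random.default_rng(1)
      X = rng.random((300, 4))
      y = 50 * X[:, 0] - 20 * X[:, 1] + 5 * rng.random(300)
      km, models = fit_cluster_regression(X, y)
      print(predict_strength(km, models, X[:3]), y[:3])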

  2. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
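
    A toy NumPy version of the patent's pipeline (decimate, compress, decompress, interpolate, sharpen) is sketched below; the JPEG/wavelet stage is left as a placeholder, and the unsharp-mask sharpener is only an illustrative stand-in, since the patent describes its own sharpening techniques.

      import numpy as np

      def decimate2(img):
          # 2x2 block averaging: the two-dimensional decimation step.
          return (img[0::2, 0::2] + img[1::2, 0::2] +
                  img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

      def interpolate2(img):
          # Expand back to the original array size (nearest neighbor).
          return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

      def unsharp(img, amount=1.0):
          # Sharpen edges by adding back the difference from a 3x3 box blur.
          pad = np.pad(img, 1, mode='edge')
          blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
          return img + amount * (img - blur)

      img = np.random.rand(256, 256)
      small = decimate2(img)
      # ... JPEG (or wavelet) compression/decompression of `small` here ...
      restored = unsharp(interpolate2(small))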

  3. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  4. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
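
    The bit-plane half of such a coder is easy to picture: emitting planes from most to least significant yields an embedded string that can be cut at any point. The sketch below shows only that principle on non-negative integer coefficients; the paper's block transform, sign handling, and entropy coding of each plane are omitted.

      import numpy as np

      def bitplanes(coeffs, nbits=8):
          # Most-significant plane first: truncating the stream early still
          # yields the best reconstruction for that many bits.
          return [((coeffs >> b) & 1).astype(np.uint8)
                  for b in range(nbits - 1, -1, -1)]

      def reconstruct(planes):
          nbits = len(planes)
          out = np.zeros(planes[0].shape, dtype=np.int64)
          for i, p in enumerate(planes):
              out |= p.astype(np.int64) << (nbits - 1 - i)
          return out

      c = np.array([[5, 200], [17, 63]], dtype=np.int64)
      assert np.array_equal(reconstruct(bitplanes(c)), c)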

  5. Low cost voice compression for mobile digital radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1985-01-01

    A new technique for low cost, robust voice compression at 4800 bits per second was studied. The approach was based on using a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.

  6. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirement of high-speed pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bit-plane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to the three-dimensional data cubes produced by hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of development.

  7. Planar temperature measurement in compressible flows using laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.

    1991-01-01

    A laser-induced iodine fluorescence technique that is suitable for the planar measurement of temperature in cold nonreacting compressible air flows is investigated analytically and demonstrated in a known flow field. The technique is based on the temperature dependence of the broadband fluorescence from iodine excited by the 514-nm line of an argon-ion laser. Temperatures ranging from 165 to 245 K were measured in the calibration flow field. This technique makes complete, spatially resolved surveys of temperature practical in highly three-dimensional, low-temperature compressible flows.

  8. Review of Fluorescence-Based Velocimetry Techniques to Study High-Speed Compressible Flows

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Johansen, Criag; Inman, Jennifer A.; Jones, Stephen B.; Danehy, Paul M.

    2013-01-01

    This paper reviews five laser-induced fluorescence-based velocimetry techniques that have been used to study high-speed compressible flows at NASA Langley Research Center. The techniques discussed in this paper include nitric oxide (NO) molecular tagging velocimetry (MTV), nitrogen dioxide photodissociation (NO2-to-NO) MTV, and NO and atomic oxygen (O-atom) Doppler-shift-based velocimetry. Measurements of both single-component and two-component velocity have been performed using these techniques. This paper details the specific application and experiment for which each technique has been used, the facility in which the experiment was performed, the experimental setup, sample results, and a discussion of the lessons learned from each experiment.

  9. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform colorspace enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.

  10. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in the GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
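
    The DRPE building block has a compact NumPy rendering, shown below for orientation; this is the textbook two-mask Fourier-plane scheme, and the paper's phase-retrieval variant and the compressive ghost imaging stage are not reproduced.

      import numpy as np

      def drpe_encrypt(img, seed=42):
          # Random phase masks in the input plane and the Fourier plane.
          rng = np.random.default_rng(seed)
          m1 = np.exp(2j * np.pi * rng.random(img.shape))
          m2 = np.exp(2j * np.pi * rng.random(img.shape))
          return np.fft.ifft2(np.fft.fft2(img * m1) * m2), (m1, m2)

      def drpe_decrypt(cipher, masks):
          m1, m2 = masks
          return np.real(np.fft.ifft2(np.fft.fft2(cipher) / m2) / m1)

      img = np.random.rand(64, 64)
      cipher, masks = drpe_encrypt(img)
      print(np.allclose(drpe_decrypt(cipher, masks), img))  # True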

  11. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The mentioned encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference versus compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors, the combination of SbS and HEVC proved to be competitive for high compression factors, and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference versus compression factor. The proposed schemes are effective; specifically, the modified MMP algorithm can be considered an interesting alternative for isometric signals with respect to traditional SEMG encoders. Besides, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.

  12. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchical trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional 'sibling' relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.

  13. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
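
    The TLLC baseline is only a few lines; the sketch below drops least-significant bits and then applies zlib as a stand-in lossless stage (the paper's comparison codecs, JPEG/DCT, VQ, and model-based VQ, are not reproduced, and the synthetic scene is invented).

      import zlib
      import numpy as np

      def tllc(image, drop_bits):
          # Truncation followed by lossless compression: shift out LSBs,
          # then losslessly compress what remains.
          truncated = (image >> drop_bits).astype(image.dtype)
          packed = zlib.compress(truncated.tobytes(), 9)
          return packed, 8 * len(packed) / image.size   # bits per pixel

      # Synthetic spatially correlated 12-bit scene in uint16 pixels.
      rng = np.random.default_rng(0)
      scene = (np.cumsum(rng.integers(-8, 9, (512, 512)), axis=1) % 4096
               ).astype(np.uint16)
      for b in (0, 2, 4):
          print(f"dropped {b} LSBs -> {tllc(scene, b)[1]:.2f} bits/pixel")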

  14. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  15. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with existing ECG-based biometric techniques, compressed ECG must be decompressed before human identification can be performed. This additional step of decompression creates a significant processing delay for the identification task. This becomes an obvious burden on a system if it must be done for trillions of compressed ECGs per hour by the hospital. Even if the hospital can deploy an expensive infrastructure to tame the exuberant processing load, identification preceded by decompression remains daunting for small intermediate nodes in a multihop network. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the step of decompression and therefore makes biometric identification far less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than those of existing ECG-based biometrics as well as other forms of biometrics like face, finger, and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  16. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  17. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032

  18. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and a sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
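
    The hardware appeal of sparse binary sensing is that each measurement is just a small sum of samples, with no multipliers. The sketch below builds a generic random sparse binary matrix with a fixed number of ones per column; it is a stand-in only, and the paper's deterministic QCAC construction is not reproduced.

      import numpy as np

      def sparse_binary_matrix(m, n, ones_per_column=3, seed=0):
          # Few 1s per column: each of the m measurements sums a handful
          # of the n input samples.
          rng = np.random.default_rng(seed)
          phi = np.zeros((m, n), dtype=np.uint8)
          for col in range(n):
              phi[rng.choice(m, size=ones_per_column, replace=False), col] = 1
          return phi

      n, m = 256, 64                  # 4x compression of a neural-signal frame
      phi = sparse_binary_matrix(m, n)
      frame = np.random.randn(n)      # stand-in for a window of neural samples
      y = phi @ frame                 # compressed measurements to transmit
      print(y.shape, int(phi.sum()))  # (64,) and 3 ones per column = 768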

  1. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable-rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bits/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
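
    Classic fixed-rate BTC, which the paper adapts into an adaptive variable-rate scheme, is shown below as a reference point: each block is reduced to a 1-bit-per-pixel bitmap plus two levels chosen to preserve the block's first two moments.

      import numpy as np

      def btc_encode(block):
          # Moment-preserving BTC: bitmap plus (low, high) output levels.
          mu, sigma = block.mean(), block.std()
          bitmap = block >= mu
          q = int(bitmap.sum())            # pixels at or above the mean
          p = block.size - q
          if q == 0 or p == 0:             # flat block: one level suffices
              return bitmap, mu, mu
          low = mu - sigma * np.sqrt(q / p)
          high = mu + sigma * np.sqrt(p / q)
          return bitmap, low, high

      def btc_decode(bitmap, low, high):
          return np.where(bitmap, high, low)

      block = np.random.rand(4, 4) * 255
      recon = btc_decode(*btc_encode(block))
      print(block.mean(), recon.mean())    # means agree by construction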

  2. Hybrid method based on singular value decomposition and embedded zero tree wavelet technique for ECG signal compression.

    PubMed

    Kumar, Ranjeet; Kumar, A; Singh, G K

    2016-06-01

    In the biomedical field, it is necessary to reduce data quantity due to the limited storage of real-time ambulatory systems and telemedicine systems. Research has long been underway to develop an efficient and simple compression technique. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zero tree wavelet (EZW) techniques for ECG signal compression, which deals with the huge data volumes of ambulatory systems. The proposed method utilizes a low-rank matrix for initial compression of the two-dimensional (2-D) ECG data array using SVD, and then EZW is initiated for the final compression. Construction of the 2-D array is a key pre-processing issue for the proposed technique. Here, three different beat segmentation approaches have been exploited for 2-D array construction, using segmented beat alignment to exploit beat correlation. The proposed algorithm has been tested on the MIT-BIH arrhythmia records, and it was found to be very efficient in compressing different types of ECG signals with low signal distortion under several fidelity assessments. The evaluation results illustrate that the proposed algorithm achieves a compression ratio of 24.25:1 with excellent reconstruction quality, a percentage root-mean-square difference (PRD) of 1.89% for ECG signal Rec. 100, and consumes only 162 bps instead of 3960 bps for the uncompressed data. The proposed method is efficient and flexible in compressing different types of ECG signals, and it controls the quality of reconstruction. Simulation results clearly illustrate that the proposed method can play a big role in saving memory space in health data centres as well as bandwidth in telemedicine-based healthcare systems.
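
    The SVD stage of the hybrid can be sketched with NumPy alone; the fragment below keeps only the leading singular components of a beat-aligned 2-D array (the beat segmentation and the EZW stage of the paper are omitted, and the synthetic beats are invented).

      import numpy as np

      def svd_truncate(ecg2d, rank):
          # Low-rank approximation of the beat-stacked ECG matrix.
          u, s, vt = np.linalg.svd(ecg2d, full_matrices=False)
          approx = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank]
          kept = rank * (u.shape[0] + vt.shape[1] + 1)   # stored values
          return approx, kept / ecg2d.size

      # Beat-aligned rows are highly correlated, so a few components suffice.
      beats = np.tile(np.sin(np.linspace(0, 2 * np.pi, 360)), (64, 1))
      beats += 0.05 * np.random.randn(64, 360)
      approx, frac = svd_truncate(beats, rank=4)
      print(f"kept {frac:.1%} of values, "
            f"max error {np.abs(beats - approx).max():.3f}")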

  3. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who evaluated the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. The evaluations show that wavelet-based coding is suitable for the compression of various pathological images and integrates well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.

  4. Steganographic optical image encryption system based on reversible data hiding and double random phase encoding

    NASA Astrophysics Data System (ADS)

    Chuang, Cheng-Hung; Chen, Yen-Lin

    2013-02-01

    This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. Steganographic optical image encryption systems based on the DRPE technique have been investigated as a way to hide secret data in encrypted images. However, DRPE techniques are vulnerable to attacks, and many of the data hiding methods used in DRPE systems distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted-image quality and performs a prior encryption process, so that the DRPE technique provides a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for additional security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.

  5. Compression technique for large statistical data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggers, S.J.; Olken, F.; Shoshani, A.

    1981-03-01

    The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
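
    The logarithmic access property follows directly from keeping cumulative run-lengths in the header and binary-searching them. The following sketch simplifies the paper's design under stated assumptions (a single suppressed constant, and a flat sorted list standing in for the base level of the B-tree): element i of the logical data stream is located in O(log #runs).

        import bisect

        def build_header(runs):
            """Cumulative end positions of the runs (the search index).

            `runs` is a list of (length, base_offset_or_None); None marks
            a run of the suppressed constant, anything else is the base
            offset of that run inside the stored data.
            """
            ends, total = [], 0
            for length, _ in runs:
                total += length
                ends.append(total)
            return ends

        def lookup(i, runs, ends, stored, constant=0):
            """Fetch logical element i via binary search on the header."""
            r = bisect.bisect_right(ends, i)        # run containing i
            if runs[r][1] is None:                  # suppressed constant run
                return constant
            start = ends[r - 1] if r else 0
            return stored[runs[r][1] + (i - start)]

        # three runs: 5 stored values, 1000 zeros, 3 stored values
        stored = [7, 8, 9, 10, 11, 42, 43, 44]
        runs = [(5, 0), (1000, None), (3, 5)]
        ends = build_header(runs)
        print(lookup(2, runs, ends, stored))     # 9
        print(lookup(500, runs, ends, stored))   # 0 (inside the constant run)
        print(lookup(1006, runs, ends, stored))  # 43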

  6. The application of compressed sensing to long-term acoustic emission-based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David

    2012-04-01

    The acoustic emission (AE) phenomena generated by a rapid release of internal stress in a material offer a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals relaxes the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event that compressed-domain classification finds an event of interest, ℓ1-norm minimization is used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, for increasing robustness to a small amount of arbitrary measurement corruption is investigated.

  7. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation as indices to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  8. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation as indices to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  9. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  10. Compressive self-interference Fresnel digital holography with faithful reconstruction

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Man, Tianlong; Han, Ying; Zhou, Hongqiang; Wang, Dayong

    2017-05-01

    We developed a compressive self-interference digital holographic approach that allows retrieving three-dimensional information of spatially incoherent objects from a single-shot captured hologram. Fresnel incoherent correlation holography is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. The recording scheme is regarded as a compressive forward sensing model; thus a compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed sub-holograms. The concept was verified by simulations and by experiments with a simulated polarizer array. The proposed technique has great potential for 3D tracking of spatially incoherent samples.

  11. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    1. INTRODUCTION. The need for data compression is not new. With humble beginnings such as...the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information...grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique. Largely because of the

  12. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's scheme nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer increased loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets; thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507's performance with TCP connections is improved over Van Jacobson's by various techniques, but it still suffers a performance hit with poor link properties. RFC2507 also offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves the compression schemes, providing better tolerance of conditions with a high BER.

  13. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, the compression results it obtains on digital astronomical images are compared with those of the H-transform-based method used to compress the digitized sky survey at the Space Telescope Science Institute.

  14. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, holding the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by the inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, the technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
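
    Step (1) is easy to sketch numerically. The code below is an illustration, not the authors' implementation: it uses Haar-style averaging and differencing in place of the paper's unspecified wavelet, applies two DWT levels, and takes a DCT of the coarsest low-frequency band; the Minimize-Matrix-Size and arithmetic-coding steps are omitted.

        import numpy as np
        from scipy.fft import dctn

        def haar2d(x):
            """One level of a 2-D Haar-style DWT: returns LL, (LH, HL, HH)."""
            a = (x[0::2, :] + x[1::2, :]) / 2   # row averages
            d = (x[0::2, :] - x[1::2, :]) / 2   # row differences
            ll = (a[:, 0::2] + a[:, 1::2]) / 2
            lh = (a[:, 0::2] - a[:, 1::2]) / 2
            hl = (d[:, 0::2] + d[:, 1::2]) / 2
            hh = (d[:, 0::2] - d[:, 1::2]) / 2
            return ll, (lh, hl, hh)

        rng = np.random.default_rng(0)
        img = rng.random((256, 256))

        ll1, high1 = haar2d(img)             # level-1 DWT
        ll2, high2 = haar2d(ll1)             # level-2 DWT
        dc_matrix = dctn(ll2, norm='ortho')  # DCT on the low-frequency band
        print(dc_matrix.shape)               # (64, 64)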

  15. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixel LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared on the basis of bit reduction and compression ratio. LZW was found to be better than the others and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with, and found to be better than, that of other watermarking schemes. PMID:26839914
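
    Since LZW is the codec the study settled on for watermark compression, a textbook implementation is sketched below. This is a generic LZW encoder, not the TDARWMI code; a real implementation would also pack the emitted codes into variable-width bit fields.

        def lzw_compress(data: bytes):
            """Textbook LZW: grow a dictionary of byte strings, emit codes."""
            table = {bytes([i]): i for i in range(256)}
            w, out = b"", []
            for byte in data:
                wc = w + bytes([byte])
                if wc in table:
                    w = wc                   # extend the current phrase
                else:
                    out.append(table[w])     # emit code for known phrase
                    table[wc] = len(table)   # learn the new phrase
                    w = bytes([byte])
            if w:
                out.append(table[w])
            return out

        payload = b"ABABABABABABAB" * 10
        codes = lzw_compress(payload)
        print(len(payload), "bytes ->", len(codes), "codes")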

  16. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and to support quantizations from 2 to 16 bits.

  17. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  18. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.

  19. Turbulence intensity and spatial integral scale during compression and expansion strokes in a four-cycle reciprocating engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikegami, M.; Shioji, M.; Nishimoto, K.

    1987-01-01

    A laser homodyne technique is applied to measure turbulence intensities and spatial scales during the compression and expansion strokes in a non-fired engine. With this technique, relative fluid motion in a turbulent flow is detected directly, without the cyclic-variation biases caused by fluctuation in the main flow. Experiments are performed at different engine speeds, compression ratios, and induction swirl ratios. In no-swirl cases the turbulence field near the end of compression is almost uniform, whereas in swirled cases both the turbulence intensity and the scale near the cylinder axis are higher than those in the periphery. In addition, based on the measured results, the k-epsilon two-equation turbulence model under the influence of compression is discussed.

  20. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are images, which will be accessed by users across computer networks, and accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application, since decompression of VQ-compressed images is a table-lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use; this is especially true of VQ, which involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
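
    The table-lookup asymmetry that makes VQ attractive for browsing is easy to see in code. The following minimal sketch (random data in place of a trained codebook; the sizes are illustrative) encodes 4x4 image blocks as codebook indices, so decompression is a single array lookup.

        import numpy as np

        def vq_encode(blocks, codebook):
            """Index of the nearest codeword (Euclidean) for each block."""
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1)

        def vq_decode(indices, codebook):
            """Decompression is a pure table lookup, as the entry notes."""
            return codebook[indices]

        rng = np.random.default_rng(0)
        codebook = rng.random((256, 16))   # 256 codewords for 4x4 blocks
        blocks = rng.random((1000, 16))    # image cut into 4x4 vectors
        idx = vq_encode(blocks, codebook)  # one index per block
        recon = vq_decode(idx, codebook)
        print(idx.shape, recon.shape)      # (1000,) (1000, 16)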

  1. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image with an attached QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature: Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  2. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit-plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the area of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.

  3. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
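
    A minimal sketch of the preprocessing idea follows, under assumptions the abstract does not fix (a greedy ordering, Pearson correlation, an illustrative window length): the 1-D signal is cut into fixed windows arranged as columns, and the columns are reordered so that adjacent ones correlate, which makes the resulting 2-D array friendlier to an image encoder. The permutation travels with the compressed data as side information.

        import numpy as np

        def to_matrix(signal, window):
            """Cut a 1-D signal into fixed-length windows (one per column)."""
            n = len(signal) // window
            return signal[:n * window].reshape(n, window).T

        def correlation_sort(mat):
            """Greedy column reordering so neighboring columns correlate."""
            cols = list(range(mat.shape[1]))
            order = [cols.pop(0)]
            while cols:
                ref = mat[:, order[-1]]
                best = max(cols,
                           key=lambda c: np.corrcoef(ref, mat[:, c])[0, 1])
                cols.remove(best)
                order.append(best)
            return mat[:, order], order  # order is sent as side information

        rng = np.random.default_rng(0)
        emg = rng.standard_normal(4096)      # stand-in for an S-EMG record
        mat = to_matrix(emg, window=64)
        sorted_mat, order = correlation_sort(mat)
        print(sorted_mat.shape, len(order))  # (64, 64) 64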

  4. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model...also compressed these images again, since some of the steganography methods double-compress the images. Stego-images are generated by embedding...randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined

  5. An image registration-based technique for noninvasive vascular elastography

    NASA Astrophysics Data System (ADS)

    Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza

    2018-02-01

    Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. Over the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross-correlation-based methods are the most prevalent approach to measuring the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in regions far from the center of the vessel, causing a high displacement-measurement error. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross-correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement under a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method decreases with increasing compression, while the error of the cross-correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square errors (RMSE) of the proposed method and the adaptive cross-correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.

  6. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  7. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

    ...since IBM holds the patent for this technique. Lempel, Ziv, Welch (LZW) Compression: the LZW compression is related to two compression techniques known as...compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a

  8. In vivo optical elastography: stress and strain imaging of human skin lesions

    NASA Astrophysics Data System (ADS)

    Es'haghian, Shaghayegh; Gong, Peijun; Kennedy, Kelsey M.; Wijesinghe, Philip; Sampson, David D.; McLaughlin, Robert A.; Kennedy, Brendan F.

    2015-03-01

    Probing the mechanical properties of skin at high resolution could aid in the assessment of skin pathologies, for example, by detecting the extent of cancerous skin lesions and assessing pathology in burn scars. Here, we present two elastography techniques based on optical coherence tomography (OCT) to probe the local mechanical properties of skin. The first technique, optical palpation, is a high-resolution tactile imaging technique that uses a compliant silicone layer positioned on the tissue surface to measure spatially resolved stress imparted by compressive loading. We assess the performance of optical palpation using a handheld imaging probe on a skin-mimicking phantom and demonstrate its use on human skin. The second technique is a strain imaging technique, phase-sensitive compression OCE, that maps depth-resolved mechanical variations within skin. We show preliminary results of in vivo phase-sensitive compression OCE on a human skin lesion.

  9. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars, such as no need for high-power RF circuitry, no need for high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by the range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator; the results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues on space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this effort is to develop and test a new pulse compression technique that achieves ultra-low range sidelobes, so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could have a significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design. Traditional linear FM is limited (to about -20 log BT -3 dB) in the range sidelobe level achievable through pulse compression. For this study, different combinations of 20- or 40-microsecond chirp pulse width and 2- or 4-MHz chirp bandwidth were used; these are typical operational parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on an FPGA board to generate a real chirp signal, which was sent to the radar transceiver simulator. The final results showed a significant improvement in sidelobe performance compared to that obtained with a traditional linear FM chirp.
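
    A small numerical sketch of pulse compression with the entry's 20-microsecond/2-MHz parameters: a linear FM chirp is compressed by correlating it with itself (matched filtering), and the peak sidelobe is estimated after masking out the mainlobe. The NLFM phase shaping that achieves the paper's improved sidelobes is not reproduced; the sample rate and the mainlobe width used for masking are assumptions.

        import numpy as np

        fs = 20e6                 # sample rate, Hz (assumed)
        tau = 20e-6               # pulse width (20 us, as in the entry)
        bw = 2e6                  # chirp bandwidth (2 MHz, as in the entry)
        t = np.arange(0, tau, 1 / fs)

        # Linear FM chirp; an NLFM design would shape this phase instead.
        chirp = np.exp(1j * np.pi * (bw / tau) * t ** 2)

        # Matched filtering = correlation with the transmitted waveform.
        compressed = np.correlate(chirp, chirp, mode='full')
        mag = np.abs(compressed)
        power_db = 20 * np.log10(mag / mag.max() + 1e-12)

        # Mask out the mainlobe, then read off the peak sidelobe level.
        peak = int(np.argmax(power_db))
        half_width = int(fs / bw)             # ~mainlobe half-width, samples
        mask = np.ones_like(power_db, dtype=bool)
        mask[max(0, peak - half_width):peak + half_width + 1] = False
        print(f"peak sidelobe: {power_db[mask].max():.1f} dB")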

  10. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
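
    The core of the scheme, data-dependent orthogonal bases plus quantization, can be sketched as follows. This is an illustration, not the authors' code: an SVD supplies the orthogonal bases, while the random-walk "clip", the basis count, and the quantization step are illustrative, and the entropy-coding stage is omitted.

        import numpy as np

        def transform_code(clip, n_bases, step):
            """Data-dependent transform coding of one mocap clip.

            The bases come from the clip itself (via SVD), so they adapt
            to the motion; coefficients are uniformly quantized, and the
            quantized coefficients plus the bases would be entropy coded.
            """
            U, s, Vt = np.linalg.svd(clip, full_matrices=False)
            basis = Vt[:n_bases]                 # data-dependent bases
            coeff = clip @ basis.T               # forward transform
            q = np.round(coeff / step).astype(np.int32)
            return q, basis

        def transform_decode(q, basis, step):
            return (q * step) @ basis

        rng = np.random.default_rng(0)
        clip = np.cumsum(rng.standard_normal((120, 60)), axis=0)  # 120 frames
        q, basis = transform_code(clip, n_bases=8, step=0.5)
        recon = transform_decode(q, basis, step=0.5)
        print(np.abs(clip - recon).max())        # reconstruction error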

  11. Optical Measurement Technique for Space Column Characterization

    NASA Technical Reports Server (NTRS)

    Barrows, Danny A.; Watson, Judith J.; Burner, Alpheus W.; Phelps, James E.

    2004-01-01

    A simple optical technique for the structural characterization of lightweight space columns is presented. The technique is useful for determining the coefficient of thermal expansion during cool down as well as the induced strain during tension and compression testing. The technique is based upon object-to-image plane scaling and does not require any photogrammetric calibrations or computations. Examples of the measurement of the coefficient of thermal expansion are presented for several lightweight space columns. Examples of strain measured during tension and compression testing are presented along with comparisons to results obtained with Linear Variable Differential Transformer (LVDT) position transducers.

  12. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data are conducted. Favorable results are obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomical volume data (e.g., CT); for computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach to delivering medical services that will have a significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.

  13. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented in which an image segmentation-based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image, the LCP is 1.89, although when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  14. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  15. Edge compression techniques for visualization of dense directed graphs.

    PubMed

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules', or groups of nodes, such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothesized trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming, which enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although the techniques are applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
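
    The first of the three techniques, grouping nodes with identical neighbor sets, is simple enough to sketch directly (a toy adjacency dictionary; Modular Decomposition and Power Graph Analysis relax this module definition further):

        from collections import defaultdict

        def group_identical_neighbors(adj):
            """Merge nodes whose out-neighbor sets are identical.

            Every module's members can share one bundle of outgoing
            edges, so fewer edges need to be rendered while the full
            graph remains readable (lossless compression).
            """
            groups = defaultdict(list)
            for node, nbrs in adj.items():
                groups[frozenset(nbrs)].append(node)
            return [m for m in groups.values() if len(m) > 1]

        adj = {
            'a': {'x', 'y'}, 'b': {'x', 'y'},  # a and b share a neighbor set
            'c': {'x'}, 'x': {'y'}, 'y': set(),
        }
        print(group_identical_neighbors(adj))  # [['a', 'b']]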

  16. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is typically used to measure the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and, consequently, strong compression.

  17. Machine compliance in compression tests

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro; Ivens, Jan; Lomov, Stepan V.

    2018-05-01

    The compression behavior of a material cannot be accurately determined if machine compliance is not accounted for prior to the measurements. This work discusses machine compliance during a compressibility test with fiberglass fabrics. The thickness variation was measured during loading and unloading cycles with a relaxation stage of 30 minutes between them. The measurements were performed using an indirect technique based on comparing the displacement in a free compression cycle with the displacement measured with a sample. For the free test, no machine relaxation was observed during the relaxation stage. Whether or not relaxation is considered, the characteristic curves for a free compression cycle can be overlapped precisely at the majority of points. For the compression test with a sample, a non-physical thickness decrease of about 30 µm was observed during the relaxation stage, which can be explained by the fabric relaxing more than the machine. Beyond the technique normally used, a second technique was applied that keeps the thickness constant during relaxation: the machine displacement with the sample is simply subtracted from the machine displacement without the sample, the latter being imposed as constant. If imposed as a constant, it remains constant during the relaxation stage and decreases suddenly after relaxation; if continuously calculated, it decreases gradually during the relaxation stage. Independently of the technique used, the final result remains unchanged. The uncertainty introduced by this imprecision is about ±15 µm.

  18. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

    The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.

  19. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating the local image rendering in terms of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  20. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained, wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals, and exploiting both types of correlation is very important for better performance in CS-based ECG telemonitoring systems. However, most of the existing CS-based works exploit only one of the correlations, which results in suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlation simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. The block sparsity of MECG signals in the discrete wavelet transform domain is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.

  1. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely studied in many domains, and they increasingly require volumetric data to be processed in real time. Performance is therefore constrained by hardware resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of the latter algorithm is that it computes the WT with minimal memory usage by processing data as they are acquired; however, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can achieve a speedup factor of 5 compared to the sequential CPU implementation.

  2. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial-spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by the different methods is also assessed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations for specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.

  3. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, as well as their transmission and reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to compression of off-axis digital holograms is considered. A combined technique is studied, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, along with a comparative analysis of the applicability of various wavelets and of the methods for additional compression of wavelet coefficients, from which optimum compression parameters can be estimated. The size of the holographic data was reduced by up to 190 times.

  4. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.

  5. A new technique in reference based DNA sequence compression algorithm: Enabling partial decompression

    NASA Astrophysics Data System (ADS)

    Banerjee, Kakoli; Prasad, R. A.

    2014-10-01

    The volume of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data, and this volume doubles every two and a half years. It is well known that computational resources are limited. The most important resource that genetic data requires for its collection, storage, and retrieval is storage space, and storage is limited; computational performance also depends on storage and execution time, and transmission capability is directly dependent on the size of the data. Hence data compression techniques become an issue of utmost importance when confronting the task of handling gigantic databases like GenBank. Decompression is an equally pressing issue when such huge databases are handled. This paper is intended not only to provide genetic data compression but also to enable partial decompression of the genetic sequences.

  6. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  7. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
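
    The encode/decode loop described above fits in a few lines. The following minimal Python sketch (illustrative names and toy signal; a real coder would entropy-code the residual indices, and delta = 0 recovers lossless operation) makes the key point explicit: the encoder predicts from reconstructed samples, exactly as the text requires, so the reconstruction error never exceeds delta:

        import numpy as np

        def near_lossless_encode(samples, delta):
            """Quantize prediction residuals; predict from *reconstructed*
            samples so quantization errors do not accumulate."""
            step = 2 * delta + 1            # quantizer step for max error delta
            indices, prev = [], 0
            for s in samples:
                q = int(np.round((s - prev) / step))
                indices.append(q)           # entropy-code these in practice
                prev = prev + q * step      # decoder-visible reconstruction
            return indices

        def near_lossless_decode(indices, delta):
            step = 2 * delta + 1
            out, prev = [], 0
            for q in indices:
                prev = prev + q * step
                out.append(prev)
            return out

        signal = np.cumsum(np.random.randint(-3, 4, size=50))   # toy ramp signal
        rec = near_lossless_decode(near_lossless_encode(signal, 2), 2)
        assert np.max(np.abs(np.array(rec) - signal)) <= 2      # bounded error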

  8. Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme

    PubMed Central

    Priya, R. Lakshmi; Sadasivam, V.

    2015-01-01

    Providing authentication and integrity for medical images is a challenge, and this work proposes a new blind, fragile, region-based, lossless, reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least-significant-bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI, as reported in the literature. Experiments were carried out to prove the performance of the scheme, and their assessment reveals that the ROI is extracted intact and that the PSNR values obtained indicate that the presented scheme offers greater protection for health imagery. PMID:26649328

  9. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting from n-channel multispectral data the k combinations of the n channels upon which to base a given classification technique, so that a measure of the loss in ability to distinguish between classes using the compressed k-dimensional data is minimized. The information loss in compressing the n-channel data to k channels is taken to be the difference between the average interclass divergences (or probability of misclassification) in n-space and in k-space.
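
    The abstract does not reproduce the divergence measure itself. For reference, the pairwise divergence conventionally used in this multispectral-classification setting (assuming Gaussian class statistics with means \mu_i and covariances \Sigma_i; this standard form is supplied here, not quoted from the report) is

        D_{ij} = \tfrac{1}{2}\operatorname{tr}\!\left[(\Sigma_i-\Sigma_j)(\Sigma_j^{-1}-\Sigma_i^{-1})\right]
               + \tfrac{1}{2}\operatorname{tr}\!\left[(\Sigma_i^{-1}+\Sigma_j^{-1})(\mu_i-\mu_j)(\mu_i-\mu_j)^{T}\right],

    with the average interclass divergence over m classes given by D_{avg} = \frac{2}{m(m-1)}\sum_{i<j} D_{ij}. The rank-k selection then seeks the channel combination whose average divergence, computed from the compressed k-dimensional data, stays closest to the full n-channel value.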

  10. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  11. Measurement of effective bulk and contact resistance of gas diffusion layer under inhomogeneous compression - Part I: Electrical conductivity

    NASA Astrophysics Data System (ADS)

    Vikram, Ajit; Chowdhury, Prabudhya Roy; Phillips, Ryan K.; Hoorfar, Mina

    2016-07-01

    This paper describes a measurement technique developed for the determination of the effective electrical bulk resistance of the gas diffusion layer (GDL) and the contact resistance distribution at the interface of the GDL and the bipolar plate (BPP). The novelty of this study is the measurement and separation of the bulk and contact resistance under the inhomogeneous compression that occurs in an actual fuel cell assembly due to the presence of the channels and ribs on the bipolar plates. The measurement of the electrical contact resistance, which contributes nearly two-thirds of the ohmic losses in the fuel cell assembly, shows a non-linear distribution along the GDL/BPP interface. The effective bulk resistance of the GDL under inhomogeneous compression showed a decrease of nearly 40% compared to that estimated for homogeneous compression at different compression pressures. Such a decrease in the effective bulk resistance under inhomogeneous compression could be due to the non-uniform distribution of pressure under the ribs and the channels. This measurement technique can be used to identify optimum GDL, BPP, and channel-rib structures based on the minimum bulk and contact resistances measured under inhomogeneous compression.

  12. The human genome contracts again.

    PubMed

    Pavlichin, Dmitri S; Weissman, Tsachy; Yona, Golan

    2013-09-01

    The number of human genomes that have been sequenced completely for different individuals has increased rapidly in recent years. Storing and transferring complete genomes between computers for use with various applications and analysis tools will soon become a major hurdle, hindering the analysis phase. Therefore, there is a growing need to compress these data efficiently. Here, we describe a technique to compress human genomes based on entropy coding, using a reference genome and known Single Nucleotide Polymorphisms (SNPs). Furthermore, we explore several intrinsic features of genomes and information in other genomic databases to further improve the compression attained. Using these methods, we compress James Watson's genome to 2.5 megabytes (MB), improving on recent work by 37%. Similar compression is obtained for most genomes available from the 1000 Genomes Project. Our biologically inspired techniques promise even greater gains for genomes of lower organisms and for human genomes as more genomic data become available. Code is available at sourceforge.net/projects/genomezip/
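
    A toy sketch of the reference-plus-variants idea (Python; zlib's deflate stands in for the SNP-aware entropy coder, and the gap/base serialization is purely illustrative, not the paper's format):

        import zlib

        def encode_against_reference(genome, reference):
            """Store only (gap to previous difference, substituted base)
            pairs; positions that agree with the reference cost nothing."""
            assert len(genome) == len(reference)
            diffs, last = [], -1
            for i, (g, r) in enumerate(zip(genome, reference)):
                if g != r:
                    diffs.append((i - last, g))
                    last = i
            payload = ";".join(f"{gap}{base}" for gap, base in diffs).encode()
            return zlib.compress(payload, 9)

        reference = "ACGT" * 25000          # toy 100 kb reference sequence
        genome = list(reference)
        genome[12345] = "T"                 # two toy substitutions
        genome[67890] = "A"
        blob = encode_against_reference("".join(genome), reference)
        print(len(blob), "bytes for a 100 kb sequence")

    Because most individuals differ from the reference at a tiny fraction of positions, the cost scales with the number of variants rather than the genome length, which is the intuition behind the megabyte-scale figures reported above.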

  13. Shear wave pulse compression for dynamic elastography using phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nguyen, Thu-Mai; Song, Shaozhen; Arnal, Bastien; Wong, Emily Y.; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew

    2014-01-01

    Assessing the biomechanical properties of soft tissue provides clinically valuable information to supplement conventional structural imaging. In previous studies, we introduced a dynamic elastography technique based on phase-sensitive optical coherence tomography (PhS-OCT) to characterize submillimetric structures such as skin layers or ocular tissues. Here, we propose to implement a pulse compression technique for shear wave elastography. We performed shear wave pulse compression in tissue-mimicking phantoms. A mechanical actuator generated broadband frequency-modulated vibrations (1 to 5 kHz), and the induced displacements were detected at an equivalent frame rate of 47 kHz using PhS-OCT. The recorded signal was digitally compressed to a broadband pulse. Stiffness maps were then reconstructed from spatially localized estimates of the local shear wave speed. We demonstrate that a simple pulse compression scheme can increase the shear wave detection signal-to-noise ratio (>12 dB gain) and reduce artifacts in reconstructed stiffness maps of heterogeneous media.
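
    Digital pulse compression of a frequency-modulated excitation is essentially matched filtering. The short Python sketch below mirrors the 1-5 kHz sweep and 47 kHz equivalent frame rate quoted above (the delay, amplitude, and noise level are made-up numbers for illustration only):

        import numpy as np
        from scipy.signal import chirp

        fs = 47_000                        # equivalent frame rate, Hz
        t = np.arange(0, 0.01, 1 / fs)     # 10 ms excitation window
        tx = chirp(t, f0=1_000, t1=t[-1], f1=5_000)   # 1-5 kHz sweep

        # Simulated received displacement: a delayed, attenuated echo in noise.
        delay = 0.003                      # 3 ms propagation delay (assumed)
        rx = np.zeros(2 * len(t))
        d0 = int(delay * fs)
        rx[d0:d0 + len(tx)] += 0.2 * tx
        rx += 0.05 * np.random.randn(len(rx))

        # Pulse compression = cross-correlation with the emitted chirp
        # (matched filtering); the correlation peak localizes the arrival.
        compressed = np.correlate(rx, tx, mode="valid")
        print("estimated delay: %.4f s" % (np.argmax(np.abs(compressed)) / fs))

    Spreading the energy over a long coded sweep and compressing it on receive is what underlies the reported >12 dB gain in detection signal-to-noise ratio.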

  14. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  15. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  16. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that omits the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.

  17. Techniques for information extraction from compressed GPS traces : final report.

    DOT National Transportation Integrated Search

    2015-12-31

    Developing techniques for extracting information requires a good understanding of the methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....

  18. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time-consuming scanning process. In the last decade, several compressive sensing (CS) techniques have been proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  19. Compressive residual strength of graphite/epoxy laminates after impact

    NASA Technical Reports Server (NTRS)

    Guy, Teresa A.; Lagace, Paul A.

    1992-01-01

    The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a [±45/0]_2S configuration. Three different impactor masses were used at various velocities and the resultant damage measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open hole strength for a hole of the same diameter as the impactor. Increases in velocity beyond the point of minimum strength cause a difference in the damage produced and a resultant increase in the compressive residual strength, which asymptotes to the open hole strength value. Furthermore, the results show that this minimum compressive residual strength value is independent of the impactor mass used and is only dependent upon the damage present in the impacted specimen, which is the same for the three impactor mass cases. A full 3-D representation of the damage is obtained through the various techniques; only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state of the art in predictive analysis capabilities shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.

  20. About a method for compressing x-ray computed microtomography data

    NASA Astrophysics Data System (ADS)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); images acquired from various types of samples are studied. This study covers parallel beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques, and does so by applying it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  1. Image processing using Gallium Arsenide (GaAs) technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.

    1989-01-01

    The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reasons for choosing this compression technique for the Multi-spectral Linear Array (MLA) instrument are described, together with a description of the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.

  2. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  3. Compressed air injection technique to standardize block injection pressures.

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
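
    The 50%-compression figure reported above follows directly from Boyle's law. A small sketch of the arithmetic (Python; assuming an absolute atmospheric pressure of 760 mmHg, with an illustrative function name):

        P_ATM = 760.0  # mmHg, absolute atmospheric pressure (assumed)

        def net_injection_pressure(compressed_fraction):
            """Boyle's law (P1*V1 = P2*V2) for the trapped air column:
            compressing the air to (1 - f) of its initial volume raises the
            absolute pressure to P_ATM / (1 - f); the net (gauge) pressure
            driving the injection is the excess over atmospheric."""
            return P_ATM / (1.0 - compressed_fraction) - P_ATM

        # 50% air compression -> 760 mmHg gauge, close to the 744.8 mmHg
        # regression estimate above and well below the 1293 mmHg threshold.
        print(net_injection_pressure(0.50))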

  4. Detection of rebars in concrete using advanced ultrasonic pulse compression techniques.

    PubMed

    Laureti, S; Ricci, M; Mohamed, M N I B; Senni, L; Davis, L A J; Hutchins, D A

    2018-04-01

    A pulse compression technique has been developed for the non-destructive testing of concrete samples. Scattering of signals from aggregate has historically been a problem in such measurements. Here, it is shown that a combination of piezocomposite transducers, pulse compression and post processing can lead to good images of a reinforcement bar at a cover depth of 55 mm. This has been achieved using a combination of wide bandwidth operation over the 150-450 kHz range, and processing based on measuring the cumulative energy scattered back to the receiver. Results are presented in the form of images of a 20 mm rebar embedded within a sample containing 10 mm aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

    Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals, with decomposition by filter banks or wavelet packet transformation, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances the quality.

  6. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated, and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
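
    A minimal sketch of the ROI idea (Python; zlib stands in for whatever lossless codec a production system would use, and the mask, quantization step, and function name are all illustrative):

        import numpy as np
        import zlib

        def compress_slide(image, tissue_mask, coarse_step=32):
            """Lossless on the tissue ROI, lossy elsewhere: background
            pixels are coarsely quantized before entropy coding so the
            empty glass regions compress away almost entirely."""
            roi = image[tissue_mask]                   # exact values kept
            background = image.copy()
            background[tissue_mask] = 0
            background = (background // coarse_step) * coarse_step   # lossy
            return (zlib.compress(roi.tobytes(), 9),
                    zlib.compress(background.tobytes(), 9),
                    zlib.compress(np.packbits(tissue_mask).tobytes(), 9))

        image = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
        mask = np.zeros_like(image, dtype=bool)
        mask[100:300, 150:350] = True                  # toy tissue region
        print([len(part) for part in compress_slide(image, mask)])

    Storing the mask alongside the two streams lets the decoder re-place the exact ROI pixels, so diagnostic regions round-trip losslessly while the background contributes almost nothing to the file size.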

  7. Full-field measurement of micromotion around a cementless femoral stem using micro-CT imaging and radiopaque markers.

    PubMed

    Malfroy Camine, V; Rüdiger, H A; Pioletti, D P; Terrier, A

    2016-12-08

    A good primary stability of cementless femoral stems is essential for the long-term success of total hip arthroplasty. Experimental measurement of implant micromotion with linear variable differential transformers (LVDTs) is commonly used to assess implant primary stability in pre-clinical testing, but these measurements are often limited to a few distinct points at the interface. New techniques based on micro-computed tomography (micro-CT) have recently been introduced, such as Digital Volume Correlation (DVC) or marker-based approaches. DVC is, however, limited to measurement around non-metallic implants due to metal-induced imaging artifacts, and marker-based techniques are confined to a small portion of the implant. In this paper, we present a technique based on micro-CT imaging and radiopaque markers to provide the first full-field micromotion measurement at the entire bone-implant interface of a cementless femoral stem implanted in a cadaveric femur. Micromotion was measured during compression and torsion. Over 300 simultaneous measurement points were obtained. Micromotion amplitude ranged from 0 to 24 µm in compression and from 0 to 49 µm in torsion. Peak micromotion was distal in compression and proximal in torsion. The technique bias was 5.1 µm and its repeatability standard deviation was 4 µm. The method was thus highly reliable and compared well with results obtained with LVDTs reported in the literature. These results indicate that this micro-CT based technique is well suited to observing local variations in primary stability around metallic implants. Possible applications include pre-clinical testing of implants and validation of patient-specific models for pre-operative planning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  9. Nonlinear compression of temporal solitons in an optical waveguide via inverse engineering

    NASA Astrophysics Data System (ADS)

    Paul, Koushik; Sarma, Amarendra K.

    2018-03-01

    We propose a novel method based on the so-called shortcut-to-adiabatic-passage techniques to achieve fast compression of temporal solitons in a nonlinear waveguide. We demonstrate that soliton compression could be achieved, in principle, at an arbitrarily small distance by inverse-engineering the pulse width and the nonlinearity of the medium. The proposed scheme could be exploited for various short-distance communication protocols, and perhaps even in nonlinear guided-wave optics devices and the generation of ultrashort soliton pulses.

  10. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  11. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
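
    A toy, parity-based sketch of the adjacent-index manipulation the patent describes (Python; this illustrates the general idea only and is not the patented method; the parity convention and names are invented):

        import numpy as np

        def embed_bits(indices, bits):
            """Hide bits in quantization indices by nudging an index to an
            adjacent value (one unit away, within the uncertainty the patent
            notes) whenever its parity disagrees with the hidden bit."""
            out = indices.copy()
            for i, bit in enumerate(bits):
                if int(out[i] & 1) != bit:
                    out[i] += 1 if out[i] >= 0 else -1   # direction is arbitrary
            return out

        def extract_bits(indices, n):
            return [int(idx & 1) for idx in indices[:n]]

        indices = np.random.randint(-50, 50, 100)   # stand-in transform indices
        message = [1, 0, 1, 1, 0, 0, 1, 0]
        stego = embed_bits(indices, message)
        assert extract_bits(stego, len(message)) == message

    Because each index moves by at most one unit, the perturbation stays within the quantizer's own uncertainty, which is why such embedding survives the subsequent entropy-coding stage unchanged.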

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  13. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L∞-oriented compression, or, at most, provide a very limited number of potential L∞ bit-stream truncation points. We propose a new multidimensional wavelet-based L∞-constrained scalable coding framework that generates a fully embedded L∞-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L∞ coding sense.

  14. FTIR-derived characteristics of fossil-gymnosperm leaf remains of Cordaites principalis and Cordaites borassifolius (Pennsylvanian, Maritimes Canada and Czech Republic)

    USGS Publications Warehouse

    Zodrow, E.L.; Mastalerz, Maria; Simunek, Z.

    2003-01-01

    Cordaites principalis and Cordaites borassifolius, gymnosperm trees of the Carboniferous, are distinguished based on compression and cuticular morphology. A new distinction between them is suggested on the basis of differences in functional groups. Cuticular and compression spectra of C. borassifolius have lower CH2/CH3 ratios, suggesting more branched aliphatic chains in comparison with cuticles and compressions of C. principalis. Other differences are observed with the Fourier transform infrared spectroscopy (FTIR) technique, but they vary from sample to sample within the two species, suggesting sources of variation other than chemotaxonomy. © 2003 Elsevier B.V. All rights reserved.

  15. Tibiofemoral Compression Force Differences Using Laxity- and Force-Based Initial Graft Tensioning Techniques in the ACL-Reconstructed Knee

    PubMed Central

    Fleming, Braden C.; Brady, Mark F.; Bradley, Michael P.; Banerjee, Rahul; Hulstyn, Michael J.; Fadale, Paul D.

    2008-01-01

    Purpose: To document the tibiofemoral (TF) compression forces produced during clinical initial graft tension protocols. Methods: An image analysis system was used to track the position of the tibia relative to the femur in 11 cadaver knees. TF compression forces were quantified using thin-film pressure sensors. Prior to performing ACL reconstructions with patellar tendon grafts, measurements of TF compression force were obtained from the ACL-intact knee across knee flexion. ACL reconstructions were then performed using "force-based" and "laxity-based" graft tension approaches. Within each approach, high- and low-tension conditions were compared to the ACL-intact condition over the range of knee flexion angles. Results: The TF compression forces for all initial graft tension conditions were significantly greater than those of the normal knee when the knee was in full extension (0°). The TF compression forces with the laxity-based approach were greater than those produced with the force-based approach; however, the laxity-based approach was necessary to restore normal laxity at the time of surgery. Conclusions: The initial graft tension conditions produce different TF compressive force profiles at the time of surgery. A compromise must be made between restoring knee laxity and restoring TF compressive forces when reconstructing the ACL with a patellar tendon graft. Clinical Relevance: The TF compression forces were greater in the ACL-reconstructed knee for all initial graft tension conditions when compared to the ACL-intact knee, and clinically relevant initial graft tension conditions produce different TF compressive forces. PMID:18760214

  16. Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.

    2017-06-01

    Multiple reverberation compression can achieve higher pressure and higher temperature but lower entropy. It can provide an important validation for elaborate, wider-ranging planetary models and can simulate the inertial confinement fusion capsule implosion process. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initially dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g cm⁻³, and 4600-24,000 K was measured. The optical radiation emanating from the WDM helium was recorded, and particle velocity profiles detected at the sample/window interface were obtained successfully for up to 10 compressions. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, at a density of about 0.3 g cm⁻³ and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed via the particle velocity measurements. The multiple compression technique efficiently enhances the density and the compressibility: our multiple compression ratios (η_i = ρ_i/ρ_0, i = 1-10) of helium are greatly improved, from 3.5 to 43, relative to the initial precompressed density ρ_0. The relative compression ratio (η_i' = ρ_i/ρ_{i-1}) increases with pressure in the lower-density regime and decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom increases the compressibility, while repulsive interactions between the particles decrease it at the onset of electron excitation and ionization. In the P-ρ-T contour combining the experiments and the calculations, our multiple compression states from insulating to semiconducting fluid (from transparent to opaque fluid) are illustrated. Our results give an elaborate validation of EOS models and have applications for planetary and stellar opaque atmospheres.

  17. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax, and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  18. Lossy compression of quality scores in genomic data.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2014-08-01

    Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores, and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insertion/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. Contact: rcanovas@student.unimelb.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query, and manipulate data that contains uncertainty receive increasing research interest. Such uncertain DBMSs (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  20. Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.

    PubMed

    Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U

    2010-06-01

    Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality, and fewer chest compressions compared to two-finger CPR in an infant manikin model. This crossover observational study randomized 34 healthcare providers to perform 2 min of CPR at a compression rate of 100 min⁻¹, using a 30:2 compression:ventilation ratio, comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth, compression pressure, and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test, with significance defined as p ≤ 0.05. Mean percent effective breaths were 90 ± 18.6% with two-thumb and 88.9 ± 21.1% with two-finger, p = 0.65. Mean time to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 s with two-thumb and 7.0 ± 1.5 s with two-finger, p < 0.0001. Mean delivered compressions per minute were 87 ± 11 with two-thumb and 92 ± 12 with two-finger, p = 0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR delivered 4 fewer compressions per minute, which may be offset by its far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image with a newer technique of dividing the image into tiles, then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
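
    The quantize-then-compress step can be sketched as follows (Python; zlib's deflate stands in for the Rice coder used by fpack, and the noise estimator and bit budget are simplified illustrations, not the paper's exact procedure):

        import numpy as np
        import zlib

        def quantize_and_pack(image, noise_bits_kept=2):
            """Quantize float pixels as scaled integers, preserving only a
            couple of bits of the noise, then compress losslessly."""
            # Rough noise estimate from median absolute row differences
            # (differencing suppresses smooth real structure).
            sigma = np.median(np.abs(np.diff(image, axis=1))) / (0.6745 * np.sqrt(2))
            step = sigma / 2 ** noise_bits_kept
            scaled = np.round(image / step).astype(np.int32)
            return zlib.compress(scaled.tobytes(), 6), step

        image = 100.0 + 5.0 * np.random.randn(512, 512)   # toy noisy float image
        blob, step = quantize_and_pack(image)
        print(image.nbytes / len(blob))   # typically a factor of 4 or more

    Discarding the incompressible low-order noise bits is what lifts floating-point images above the noise-imposed ceiling on lossless compression discussed above.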

  2. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires the development of efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictors, linear predictors, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling scheme for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over state-of-the-art compression methods.
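
    For reference, the distortion measures named above are conventionally defined as follows (x is the original signal, \hat{x} its reconstruction, N the number of samples; these standard definitions are supplied for the reader, not quoted from the paper):

        \mathrm{PRD} = 100\sqrt{\frac{\sum_{n=1}^{N}(x[n]-\hat{x}[n])^{2}}{\sum_{n=1}^{N}x[n]^{2}}},
        \qquad
        \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}(x[n]-\hat{x}[n])^{2}},
        \qquad
        \mathrm{PSNR} = 20\log_{10}\frac{x_{\max}}{\mathrm{RMSE}}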

  3. Enhancement of DRPE performance with a novel scheme based on new RAC: Principle, security analysis and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.

    2016-02-01

    The double random phase encryption (DRPE) method is a well-known all-optical architecture which has many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The compression and encryption technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well and at the same time enhances the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique can process video content and is standard-compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced, while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.

  4. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-04-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the "soft" SU8 bonding in comparison to the "hard" bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers.

  5. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators

    PubMed Central

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-01-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the “soft” SU8 bonding in comparison to the “hard” bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers. PMID:28522879

  6. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators.

    PubMed

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-04-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the "soft" SU8 bonding in comparison to the "hard" bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers.

  7. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here, the algorithm is modified to provide even better performance than the original. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than SPIHT and, in addition, reduces the number of bits in the stored or transmitted bit stream. The modified algorithm was applied to the compression of multichannel ECG data, together with a specific procedure, based on the modified algorithm, for more efficient compression of such data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, the proposed multichannel compression method can also be used efficiently to compress a single signal that is stored for a long time.

  8. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
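
    As a concrete illustration of the DEIM machinery underlying the method (the standard algorithm, not the paper's nested variant), the sketch below implements greedy interpolation-point selection from a precomputed orthonormal basis U; all names and sizes are illustrative.

      import numpy as np

      def deim_indices(U):
          """Greedy DEIM interpolation-point selection for a basis U (n x m)."""
          n, m = U.shape
          p = [int(np.argmax(np.abs(U[:, 0])))]
          for l in range(1, m):
              Ul = U[:, :l]
              c = np.linalg.solve(Ul[p, :], U[p, l])  # interpolate mode l at chosen points
              r = U[:, l] - Ul @ c                    # interpolation residual
              p.append(int(np.argmax(np.abs(r))))     # next point: largest residual entry
          return np.array(p)

      # A nonlinear term f(x) is then approximated as U @ solve(U[p, :], f(x)[p]),
      # so f only has to be evaluated at the m selected grid points.
      rng = np.random.default_rng(0)
      U, _ = np.linalg.qr(rng.standard_normal((200, 8)))
      print(deim_indices(U))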

  9. Compressing random microstructures via stochastic Wang tilings.

    PubMed

    Novák, Jan; Kučerová, Anna; Zeman, Jan

    2012-10-01

    This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing long-range orientation orders to be reproduced accurately in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.

  10. Potential digitization/compression techniques for Shuttle video

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B. H.

    1978-01-01

    The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.

  11. Monitoring and diagnosis of Alzheimer's disease using noninvasive compressive sensing EEG

    NASA Astrophysics Data System (ADS)

    Morabito, F. C.; Labate, D.; Morabito, G.; Palamara, I.; Szu, H.

    2013-05-01

    The majority of elderly with Alzheimer's Disease (AD) receive care at home from caregivers. In contrast to standard tethered clinical settings, wireless, real-time, body-area smartphone-based remote monitoring of the electroencephalogram (EEG) can be extremely advantageous for home care of those patients. Such wearable tools pave the way to personalized medicine, for example giving the opportunity to monitor the progression of the disease and the effect of drugs. By applying Compressive Sensing (CS) techniques it is in principle possible to overcome the difficulty raised by smartphones' spatial-temporal throughput rate bottleneck. Unfortunately, EEG and other physiological signals are often non-sparse. In this paper, it is instead shown that the EEG of AD patients actually becomes more compressible with the progression of the disease. The EEG of Mild Cognitive Impairment (MCI) subjects also shows a clear tendency toward enhanced compressibility. This feature favors the use of CS techniques and ultimately telemonitoring with wearable sensors.

  12. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

  13. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    DOE PAGES

    Fan, Ya Ju; Kamath, Chandrika

    2016-09-01

    The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
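
    A minimal example of the kind of sparse recovery algorithm compared in such studies is orthogonal matching pursuit (OMP). The sketch below, with illustrative dimensions, recovers a synthetic sparse signal from random compressed samples.

      import numpy as np

      def omp(A, y, k):
          """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ xs
          x = np.zeros(A.shape[1])
          x[support] = xs
          return x

      rng = np.random.default_rng(1)
      n, m, k = 64, 256, 5                      # 256-sample signal, 64 measurements
      A = rng.standard_normal((n, m)) / np.sqrt(n)
      x_true = np.zeros(m)
      x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
      x_hat = omp(A, A @ x_true, k)
      print(np.linalg.norm(x_hat - x_true))     # near zero for sufficiently sparse x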

  14. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Ya Ju; Kamath, Chandrika

    The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.

  15. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    NASA Astrophysics Data System (ADS)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer (the deep-net stage), and re-training classification based on the backpropagation (BP) algorithm (the fine-tuning stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.

  16. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To solve the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced by the combination of compressive sensing and FFT, and the security level of ghost imaging is improved, as assessed under ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.

  17. DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique

    PubMed Central

    Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with a reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNA-COMPACT is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536
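
    The paper's contextual models are considerably richer, but the core idea of context-based DNA coding can be illustrated by estimating the bits per base that an adaptive order-k model would spend; the function below is a simplified sketch, not the DNA-COMPACT algorithm.

      from collections import defaultdict
      from math import log2

      def context_bits_per_base(seq, k=3):
          """Estimate bits/base of seq under an adaptive order-k context model."""
          counts = defaultdict(lambda: {b: 1 for b in "ACGT"})  # Laplace-smoothed
          bits = 0.0
          for i in range(k, len(seq)):
              ctx, base = seq[i - k:i], seq[i]
              c = counts[ctx]
              bits -= log2(c[base] / sum(c.values()))  # ideal arithmetic-code cost
              c[base] += 1                             # adapt after coding
          return bits / (len(seq) - k)

      # Highly repetitive input codes far below the 2 bits/base of naive packing.
      print(context_bits_per_base("ACGT" * 2000))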

  18. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.

  19. Coronary angiogram video compression for remote browsing and archiving applications.

    PubMed

    Ouled Zaid, Azza; Fradj, Bilel Ben

    2010-12-01

    In this paper, we propose an H.264/AVC-based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics in equal-size slice regions to determine the regions with relevant information content, to be encoded using the H.264 coding paradigm. The other regions are compressed using fixed-block motion compensation and conventional hard-decision quantization. Experiments have shown that, at the same bitrate, this procedure reduces the H.264 coder computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling and the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  1. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS).

    PubMed

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths, based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures from the measurement data related to the internal condition of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique is capable of damage detection in an STS structure with a high level of accuracy and fewer required measurements, making it more convenient and efficient than the traditional UCT technique.

  2. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD deals with two security concepts: data encryption and data concealment. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093

  3. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    PubMed

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD deals with two security concepts: data encryption and data concealment. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  4. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context-dependent predictors, and the prospect of sub-bit-per-pixel compression that maintains spatial resolution (information preserving) are discussed.
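
    For readers unfamiliar with DPCM, a minimal closed-loop previous-sample predictor with uniform residual quantization conveys the core mechanism that the context-dependent predictors above refine; the step size and clipping are illustrative assumptions.

      import numpy as np

      def dpcm_encode(row, step=4):
          """Previous-pixel DPCM with uniform residual quantization (closed loop)."""
          pred, codes = 0, []
          for x in row:
              q = round((int(x) - pred) / step)              # quantized residual
              codes.append(q)
              pred = int(np.clip(pred + q * step, 0, 255))   # track the decoder state
          return codes

      def dpcm_decode(codes, step=4):
          pred, out = 0, []
          for q in codes:
              pred = int(np.clip(pred + q * step, 0, 255))
              out.append(pred)
          return np.array(out, dtype=np.uint8)

      row = np.array([100, 102, 105, 110, 120, 119, 118], dtype=np.uint8)
      print(dpcm_decode(dpcm_encode(row)))  # tracks the input to within ~step/2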

  5. CURRENT CONCEPTS AND TREATMENT OF PATELLOFEMORAL COMPRESSIVE ISSUES.

    PubMed

    Mullaney, Michael J; Fukunaga, Takumi

    2016-12-01

    Patellofemoral disorders, commonly encountered in sports and orthopedic rehabilitation settings, may result from dysfunction in patellofemoral joint compression. Osseous and soft tissue factors, as well as the mechanical interaction of the two, contribute to increased patellofemoral compression and pain. Treatment of patellofemoral compressive issues is based on identification of contributory impairments. Use of reliable tests and measures is essential in detecting impairments in hip flexor, quadriceps, iliotibial band, hamstrings, and gastrocnemius flexibility, as well as in joint mobility, myofascial restrictions, and proximal muscle weakness. Once relevant impairments are identified, a combination of manual techniques, instrument-assisted methods, and therapeutic exercises are used to address the impairments and promote functional improvements. The purpose of this clinical commentary is to describe the clinical presentation, contributory considerations, and interventions to address patellofemoral joint compressive issues.

  6. CURRENT CONCEPTS AND TREATMENT OF PATELLOFEMORAL COMPRESSIVE ISSUES

    PubMed Central

    Fukunaga, Takumi

    2016-01-01

    Patellofemoral disorders, commonly encountered in sports and orthopedic rehabilitation settings, may result from dysfunction in patellofemoral joint compression. Osseous and soft tissue factors, as well as the mechanical interaction of the two, contribute to increased patellofemoral compression and pain. Treatment of patellofemoral compressive issues is based on identification of contributory impairments. Use of reliable tests and measures is essential in detecting impairments in hip flexor, quadriceps, iliotibial band, hamstrings, and gastrocnemius flexibility, as well as in joint mobility, myofascial restrictions, and proximal muscle weakness. Once relevant impairments are identified, a combination of manual techniques, instrument-assisted methods, and therapeutic exercises are used to address the impairments and promote functional improvements. The purpose of this clinical commentary is to describe the clinical presentation, contributory considerations, and interventions to address patellofemoral joint compressive issues. PMID:27904792

  7. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.

  8. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
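
    Reading the H-transform as a recursively applied 2x2 Haar-like integer transform, one level can be sketched as below; the variable names and scaling convention are assumptions for illustration rather than the cited formulation.

      import numpy as np

      def h_transform_level(img):
          """One level of a 2x2 Haar-like integer decomposition (H-transform style)."""
          a = img[0::2, 0::2].astype(np.int64)
          b = img[0::2, 1::2].astype(np.int64)
          c = img[1::2, 0::2].astype(np.int64)
          d = img[1::2, 1::2].astype(np.int64)
          low = a + b + c + d   # smoothed sum band, recursed on at the next level
          h1 = a + b - c - d    # detail bands: integer arithmetic only, so the
          h2 = a - b + c - d    # lossless mode is exactly reversible
          h3 = a - b - c + d
          return low, h1, h2, h3

      img = np.arange(64, dtype=np.int64).reshape(8, 8)
      low, h1, h2, h3 = h_transform_level(img)
      assert np.array_equal((low + h1 + h2 + h3) // 4, img[0::2, 0::2])  # invertible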

  9. Review of Orbital Propellant Transfer Techniques and the Feasibility of a Thermal Bootstrap Propellant Transfer Concepts

    NASA Technical Reports Server (NTRS)

    Yoshikawa, H. H.; Madison, I. B.

    1971-01-01

    This study was performed in support of the NASA Task B-2 Study Plan for Space Basing. The nature of space-based operations implies that orbital transfer of propellant is a prime consideration. The intent of this report is (1) to report on the findings and recommendations of existing literature on space-based propellant transfer techniques, and (2) to determine possible alternatives to the recommended methods. The reviewed literature recommends, in general, the use of conventional liquid transfer techniques (i.e., pumping) in conjunction with an artificially induced gravitational field. An alternate concept that was studied, the Thermal Bootstrap Transfer Process, is based on the compression of a two-phase fluid with subsequent condensation to a liquid (vapor compression/condensation). This concept utilizes the intrinsic energy capacities of the tanks and propellant by exploiting temperature differentials and available energy differences. The results indicate the thermodynamic feasibility of the Thermal Bootstrap Transfer Process for a specific range of tank sizes, temperatures, fill-factors and receiver tank heat transfer coefficients.

  10. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures

    PubMed Central

    Jeon, Joonryong

    2017-01-01

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (the El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) was applied to implement the diverse logic needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and optimized it for the random El-Centro seismic waveform. All techniques developed in this study have been embedded in the system. The data compression technology-based IDAQ system proved able to acquire valid signals in a compressed size. PMID:28704945

  11. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures.

    PubMed

    Heo, Gwanghee; Jeon, Joonryong

    2017-07-12

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (the El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) was applied to implement the diverse logic needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and optimized it for the random El-Centro seismic waveform. All techniques developed in this study have been embedded in the system. The data compression technology-based IDAQ system proved able to acquire valid signals in a compressed size.

  12. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be implemented very easily in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.

  13. 4800 B/S speech compression techniques for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.

    1986-01-01

    This paper will discuss three 4800 bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self excited vocoder, are the most promising among a number of techniques being developed to possibly provide near-toll-quality speech compression while still keeping the bit-rate low enough for a power and bandwidth limited satellite service.

  14. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
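
    The AMBTC baseline that the method builds on is compact enough to state directly: each block reduces to a bitmap plus two reconstruction levels, the means of the pixels above and below the block mean. A minimal sketch:

      import numpy as np

      def ambtc_encode(block):
          """AMBTC: represent a block as (low mean, high mean, bitmap)."""
          mean = block.mean()
          bitmap = block >= mean
          hi = block[bitmap].mean()
          lo = block[~bitmap].mean() if (~bitmap).any() else hi
          return lo, hi, bitmap

      def ambtc_decode(lo, hi, bitmap):
          return np.where(bitmap, hi, lo)

      block = np.array([[2, 3, 200, 201],
                        [2, 2, 199, 202],
                        [3, 2, 198, 200],
                        [2, 3, 201, 199]], dtype=float)
      lo, hi, bm = ambtc_encode(block)
      print(ambtc_decode(lo, hi, bm))  # two-level approximation of the block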

  15. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals and is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72%, compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
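
    The joint-coding decision can be illustrated with a simple cost proxy: code a channel independently or as a residual against a reference channel, whichever looks cheaper to entropy-code. The sum-of-absolute-residuals proxy below is an illustrative assumption; the paper's rule uses the cross correlation of residual signals and the compression ratio.

      import numpy as np

      def joint_coding_choice(ch_ref, ch):
          """Choose independent vs. joint (difference) coding for channel ch."""
          intra_cost = np.abs(np.diff(ch, prepend=ch[0])).sum()  # simple intra predictor
          joint_cost = np.abs(ch - ch_ref).sum()                 # inter-channel residual
          if joint_cost < intra_cost:
              return "joint", ch - ch_ref
          return "independent", ch

      t = np.arange(1000)
      lead = np.sin(0.05 * t)
      follower = lead + 0.01 * np.random.default_rng(2).standard_normal(1000)
      mode, residual = joint_coding_choice(lead, follower)
      print(mode)  # "joint": the cross-channel residual is far cheaper to code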

  16. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  17. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and free of noise. This paper provides several techniques for lossless compression of text-type data and comparative results for multiple versus single compression, which help to identify the better compression output and to develop compression algorithms.
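
    The comparison described here is straightforward to reproduce with standard-library codecs; the harness below measures single versus double ("multi-") compression for three lossless coders on repetitive text. The sample text and codec choices are illustrative.

      import bz2
      import lzma
      import zlib

      text = ("Data compression is very necessary in business data processing. " * 200).encode()
      codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

      for name, fn in codecs.items():
          once = fn(text)
          twice = fn(once)  # multi-compression: recompress the compressed output
          print(f"{name:5s} single: {len(once):6d} bytes   double: {len(twice):6d} bytes")
      # Double compression rarely helps: the first pass removes most redundancy,
      # so the second pass typically adds container overhead instead of savings.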

  18. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  19. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.
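
    The golden-ratio Cartesian idea can be sketched as an ordering of phase-encode lines in which successive acquisitions advance by the golden fraction of the phase-encode extent, so that any contiguous temporal window covers k-space nearly uniformly. The function below is a simplified illustration, not the authors' exact trajectory.

      import numpy as np

      def golden_ratio_line_order(n_lines, n_acquisitions):
          """Order Cartesian phase-encode lines by golden-ratio increments."""
          golden = (np.sqrt(5) - 1) / 2                 # ~0.618
          pos = (np.arange(n_acquisitions) * golden) % 1.0
          return np.floor(pos * n_lines).astype(int)

      # Any sliding window of consecutive acquisitions is spread across k-space,
      # which suits retrospective window selection plus CS reconstruction.
      print(golden_ratio_line_order(256, 12))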

  20. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between the color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer to control color precisely. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  1. Improving throughput and user experience for information intensive websites by applying HTTP compression technique.

    PubMed

    Malla, Ratnakar

    2008-11-06

    HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of the GZIP compression technology built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50%, and typical transaction time is also reduced from 20 to 8 minutes, thus providing a better user experience.
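
    The claimed size reduction is easy to sanity-check with the same GZIP algorithm servers apply on the wire; the snippet below compresses repetitive HTML-like text (an illustrative stand-in for a real page).

      import gzip

      html = b"<tr><td>patient_id</td><td>visit_date</td><td>icd_code</td></tr>\n" * 500
      compressed = gzip.compress(html, compresslevel=6)  # typical server setting
      ratio = 100 * (1 - len(compressed) / len(html))
      print(f"{len(html)} -> {len(compressed)} bytes ({ratio:.0f}% smaller)")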

  2. Evaluation of a newly developed infant chest compression technique: A randomized crossover manikin trial.

    PubMed

    Smereka, Jacek; Bielski, Karol; Ladny, Jerzy R; Ruetzler, Kurt; Szarpak, Lukasz

    2017-04-01

    Providing adequate chest compression is essential during infant cardio-pulmonary resuscitation (CPR) but has been reported to be performed poorly. The "new 2-thumb technique" (nTTT), which consists of using 2 thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist, was recently introduced. Therefore, the aim of this study was to compare 3 chest compression techniques, namely the 2-finger technique (TFT), the 2-thumb technique (TTHT), and the nTTT, in a randomized infant-CPR manikin setting. A total of 73 paramedics with at least 1 year of clinical experience performed 3 CPR settings with a chest compression:ventilation ratio of 15:2, according to current guidelines. Chest compression was performed with 1 of the 3 chest compression techniques in a randomized sequence. Chest compression rate and depth, chest decompression, and adequate ventilation after chest compression served as outcome parameters. The chest compression depth was 29 (IQR, 28-29) mm in the TFT group, 42 (40-43) mm in the TTHT group, and 40 (39-40) mm in the nTTT group (TFT vs TTHT, P < 0.001; TFT vs nTTT, P < 0.001; TTHT vs nTTT, P < 0.01). The median compression rates with TFT, TTHT, and nTTT were 136 (IQR, 133-144)/min, 117 (115-121)/min, and 111 (109-113)/min, respectively. There was a statistically significant difference in the compression rate between TFT and TTHT (P < 0.001), TFT and nTTT (P < 0.001), as well as TTHT and nTTT (P < 0.001). Incorrect decompressions after chest compression were significantly more frequent in the TTHT group compared with the TFT (P < 0.001) and nTTT (P < 0.001) groups. The nTTT provides adequate chest compression depth and rate and was associated with adequate chest decompression and the possibility to adequately ventilate the infant manikin. Further clinical studies are necessary to confirm these initial findings.

  3. Hardware Implementation of 32-Bit High-Speed Direct Digital Frequency Synthesizer

    PubMed Central

    Ibrahim, Salah Hasan; Ali, Sawal Hamid Md.; Islam, Md. Shabiul

    2014-01-01

    The design and implementation of a high-speed direct digital frequency synthesizer are presented. A modified Brent-Kung parallel adder is combined with a pipelining technique to improve the speed of the system. A gated clock technique is proposed to reduce the number of registers in the phase accumulator design. The quarter-wave symmetry technique is used to store only one quarter of the sine wave. The ROM lookup table (LUT) is partitioned into three 4-bit sub-ROMs based on an angular decomposition technique and a trigonometric identity. Exploiting the advantages of sine-cosine symmetry together with XOR logic gates, one sub-ROM block can be removed from the design. These techniques compress the ROM to 368 bits, a ROM compression ratio of 534.2:1, with only two adders, two multipliers, and XOR gates, and a high frequency resolution of 0.029 Hz. These techniques make the direct digital frequency synthesizer an attractive candidate for wireless communication applications. PMID:24991635
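
    A behavioral model of the quarter-wave trick clarifies how three quarters of the sine ROM are eliminated: the two top phase bits select the quadrant, and the remaining bits index a quarter-wave table that is read backwards and/or negated as needed. The 10-bit phase width below is illustrative, not the paper's 32-bit design.

      import math

      QUARTER = [round(math.sin(math.pi / 2 * i / 256) * 32767) for i in range(256)]

      def sine_from_quarter_lut(phase):
          """Map a 10-bit phase to a sine sample using only a quarter-wave table."""
          quadrant, index = (phase >> 8) & 0b11, phase & 0xFF
          if quadrant in (1, 3):
              index = 255 - index                      # quadrants 2 and 4 read backwards
          sample = QUARTER[index]
          return -sample if quadrant >= 2 else sample  # lower half-wave is negated

      # Phase accumulator: output frequency = tuning_word * f_clk / 2**10 here.
      acc, tuning_word, samples = 0, 37, []
      for _ in range(1024):
          samples.append(sine_from_quarter_lut(acc))
          acc = (acc + tuning_word) & 0x3FF
      print(samples[:4])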

  4. On lossy transform compression of ECG signals with reference to deformation of their parameter values.

    PubMed

    Koski, Antti; Tossavainen, Timo; Juhola, Martti

    2004-01-01

    Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature compression efficacy has been investigated only in the context of how much known or developed methods reduced the storage required by compressed forms of original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques for other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered values of medical parameters (medical information) computed from signals. Since the methods are lossy, some information is lost due to the compression when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space, but the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.
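
    A conventional way to quantify such deformation alongside parameter checks is the percentage RMS difference (PRD) between original and reconstructed signals; the sketch below shows the computation on a synthetic trace (the metric is standard, but its use here is illustrative rather than the authors' exact protocol).

      import numpy as np

      def prd(original, reconstructed):
          """Percentage RMS difference, a common ECG compression distortion metric."""
          original = np.asarray(original, dtype=float)
          err = original - np.asarray(reconstructed, dtype=float)
          return 100.0 * np.sqrt((err ** 2).sum() / (original ** 2).sum())

      x = np.sin(np.linspace(0, 8 * np.pi, 1600))   # stand-in for a 400 Hz ECG trace
      x_hat = np.round(x * 64) / 64                 # crude stand-in for lossy coding
      print(f"PRD = {prd(x, x_hat):.2f}%")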

  5. Compressive sensing for efficient health monitoring and effective damage detection of structures

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul

    2017-02-01

    Real-world Structural Health Monitoring (SHM) systems consist of sensors in the scale of hundreds, each sensor generating extremely large amounts of data, often raising the issue of the cost associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSN). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS) - a very recent development - provides the means of accurately reproducing a signal from far fewer samples than Nyquist's theorem requires. CS achieves this by exploiting the sparsity of the signal. Through the reduced number of data samples, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results with two damage detection algorithms, showing that CS did not compromise the crucial information on structural damage during the compression process.

  6. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  7. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  8. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  9. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm that we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.

  10. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution for iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate O(1/k²). In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam, and cone-beam CT geometries. To achieve maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both image quality and convergence rate compared to the existing CS techniques.
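
    The workhorse referred to here, the fast iterative shrinkage-thresholding algorithm (FISTA), is short enough to sketch for the generic l1-regularized least-squares problem; the paper's Fourier-space weighting and TV term are omitted, so this is a structural illustration only.

      import numpy as np

      def fista_l1(A, y, lam, steps=200):
          """Generic FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (O(1/k^2) rate)."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = z = np.zeros(A.shape[1])
          t = 1.0
          for _ in range(steps):
              g = z - A.T @ (A @ z - y) / L                            # gradient step
              x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft-threshold
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + ((t - 1) / t_new) * (x_new - x)              # momentum step
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(3)
      A = rng.standard_normal((80, 200)) / np.sqrt(80)
      x0 = np.zeros(200)
      x0[:5] = 3.0
      print(np.round(fista_l1(A, A @ x0, lam=0.02)[:8], 2))  # recovers the 5 spikes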

  11. Energy-efficient sensing in wireless sensor networks using compressed sensing.

    PubMed

    Razzaque, Mohammad Abdur; Dobson, Simon

    2014-02-12

    Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios, and sensor motes. In light of the importance of sensing-level energy costs, especially for power-hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy-efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.

  12. Fully-coupled aeroelastic simulation with fluid compressibility — For application to vocal fold vibration

    PubMed Central

    Yang, Jubiao; Wang, Xingshi; Krane, Michael; Zhang, Lucy T.

    2017-01-01

    In this study, a fully-coupled fluid–structure interaction model is developed for studying dynamic interactions between compressible fluid and aeroelastic structures. The technique is built on the modified Immersed Finite Element Method (mIFEM), a robust numerical technique for simulating fluid–structure interactions that can handle high Reynolds number flows and large density disparities between the fluid and the solid. For accurate assessment of this intricate dynamic process between a compressible fluid, such as air, and aeroelastic structures, we included in the model the fluid compressibility in an isentropic process and a solid contact model. The accuracy of the compressible fluid solver is verified by examining acoustic wave propagation in a closed and an open duct, respectively. The fully-coupled fluid–structure interaction model is then used to simulate and analyze vocal fold vibrations using compressible air interacting with vocal folds that are represented as layered viscoelastic structures. Using a physiological geometric and parametric setup, we are able to obtain a self-sustained vocal fold vibration with a constant inflow pressure. Parametric studies are also performed to study the effects of lung pressure and vocal fold tissue stiffness on vocal fold vibration. All the case studies produce expected airflow behavior and a sustained vibration, which provides verification of, and confidence in, our future acoustical studies of the phonation process. PMID:29527067

  13. Ultrasound Elastography: Review of Techniques and Clinical Applications

    PubMed Central

    Sigrist, Rosa M.S.; Liau, Joy; Kaffas, Ahmed El; Chammas, Maria Cristina; Willmann, Juergen K.

    2017-01-01

    Elastography-based imaging techniques have received substantial attention in recent years for non-invasive assessment of tissue mechanical properties. These techniques take advantage of changed soft tissue elasticity in various pathologies to yield qualitative and quantitative information that can be used for diagnostic purposes. Measurements are acquired in specialized imaging modes that can detect tissue stiffness in response to an applied mechanical force (compression or shear wave). Ultrasound-based methods are of particular interest due to their many inherent advantages, such as wide availability (including at the bedside) and relatively low cost. Several ultrasound elastography techniques using different excitation methods have been developed. In general, these can be classified into strain imaging methods, which use internal or external compression stimuli, and shear wave imaging methods, which use ultrasound-generated traveling shear wave stimuli. While ultrasound elastography has shown promising results for non-invasive assessment of liver fibrosis, new applications in breast, thyroid, prostate, kidney, and lymph node imaging are emerging. Here, we review the basic principles, underlying physics, and limitations of ultrasound elastography and summarize its current clinical use and ongoing developments in various clinical applications. PMID:28435467

  14. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the

  15. Compressive sensing imaging through a drywall barrier at sub-THz and THz frequencies in transmission and reflection modes

    NASA Astrophysics Data System (ADS)

    Takan, Taylan; Özkan, Vedat A.; Idikut, Fırat; Yildirim, Ihsan Ozan; Şahin, Asaf B.; Altan, Hakan

    2014-10-01

    In this work, sub-terahertz imaging using Compressive Sensing (CS) techniques for targets placed behind a visibly opaque barrier is demonstrated both experimentally and theoretically. Using a multiplied Schottky-diode-based millimeter wave source working at 118 GHz, metal cutout targets were illuminated in both reflection and transmission configurations, with and without barriers made out of drywall. In both modes the image is spatially discretized using laser-machined, 10 × 10 pixel metal apertures to demonstrate the technique of compressive sensing. The images were collected by modulating the source and measuring the transmitted flux through the apertures using a Golay cell. Experimental results were compared to simulations of the expected transmission through the metal apertures. Image quality decreases as expected when going from the non-obscured transmission case to the obscured transmission case and finally to the obscured reflection case. However, in all instances the image is recovered from sampling below the Nyquist rate, which demonstrates that this technique is a viable option for Through-the-Wall Reflection Imaging (TWRI) applications.

  16. Engagement techniques and playing level impact the biomechanical demands on rugby forwards during machine-based scrummaging.

    PubMed

    Preatoni, Ezio; Stokes, Keith A; England, Michael E; Trewartha, Grant

    2015-04-01

    This cross-sectional study investigated the factors that may influence the physical loading on rugby forwards performing a scrum by studying the biomechanics of machine-based scrummaging under different engagement techniques and playing levels. 34 forward packs from six playing levels performed repetitions of five different types of engagement techniques against an instrumented scrum machine under realistic training conditions. Applied forces and body movements were recorded in three orthogonal directions. The modification of the engagement technique altered the load acting on players. These changes were in a similar direction and of similar magnitude irrespective of the playing level. Reducing the dynamics of the initial engagement through a fold-in procedure decreased the peak compression force, the peak downward force and the engagement speed in excess of 30%. For example, peak compression (horizontal) forces in the professional teams changed from 16.5 (baseline technique) to 8.6 kN (fold-in procedure). The fold-in technique also reduced the occurrence of combined high forces and head-trunk misalignment during the absorption of the impact, which was used as a measure of potential hazard, by more than 30%. Reducing the initial impact did not decrease the ability of the teams to produce sustained compression forces. De-emphasising the initial impact against the scrum machine decreased the mechanical stresses acting on forward players and may benefit players' welfare by reducing the hazard factors that may induce chronic degeneration of the spine. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. High-strain rate tensile characterization of graphite platelet reinforced vinyl ester based nanocomposites using split-Hopkinson pressure bar

    NASA Astrophysics Data System (ADS)

    Pramanik, Brahmananda

    The dynamic response of exfoliated graphite nanoplatelet (xGnP) reinforced and carboxyl-terminated butadiene nitrile (CTBN) toughened vinyl ester based nanocomposites is characterized under both dynamic tensile and compressive loading. Dynamic direct tensile tests are performed applying the reverse-impact Split Hopkinson Pressure Bar (SHPB) technique. The specimen geometry for the tensile test is parametrically optimized by Finite Element Analysis (FEA) using ANSYS Mechanical APDL. Uniform stress distribution within the specimen gage length has been verified using high-speed digital photography. The on-specimen strain gage installation is replaced by a non-contact Laser Occlusion Expansion Gage (LOEG) technique for infinitesimal dynamic tensile strain measurements. Due to the very low transmitted pulse signal, an alternative approach based on the incident pulse is applied for obtaining the stress-time history. Indirect tensile tests are also performed combining the conventional SHPB technique with the Brazilian disk test method for evaluating cylindrical disk specimens. The cylindrical disk specimen is held snugly between two concave end fixtures attached to the incident and transmission bars. Indirect tensile stress is estimated from the SHPB pulses, and diametrical transverse tensile strain is measured using the LOEG. Failure diagnosis using high-speed digital photography validates the viability of this indirect test method for characterizing the tensile properties of the candidate vinyl ester based nanocomposite system. The quasi-static indirect tensile response also agrees with previous investigations conducted using the traditional dog-bone specimen in quasi-static direct tensile tests. Investigation of both quasi-static and dynamic indirect tensile test responses shows the strain-rate effect on the tensile strength and energy-absorbing capacity of the candidate materials. Finally, conventional compressive SHPB tests are performed. It is observed that both the strength and the energy-absorbing capacity of these candidate material systems are distinctly lower under dynamic tension than under compressive loading. Nano-reinforcement appears to marginally improve these properties for pure vinyl ester under dynamic tension, although it is found to be detrimental under dynamic compression.

  18. Non-contact evaluation of milk-based products using air-coupled ultrasound

    NASA Astrophysics Data System (ADS)

    Meyer, S.; Hindle, S. A.; Sandoz, J.-P.; Gan, T. H.; Hutchins, D. A.

    2006-07-01

    An air-coupled ultrasonic technique has been developed and used to detect physicochemical changes of liquid beverages within a glass container. This made use of two wide-bandwidth capacitive transducers, combined with pulse-compression techniques. The use of a glass container to house samples enabled visual inspection, helping to verify the results of some of the ultrasonic measurements. The non-contact pulse-compression system was used to evaluate agglomeration processes in milk-based products. It is shown that the amplitude of the signal varied with time after the samples had been treated with lactic acid, thus promoting sample destabilization. Non-contact imaging was also performed to follow destabilization of samples by scanning in various directions across the container. The obtained ultrasonic images were also compared to those from a digital camera. Coagulation of skim milk with glucono-delta-lactone in this container could be monitored to within a pH precision of 0.15. This rapid, non-contact, and non-destructive technique has shown itself to be a feasible method for investigating the quality of milk-based beverages, and possibly other food products.
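
    Pulse compression of the kind used above is, at its core, matched filtering of the received waveform against the transmitted coded excitation, which trades a long low-amplitude pulse for a sharp high-SNR correlation peak. A minimal sketch with an assumed linear chirp and purely illustrative parameters:

    ```python
    import numpy as np

    fs, T = 1.0e6, 1.0e-3                 # sample rate (Hz) and chirp duration (s)
    t = np.arange(0, T, 1 / fs)
    f0, f1 = 50e3, 300e3                  # linear chirp swept from 50 kHz to 300 kHz
    chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

    # Simulated receive signal: a weak echo of the chirp buried in noise.
    rx = np.zeros(4096)
    delay = 1500
    rx[delay:delay + chirp.size] += 0.05 * chirp
    rx += 0.02 * np.random.default_rng(1).normal(size=rx.size)

    # Pulse compression = cross-correlation with the transmitted chirp.
    compressed = np.correlate(rx, chirp, mode="valid")
    print("estimated delay:", int(np.argmax(np.abs(compressed))))  # ~1500
    ```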

  19. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.

  1. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS)

    PubMed Central

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect underlying damage in a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal state of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique is capable of damage detection in an STS structure with a high level of accuracy and fewer required measurements, making it more convenient and efficient than the traditional UCT technique. PMID:29293593

  2. Apparatus and method for determining microscale interactions based on compressive sensors such as crystal structures

    DOEpatents

    McAdams, Harley; AlQuraishi, Mohammed

    2015-04-21

    Techniques for determining values for a metric of microscale interactions include determining a mesoscale metric for a plurality of mesoscale interaction types, wherein a value of the mesoscale metric for each mesoscale interaction type is based on a corresponding function of values of the microscale metric for the plurality of the microscale interaction types. A plurality of observations that indicate the values of the mesoscale metric are determined for the plurality of mesoscale interaction types. Values of the microscale metric are determined for the plurality of microscale interaction types based on the plurality of observations and the corresponding functions and compressed sensing.

  3. Non-US data compression and coding research. FASAC Technical Assessment Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, R.M.; Cohn, M.; Craver, L.W.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  4. The Coming of Digital Desktop Media.

    ERIC Educational Resources Information Center

    Galbreath, Jeremy

    1992-01-01

    Discusses the movement toward digital-based platforms including full-motion video for multimedia products. Hardware- and software-based compression techniques for digital data storage are considered, and a chart summarizes features of Digital Video Interactive, Moving Pictures Experts Group, P x 64, Joint Photographic Experts Group, Apple…

  5. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    DTIC Science & Technology

    2007-04-01

    Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression

  6. Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.

    2010-01-01

    This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.

  7. Shock temperature measurement of transparent materials under shock compression

    NASA Astrophysics Data System (ADS)

    Hu, Jinbiao

    1999-06-01

    Under shock compression, some materials have very small absorptance, so their emissivity is very small as well. For such materials, although they reach a high-temperature state under shock compression, the temperature cannot easily be detected by optical radiation techniques because of the low emissivity. In this paper, an optical radiation technique for measuring the temperature of materials with very low emissivity under shock compression is proposed. To validate the technique, the temperature of crystalline NaCl at a shock pressure of 41 GPa was measured. The result agrees very well with those of Kormer et al. and Ahrens et al., which shows that the technique is reliable and can be used to measure the shock temperature of low-emissivity materials.

  8. Compressive Sampling Based Interior Reconstruction for Dynamic Carbon Nanotube Micro-CT

    PubMed Central

    Yu, Hengyong; Cao, Guohua; Burk, Laurel; Lee, Yueh; Lu, Jianping; Santago, Pete; Zhou, Otto; Wang, Ge

    2010-01-01

    In the computed tomography (CT) field, one recent invention is the so-called carbon nanotube (CNT) based field emission x-ray technology. On the other hand, compressive sampling (CS) based interior tomography is a new innovation. Combining the strengths of these two novel subjects, we apply the interior tomography technique to local mouse cardiac imaging using respiration and cardiac gating with a CNT based micro-CT scanner. The major features of our method are: (1) it does not need exact prior knowledge inside an ROI; and (2) two orthogonal scout projections are employed to regularize the reconstruction. Both numerical simulations and in vivo mouse studies are performed to demonstrate the feasibility of our methodology. PMID:19923686

  9. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  10. An alternative noninvasive technique for the treatment of iatrogenic femoral pseudoaneurysms: stethoscope-guided compression.

    PubMed

    Korkmaz, Ahmet; Duyuler, Serkan; Kalayci, Süleyman; Türker, Pinar; Sahan, Ekrem; Maden, Orhan; Selçuk, Mehmet Timur

    2013-06-01

    Iatrogenic femoral pseudoaneurysm is a well-known vascular access site complication. Many invasive and noninvasive techniques have been proposed for the management of this relatively common complication. In this study, we aimed to evaluate the efficiency and safety of stethoscope-guided compression as a novel noninvasive technique in femoral pseudoaneurysm treatment. We prospectively included 29 consecutive patients with the diagnosis of femoral pseudoaneurysm who underwent coronary angiography. Patients with a clinical suspicion of femoral pseudoaneurysm were referred for colour Doppler ultrasound evaluation. The adult (large) side of the stethoscope was used to determine the location where the bruit was best heard. Then compression with the paediatric (small) side of the stethoscope was applied until the bruit could no longer be heard, and compression was maintained for at least two sessions. Once the bruit disappeared, a 12-hour bed rest with external elastic compression was advised to the patients, in order to prevent disintegration of the newly formed thrombus. Mean pseudoaneurysm size was 1.7 ± 0.4 cm × 3.0 ± 0.9 cm and the mean duration of compression was 36.2 ± 8.5 minutes. Twenty-six (89.6%) of these 29 patients were successfully treated with stethoscope-guided compression. In 18 patients (62%), the pseudoaneurysms were successfully closed after 2 sessions of 15-minute compression. No severe complication was observed. Stethoscope-guided compression of femoral pseudoaneurysms is a safe and effective novel technique which requires less equipment and expertise than other contemporary methods.

  11. Compressive sensing based wireless sensor for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Bao, Yuequan; Zou, Zilong; Li, Hui

    2014-03-01

    Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, given that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data loss recovery for real wireless SHM systems.
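
    The key property described above, that dropping transmitted samples merely deletes rows of the random projection, can be sketched as follows. The plain ISTA loop below is a generic stand-in for whatever CS solver the base station runs, and all names and sizes are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 512, 256
    x = np.zeros(n)
    x[rng.choice(n, 12, replace=False)] = rng.normal(size=12)   # sparse raw signal
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection on the node
    y = Phi @ x                                      # transmitted instead of x

    # Packet loss: ~20% of transmitted samples never arrive.  For the receiver
    # this simply deletes the corresponding rows of Phi -- no retransmission.
    keep = rng.random(m) > 0.2
    Phi_rx, y_rx = Phi[keep], y[keep]

    # Plain ISTA reconstruction from the surviving measurements.
    L = np.linalg.norm(Phi_rx, 2) ** 2
    x_hat, lam = np.zeros(n), 1e-3
    for _ in range(500):
        w = x_hat - Phi_rx.T @ (Phi_rx @ x_hat - y_rx) / L
        x_hat = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```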

  12. Data Compression Techniques for Advanced Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Bradley, William G.

    1998-01-01

    Advanced space transportation systems, including vehicle state-of-health systems, will produce large amounts of data which must be stored on board the vehicle and/or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data were reduced through data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area considered was launch vehicle state-of-health telemetry systems. Both lossless and lossy compression techniques were considered in this study.

  13. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  14. Holographic techniques for cellular fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Myung K.

    2017-04-01

    We have constructed a prototype instrument for holographic fluorescence microscopy (HFM) based on self-interference incoherent digital holography (SIDH) and demonstrate novel imaging capabilities such as differential 3D fluorescence microscopy and optical sectioning by compressive sensing.

  15. Real-Time SCADA Cyber Protection Using Compression Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyle G. Roybal; Gordon H Rueff

    2013-11-01

    The Department of Energy’s Office of Electricity Delivery and Energy Reliability (DOE-OE) has a critical mission to secure the energy infrastructure from cyber attack. Through DOE-OE’s Cybersecurity for Energy Delivery Systems (CEDS) program, the Idaho National Laboratory (INL) has developed a method to detect malicious traffic on Supervisory Control and Data Acquisition (SCADA) networks using a data compression technique. SCADA network traffic is often repetitive, with only minor differences between packets. Research performed at the INL showed that SCADA network traffic has traits desirable for using compression analysis to identify abnormal network traffic. An open-source implementation of a Lempel-Ziv-Welch (LZW) lossless data compression algorithm was used to compress and analyze surrogate SCADA traffic. Infected SCADA traffic was found to have statistically significant differences in compression when compared against normal SCADA traffic at the packet level. The initial analyses and results clearly identify malicious network traffic from normal traffic at the packet level with a very high confidence level across multiple ports and traffic streams. Statistical differentiation between infected and normal traffic was possible using a modified data compression technique at the 99% probability level for all data analyzed. However, the conditions tested were rather limited in scope and need to be expanded into more realistic simulations of hacking events using techniques and approaches that better represent a real-world attack on a SCADA system. Nonetheless, the use of compression techniques to identify malicious traffic on SCADA networks in real time appears to have significant merit for infrastructure protection.
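
    A toy version of the idea, flagging packets whose compressibility deviates from a learned baseline, can be sketched with a minimal LZW code counter. This illustrates the principle only and is not INL's implementation; the packet contents are invented:

    ```python
    import os

    def lzw_size(data: bytes) -> int:
        """Number of LZW codes emitted for `data` -- a proxy for compressed size."""
        table = {bytes([i]): i for i in range(256)}
        w, count = b"", 0
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc
            else:
                count += 1               # emit the code for w
                table[wc] = len(table)   # grow the dictionary
                w = bytes([byte])
        return count + (1 if w else 0)

    def ratio(d: bytes) -> float:
        return lzw_size(d) / len(d)

    # Repetitive "normal" SCADA-like polling vs. an injected random payload.
    normal = b"READ coil=13 val=0;" * 40
    infected = os.urandom(len(normal))
    print(f"normal {ratio(normal):.2f}  infected {ratio(infected):.2f}")
    # A packet whose ratio deviates far from the baseline would be flagged.
    ```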

  16. Evaluation of a newly developed infant chest compression technique

    PubMed Central

    Smereka, Jacek; Bielski, Karol; Ladny, Jerzy R.; Ruetzler, Kurt; Szarpak, Lukasz

    2017-01-01

    Abstract Background: Providing adequate chest compression is essential during infant cardio-pulmonary resuscitation (CPR) but has been reported to be performed poorly. The “new 2-thumb technique” (nTTT), which consists of using 2 thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist, was recently introduced. Therefore, the aim of this study was to compare 3 chest compression techniques, namely, the 2-finger technique (TFT), the 2-thumb technique (TTHT), and the nTTT, in a randomized infant-CPR manikin setting. Methods: A total of 73 paramedics with at least 1 year of clinical experience performed 3 CPR settings with a chest compression:ventilation ratio of 15:2, according to current guidelines. Chest compression was performed with 1 of the 3 chest compression techniques in a randomized sequence. Chest compression rate and depth, chest decompression, and adequate ventilation after chest compression served as outcome parameters. Results: The chest compression depth was 29 (IQR, 28–29) mm in the TFT group, 42 (40–43) mm in the TTHT group, and 40 (39–40) mm in the nTTT group (TFT vs TTHT, P < 0.001; TFT vs nTTT, P < 0.001; TTHT vs nTTT, P < 0.01). The median compression rates with TFT, TTHT, and nTTT were 136 (IQR, 133–144) min⁻¹, 117 (115–121) min⁻¹, and 111 (109–113) min⁻¹, respectively. There was a statistically significant difference in the compression rate between TFT and TTHT (P < 0.001), TFT and nTTT (P < 0.001), as well as TTHT and nTTT (P < 0.001). Incorrect decompressions after chest compression were significantly increased in the TTHT group compared with the TFT (P < 0.001) and the nTTT (P < 0.001) groups. Conclusions: The nTTT provides adequate chest compression depth and rate and was associated with adequate chest decompression and the possibility to adequately ventilate the infant manikin. Further clinical studies are necessary to confirm these initial findings. PMID:28383397

  17. A method for compression of intra-cortically-recorded neural signals dedicated to implantable brain-machine interfaces.

    PubMed

    Shaeri, Mohammad Ali; Sodagar, Amir M

    2015-05-01

    This paper proposes an efficient data compression technique dedicated to implantable intra-cortical neural recording devices. The proposed technique benefits from processing neural signals in the Discrete Haar Wavelet Transform space, a new spike extraction approach, and a novel data framing scheme to telemeter the recorded neural information to the outside world. Based on the proposed technique, a 64-channel neural signal processor was designed and prototyped as part of a wireless implantable extra-cellular neural recording microsystem. Designed in a 0.13-μm standard CMOS process, the 64-channel neural signal processor occupies ∼0.206 mm² of silicon area and consumes 94.18 μW when operating from a 1.2-V supply at a master clock frequency of 1.28 MHz.
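
    The Haar-domain compression step can be illustrated with a one-level orthonormal Haar transform plus coefficient thresholding on synthetic data; the actual device uses a multi-level transform, spike extraction, and a framing scheme, so this sketch shows only the underlying principle:

    ```python
    import numpy as np

    def haar_dwt(x):
        """One level of the orthonormal discrete Haar wavelet transform."""
        p = x.reshape(-1, 2)
        return np.concatenate([p[:, 0] + p[:, 1], p[:, 0] - p[:, 1]]) / np.sqrt(2)

    def haar_idwt(c):
        """Inverse of haar_dwt."""
        h = c.size // 2
        a, d = c[:h], c[h:]
        out = np.empty(c.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        return out

    rng = np.random.default_rng(3)
    signal = rng.normal(0, 0.05, 1024)            # background neural noise
    signal[500:508] += np.hanning(8) * 2.0        # a spike-like transient

    coeffs = haar_dwt(signal)
    k = 64                                        # keep the 64 largest coefficients
    thresh = np.sort(np.abs(coeffs))[-k]
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)   # 16:1 coefficient cut
    err = signal - haar_idwt(kept)
    print("SNR (dB):", 10 * np.log10(np.sum(signal**2) / np.sum(err**2)))
    ```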

  18. Influence of acquisition frame-rate and video compression techniques on pulse-rate variability estimation from vPPG signal.

    PubMed

    Cerina, Luca; Iozzia, Luca; Mainardi, Luca

    2017-11-14

    In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effects on PRV parameter estimation of different video acquisition frame-rates, from 60 frames per second (fps) down to 7.5 fps, and of different video compression techniques using both lossless and lossy codecs. Video recordings were acquired with an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method, based on the Kanade-Lucas-Tomasi (KLT) algorithm for region-of-interest (ROI) detection and tracking in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence into a pulsatile signal. Frame-rate degradation was simulated on the video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; FFV1 was used as the lossless codec and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics, and local peak displacements, which degrades performance across all PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. Such degradation can be partially mitigated by up-sampling the measured signal to a higher frequency (namely 60 Hz). Concerning video compression, the results showed that compression techniques are suitable for the storage of vPPG recordings, although lossless or intra-frame compression is to be preferred over inter-frame compression methods. FFV1 performance is very close to that of the uncompressed (UNC) version at less than 45% of the disk size. H.264 showed a degradation of the PRV estimation directly correlated with the increase of the compression ratio.

  19. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications that require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). In the literature, several encryption and compression techniques can be found. However, to use optical correlation, encryption and compression techniques cannot be deployed independently and in a cascade manner, for two major reasons. First, we cannot simply cascade these techniques without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists in multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations have been conducted, and the obtained results corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantification technique to improve the overall compression rate.

  20. Compressed Air System Optimization: Case Study Food Industry in Indonesia

    NASA Astrophysics Data System (ADS)

    Widayati, Endang; Nuzahar, Hasril

    2016-01-01

    Compressors and compressed air systems are among the most important utilities in industrial plants. Approximately 10% of the electricity cost in industry is used to produce compressed air, so the potential for energy savings in compressors and compressed air systems is substantial. This study was conducted in the Indonesian food industry. Compressed air system optimization is a systematic approach to determining the optimal operating conditions of compressors and compressed air systems; it includes evaluating energy needs, adjusting supply, eliminating or reconfiguring inefficient uses and operation, changing or supplementing equipment, and improving operating efficiency. This approach can have a significant impact on energy use and costs. The potential savings identified in this study through measurement and optimization include: lowering the system pressure from 7.5 barg to 6.8 barg, which would reduce energy consumption and running costs by approximately 4.2%; switching off the GA110 and GA75 compressors, yielding annual savings of USD 52,947 (≈455,714 kWh); running the GA75 at light load or unloaded, yielding annual savings of USD 31,841 (≈270,685 kWh); and installing two new 132 kW compressors and one 132 kW VSD compressor, yielding annual savings of USD 108,325 (≈928,500 kWh). Further work should include an investment-grade audit of the technical energy-saving potential and a cost-benefit analysis. This study illustrates best-practice solutions for saving energy and improving energy performance in compressors and compressed air systems.

  1. Study on the key technology of optical encryption based on compressive ghost imaging with double random-phase encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei

    2015-12-01

    An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE; the DRPE-coded image is then encrypted by a computational ghost imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they intercept 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information is the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI at the same number of samples. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.
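
    The DRPE stage by itself is easy to sketch: multiply the image by one random phase mask, take a Fourier transform, apply a second mask, and invert. The CGI sampling and key-sharing layers of the paper are omitted here, and the image and masks are toy stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0                       # toy plaintext image

    # Two independent random phase masks (the "double" in DRPE):
    # one in the input plane, one in the Fourier plane.
    p1 = np.exp(2j * np.pi * rng.random(img.shape))
    p2 = np.exp(2j * np.pi * rng.random(img.shape))

    cipher = np.fft.ifft2(np.fft.fft2(img * p1) * p2)        # DRPE encryption
    # Decryption: peel the masks off in reverse order with their conjugates.
    recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(p2)) * np.conj(p1)
    print(np.allclose(recovered.real, img, atol=1e-10))      # True
    ```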

  2. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  3. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.

  4. Failure strengths of denture teeth fabricated on injection molded or compression molded denture base resins.

    PubMed

    Robison, Nathan E; Tantbirojn, Daranee; Versluis, Antheunis; Cagna, David R

    2016-08-01

    Denture tooth fracture or debonding remains a common problem in removable prosthodontics. The purpose of this in vitro study was to explore factors determining failure strengths for combinations of different denture tooth designs (shape, materials) and injection or compression molded denture base resins. Three central incisor denture tooth designs were tested: nanohybrid composite (NHC; Ivoclar Phonares II), interpenetrating network (IPN; Dentsply Portrait), and microfiller reinforced polyacrylic (MRP; VITA Physiodens). Denture teeth of each type were processed on an injection molded resin (IvoBase HI; Ivoclar Vivadent AG) or a compression molded resin (Lucitone 199; Dentsply Intl) (n=11 or 12). The denture teeth were loaded at 45 degrees on the incisal edge. The failure load was recorded and analyzed with 2-way ANOVA (α=.05), and the fracture mode was categorized from observed fracture surfaces as cohesive, adhesive, or mixed failure. The following failure loads (mean ±SD) were recorded: NHC/injection molded 280 ±52 N; IPN/injection molded 331 ±41 N; MRP/injection molded 247 ±23 N; NHC/compression molded 204 ±31 N; IPN/compression molded 184 ±17 N; MRP/compression molded 201 ±16 N. Injection molded resin yielded significantly higher failure strength for all denture teeth (P<.001), among which IPN had the highest strength. Failure was predominantly cohesive in the teeth, with the exception of mixed mode for the IPN/compression group. When good bonding was achieved, the strength of the structure (denture tooth/base resin combination) was determined by the strength of the denture teeth, which may be affected by the processing technique. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  5. An iterative forward analysis technique to determine the equation of state of dynamically compressed materials

    DOE PAGES

    Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...

    2017-05-18

    Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.

  6. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding on the input data cube, which provides higher flexibility on the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer pitch size thin films, referred to as pixelated filters, with three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  7. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  8. Compressed-Sensing Reconstruction Based on Block Sparse Bayesian Learning in Bearing-Condition Monitoring

    PubMed Central

    Sun, Jiedi; Yu, Yang; Wen, Jiangtao

    2017-01-01

    Remote monitoring of bearing conditions, using wireless sensor networks (WSN), is a developing trend in the industrial field. In complicated industrial environments, WSN face three main constraints: low energy, limited memory, and low operational capability. Conventional data-compression methods, which concentrate on data compression only, cannot overcome these limitations. Aiming at these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on Compressed Sensing (CS), a novel signal-processing technique, and applies it to bearing-condition monitoring via WSN. The compressed data acquisition is realized by a projection transformation and can greatly reduce the data volume that the nodes need to process and transmit. The reconstruction of the original signals is achieved in the host computer by more complex algorithms. The bearing vibration signals not only exhibit the sparsity property, but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which works by utilizing the block property and inherent structures of signals to reconstruct the CS sparsity coefficients of transform domains and further recover the original signals. By using BSBL, CS reconstruction can be improved remarkably. Experiments and analyses showed that the BSBL method has good performance and is suitable for practical bearing-condition monitoring. PMID:28635623

  9. Maltodextrin: a novel excipient used in sugar-based orally disintegrating tablets and phase transition process.

    PubMed

    Elnaggar, Yosra Shaaban R; El-Massik, Magda A; Abdallah, Ossama Y; Ebian, Abd Elazim R

    2010-06-01

    The recent challenge in orally disintegrating tablet (ODT) manufacturing encompasses the compromise between instantaneous disintegration, sufficient hardness, and standard processing equipment. The current investigation constitutes one attempt to fulfill this challenge. Maltodextrin, in the present work, was utilized as a novel excipient to prepare ODT of meclizine. Tablets were prepared by both direct compression and wet granulation techniques. The effect of maltodextrin concentration on ODT characteristics--manifested as hardness and disintegration time--was studied. The effect of conditioning (40°C and 75% relative humidity) as a post-compression treatment on ODT characteristics was also assessed. Furthermore, maltodextrin's pronounced hardening effect was investigated using differential scanning calorimetry (DSC) and X-ray analysis. Results revealed that in both techniques, rapid disintegration (30-40 s) would be achieved at the cost of tablet hardness (about 1 kg). Post-compression conditioning of tablets resulted in an increase in hardness (3 kg), while keeping rapid disintegration (30-40 s) according to FDA guidance for ODT. However, the direct compression-conditioning technique exhibited the drawbacks of a long conditioning time and the appearance of the so-called patch effect. These problems were, however, absent in the wet granulation-conditioning technique. DSC and X-ray analysis suggested involvement of glass-elastic deformation in the maltodextrin hardening effect. High-performance liquid chromatography analysis of meclizine ODT suggested no degradation of the drug under the applied conditions of temperature and humidity. Overall, the results propose that maltodextrin is a promising saccharide for production of ODT with an accepted hardness-disintegration time compromise, utilizing standard processing equipment and the phenomenon of phase transition.

  10. Multidomain approach for calculating compressible flows

    NASA Technical Reports Server (NTRS)

    Cambier, L.; Chazzi, W.; Veuillot, J. P.; Viviand, H.

    1982-01-01

    A multidomain approach for calculating compressible flows by using unsteady or pseudo-unsteady methods is presented. This approach is based on a general technique of connecting together two domains in which hyperbolic systems (that may differ) are solved with the aid of compatibility relations associated with these systems. Some examples of this approach's application to calculating transonic flows in ideal fluids are shown, particularly the adjustment of shock waves. The approach is then applied to treating a shock/boundary layer interaction problem in a transonic channel.

  11. Time-Reversal Based Range Extension technique for Ultra-wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2010-01-28

    has to rely on a uni-polar sequence whose autocorrelation is typically less sharp than that of a bi-polar sequence. Optical orthogonal code (OOC...detection in multipath environments," in Proc. IEEE ICC'03, vol. 5, pp. 3530-3534, May 2003. [11] M. Weisenhorn and W. Hirt, "Robust Noncoherent Receiver...M. Duarte, D. Baron, S. Sarvotham, K. Kelly, and R. Baraniuk, "A New Compressive Imaging Camera Architecture using Optical-Domain Compression," in

  12. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of contiguous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which strains the transmission, processing, and storage resources of both airborne and spaceborne systems. Because of the high volume of hyperspectral image data, the exploration of compression strategies has received much attention in recent years; compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but significantly degrade the object-identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information in the acquired hyperspectral data cube. The proposed method consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix between the different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
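
    An illustrative sketch, under assumed cube sizes and an assumed 0.9 threshold, of the first step described above: split the cube into subspaces of adjacent, highly correlated spectral bands using the band-to-band correlation matrix.

        import numpy as np

        rng = np.random.default_rng(1)
        bands, h, w = 32, 64, 64
        cube = rng.random((bands, h, w)).reshape(bands, -1)   # one row per band

        corr = np.corrcoef(cube)                 # bands x bands correlation matrix
        groups, start = [], 0
        for b in range(1, bands):
            if corr[b, b - 1] < 0.9:             # weak link to the previous band
                groups.append(list(range(start, b)))
                start = b
        groups.append(list(range(start, bands)))
        # Each subspace would then be wavelet-transformed, with PCA applied to the coefficients.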

  13. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
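
    A minimal sketch of the client-side step described above: the data chunk is compressed before being handed to the storage node. The payload is a stand-in; the real system operates on burst-buffer I/O, not byte strings.

        import zlib

        chunk = b"simulation output " * 4096
        compressed = zlib.compress(chunk, level=6)
        print(f"{len(chunk)} bytes -> {len(compressed)} bytes")

        # On read, the client reverses the step transparently:
        assert zlib.decompress(compressed) == chunk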

  14. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
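
    A simplified sketch in the spirit of WAH (32-bit words, 31 payload bits): runs of identical all-0 or all-1 groups collapse into a single "fill" word, while mixed groups are stored verbatim as "literal" words. The real method additionally defines decoding and logical operations directly on the compressed words.

        def wah_encode(bits):
            """bits: a '0'/'1' string. Returns a list of 32-bit code words."""
            groups = [bits[i:i + 31].ljust(31, "0") for i in range(0, len(bits), 31)]
            words, i = [], 0
            while i < len(groups):
                val = int(groups[i], 2)
                if val in (0, 0x7FFFFFFF):        # homogeneous group -> fill word
                    run = 1
                    while i + run < len(groups) and groups[i + run] == groups[i]:
                        run += 1
                    words.append(0x80000000 | ((val & 1) << 30) | run)
                    i += run
                else:                             # mixed group -> literal word (MSB = 0)
                    words.append(val)
                    i += 1
            return words

        print([hex(w) for w in wah_encode("0" * 93 + "10110" + "1" * 57)])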

  15. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ...resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved...using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image

  16. The Hugoniot and chemistry of ablator plastic below 100 GPa

    DOE PAGES

    Akin, M. C.; Fratanduono, D. E.; Chau, R.

    2016-01-25

    The equation of state of glow discharge polymer (GDP) was measured to high precision using the two-stage light gas gun at Lawrence Livermore National Laboratory at pressures up to 70 GPa. Both absolute measurements and impedance matching techniques were used to determine the principal and secondary Hugoniots. GDP likely reacts at about 30 GPa, demonstrated by specific emission at 450 nm coupled with changes to the Hugoniot and reshock points. As a result of these reactions, the shock pressure in GDP evolves in time, leading to a possible decrease in pressure as compression increases, or negative compressibility, and causing complex pressure profiles within the plastic. Velocity wave profile variation was observed as a function of position on each shot, suggesting some internal variation of GDP may be present, which would be consistent with previous observations. The complex temporal and possibly structural evolution of GDP under shock compression suggests that calculations of compression and pressure based upon bulk or mean measurements may lead to artificially low pressures and high compressions. Evidence for this includes a large shift in calculating reshock pressures based on the reflected Hugoniot. In conclusion, these changes also suggest other degradation mechanisms for inertial confinement fusion implosions.

  17. An assessment of computational fluid dynamic techniques in the analysis and design of turbomachinery - The 1990 Freeman Scholar Lecture

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.

    1991-01-01

    Various computational fluid dynamic techniques are reviewed focusing on the Euler and Navier-Stokes solvers with a brief assessment of boundary layer solutions, and quasi-3D and quasi-viscous techniques. Particular attention is given to a pressure-based method, explicit and implicit time marching techniques, a pseudocompressibility technique for incompressible flow, and zonal techniques. Recommendations are presented with regard to the most appropriate technique for various flow regimes and types of turbomachinery, incompressible and compressible flows, cascades, rotors, stators, liquid-handling, and gas-handling turbomachinery.

  18. Intelligent transportation systems data compression using wavelet decomposition technique.

    DOT National Transportation Integrated Search

    2009-12-01

    Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission, and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing....

  19. A variational principle for compressible fluid mechanics. Discussion of the one-dimensional theory

    NASA Technical Reports Server (NTRS)

    Prozan, R. J.

    1982-01-01

    The second law of thermodynamics is used as a variational statement to derive a numerical procedure to satisfy the governing equations of motion. The procedure, based on numerical experimentation, appears to be stable provided the CFL condition is satisfied. This stability is manifested no matter how severe the gradients (compression or expansion) are in the flow field. For reasons of simplicity only one dimensional inviscid compressible unsteady flow is discussed here; however, the concepts and techniques are not restricted to one dimension nor are they restricted to inviscid non-reacting flow. The solution here is explicit in time. Further study is required to determine the impact of the variational principle on implicit algorithms.
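
    A small sketch of the stability constraint noted above: an explicit scheme for 1D compressible flow stays stable only while the time step respects the CFL condition dt <= CFL * dx / max(|u| + c). The state values below are illustrative, not from the report.

        import numpy as np

        gamma, dx, cfl = 1.4, 0.01, 0.9
        u   = np.array([0.0, 50.0, 120.0])       # local velocities, m/s
        p   = np.array([1.0e5, 1.1e5, 0.9e5])    # pressures, Pa
        rho = np.array([1.20, 1.25, 1.10])       # densities, kg/m^3

        c = np.sqrt(gamma * p / rho)             # local sound speeds
        dt = cfl * dx / np.max(np.abs(u) + c)
        print(f"largest stable explicit time step: {dt:.3e} s")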

  20. Applications of data compression techniques in modal analysis for on-orbit system identification

    NASA Technical Reports Server (NTRS)

    Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim

    1992-01-01

    Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.

  1. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
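
    An illustrative sketch of the mechanism described above: each 8x8 block is DCT-transformed and every coefficient is divided by an entry of a quantization matrix Q. The JPEG luminance table below merely stands in for the perceptually adapted matrix the patent constructs per image.

        import numpy as np
        from scipy.fft import dctn, idctn

        Q = np.array([[16, 11, 10, 16,  24,  40,  51,  61],
                      [12, 12, 14, 19,  26,  58,  60,  55],
                      [14, 13, 16, 24,  40,  57,  69,  56],
                      [14, 17, 22, 29,  51,  87,  80,  62],
                      [18, 22, 37, 56,  68, 109, 103,  77],
                      [24, 35, 55, 64,  81, 104, 113,  92],
                      [49, 64, 78, 87, 103, 121, 120, 101],
                      [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

        rng = np.random.default_rng(2)
        block = rng.integers(0, 256, (8, 8)).astype(float) - 128.0
        coeffs = dctn(block, norm="ortho")
        quantized = np.round(coeffs / Q)       # the lossy step; Q sets quality and bit rate
        restored = idctn(quantized * Q, norm="ortho")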

  2. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  3. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term technology for implementing video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

  4. Convergent Technologies in Distance Learning Delivery.

    ERIC Educational Resources Information Center

    Wheeler, Steve

    1999-01-01

    Describes developments in British education in distance learning technologies. Highlights include networking the rural areas; communication, community, and paradigm shifts; digital compression techniques and telematics; Web-based material delivered over the Internet; system flexibility; social support; learning support; videoconferencing; and…

  5. Perspectives of SiC-Based Ceramic Composites and Their Applications to Fusion Reactors 5. Development of Evaluation and Application Techniques of SiC/SiC Composites for Fusion Reactors

    NASA Astrophysics Data System (ADS)

    Hinoki, Tatsuya

    Evaluation techniques and mechanical properties of silicon carbide composites (SiC/SiC composites) reinforced with highly crystalline fibers are reviewed for fusion applications. The SiC/SiC composites used were fabricated by means of the CVI method. The evaluation includes in-plane tensile strength by the in-plane tensile test, transthickness tensile strength by the transthickness tensile test and the diametral compression test, and shear strength by the compression test using a double-notched specimen. All tests were successfully conducted using small specimens suitable for neutron irradiation experiments. As an application technique, a novel tungsten (W) coating technique on SiC is reviewed. The W powder was melted by a high-power lamp in a few seconds and formed a coating on SiC. No thick reaction layers of WC and W5Si3, which are formed by other coating methods, were formed by this method.

  6. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.

  7. A manual carotid compression technique to overcome difficult filter protection device retrieval during carotid artery stenting.

    PubMed

    Nii, Kouhei; Nakai, Kanji; Tsutsumi, Masanori; Aikawa, Hiroshi; Iko, Minoru; Sakamoto, Kimiya; Mitsutake, Takafumi; Eto, Ayumu; Hanada, Hayatsura; Kazekawa, Kiyoshi

    2015-01-01

    We investigated the incidence of embolic protection device retrieval difficulties at carotid artery stenting (CAS) with a closed-cell stent and demonstrated the usefulness of a manual carotid compression assist technique. Between July 2010 and October 2013, we performed 156 CAS procedures using self-expandable closed-cell stents. All procedures were performed with the aid of a filter design embolic protection device. We used FilterWire EZ in 118 procedures and SpiderFX in 38 procedures. The embolic protection device was usually retrieved by the accessory retrieval sheath after CAS. We applied a manual carotid compression technique when it was difficult to navigate the retrieval sheath through the deployed stent. We compared clinical outcomes in patients where simple retrieval was possible with patients where the manual carotid compression assisted technique was used for retrieval. Among the 156 CAS procedures, we encountered 12 (7.7%) where embolic protection device retrieval was hampered at the proximal stent terminus. Our manual carotid compression technique overcame this difficulty without eliciting neurologic events, artery dissection, or stent deformity. In patients undergoing closed-cell stent placement, embolic protection device retrieval difficulties may be encountered at the proximal stent terminus. Manual carotid compression assisted retrieval is an easy, readily available solution to overcome these difficulties. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  8. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by combining video source and channel coding.

  9. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
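
    A sketch of the "filling" step described above: non-edge pixels are obtained by solving Laplace's equation with the edge pixels held fixed. The patent uses a multi-grid solver; plain Jacobi sweeps (with wrap-around borders) on a hypothetical edge mask are shown here only to make the idea concrete.

        import numpy as np

        rng = np.random.default_rng(3)
        image = rng.integers(0, 256, (32, 32)).astype(float)
        edge_mask = np.zeros(image.shape, dtype=bool)
        edge_mask[::8, :] = True                 # hypothetical edge pixels
        edge_mask[:, ::8] = True

        filled = np.where(edge_mask, image, image.mean())
        for _ in range(500):                     # Jacobi iteration toward the solution
            avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                          np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
            filled = np.where(edge_mask, image, avg)   # keep edge values fixed

        difference = image - filled              # the residual that is coded separately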

  10. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve the high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on-board on a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.

  11. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  12. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  13. Real-time feedback can improve infant manikin cardiopulmonary resuscitation by up to 79%--a randomised controlled trial.

    PubMed

    Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael

    2013-08-01

    Setting: European and Advanced Paediatric Life Support training courses. Participants: sixty-nine certified CPR providers. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data were recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four internationally recommended parameters: chest compression depth, release force, chest compression rate, and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the targets of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback, which, on analysis, coincided with a statistical improvement in compression rate, depth, and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario, this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improve clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. A novel shape-based coding-decoding technique for an industrial visual inspection system.

    PubMed

    Mukherjee, Anirban; Chaudhuri, Subhasis; Dutta, Pranab K; Sen, Siddhartha; Patra, Amit

    2004-01-01

    This paper describes a unique single camera-based dimension storage method for image-based measurement. The system has been designed and implemented in one of the integrated steel plants of India. The purpose of the system is to encode the frontal cross-sectional area of an ingot. The encoded data will be stored in a database to facilitate the future manufacturing diagnostic process. The compression efficiency and reconstruction error of the lossy encoding technique have been reported and found to be quite encouraging.

  15. A Novel Method of Newborn Chest Compression: A Randomized Crossover Simulation Study.

    PubMed

    Smereka, Jacek; Szarpak, Lukasz; Ladny, Jerzy R; Rodriguez-Nunez, Antonio; Ruetzler, Kurt

    2018-01-01

    Objective: To compare a novel two-thumb chest compression technique with standard techniques during newborn resuscitation performed by novice physicians, in terms of median depth of chest compressions, degree of full chest recoil, and effective compression efficacy. Patients and Methods: A total of 74 novice physicians with less than one year of work experience participated in the study. They performed chest compressions using three techniques: (A) the new two-thumb technique (nTTT), in which two thumbs are directed at an angle of 90° to the chest while the fingers of both hands are closed in a fist; (B) the two-finger technique (TFT), in which the rescuer compresses the sternum with the tips of two fingers; and (C) the two-thumb encircling hands technique (TTHT), in which two thumbs are placed over the lower third of the sternum, with the fingers encircling the torso and supporting the back. Results: The median depth of chest compressions was 3.8 (IQR, 3.7-3.9) cm for nTTT, 2.1 (IQR, 1.7-2.5) cm for TFT, and 3.6 (IQR, 3.5-3.8) cm for TTHT. There was a significant difference between nTTT and TFT, and between TTHT and TFT (p < 0.001), for each time interval during resuscitation. The degree of full chest recoil was 93% (IQR, 91-97) for nTTT, 99% (IQR, 96-100) for TFT, and 90% (IQR, 74-91) for TTHT. There was a statistically significant difference in the degree of complete chest relaxation between nTTT and TFT (p < 0.001), between nTTT and TTHT (p = 0.016), and between TFT and TTHT (p < 0.001). Conclusion: The median chest compression depth for nTTT and TTHT is significantly higher than that for TFT. The degree of full chest recoil was highest for TFT, followed by nTTT and TTHT. The effective compression efficiency with nTTT was higher than with TTHT and TFT. Our novel newborn chest compression method provided adequate chest compression depth and degree of full chest recoil in this manikin study, as well as very good effective compression efficiency. Further clinical studies are necessary to confirm these initial results.

  16. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
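
    A minimal sketch of the delta / double delta idea above: coding differences of differences exploits the correlation between adjacent picture elements, leaving small residuals that are cheap to entropy-code. The scan line is made up for illustration.

        import numpy as np

        row = np.array([100, 102, 105, 109, 114, 120])     # one image scan line
        delta = np.diff(row, prepend=0)                    # first differences
        double_delta = np.diff(delta, prepend=0)           # differences of differences
        print(double_delta)                                # small values after the start

        # Lossless reconstruction is two cumulative sums:
        assert np.array_equal(np.cumsum(np.cumsum(double_delta)), row)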

  17. Study of radar pulse compression for high resolution satellite altimetry

    NASA Technical Reports Server (NTRS)

    Dooley, R. P.; Nathanson, F. E.; Brooks, L. W.

    1974-01-01

    Pulse compression techniques are studied which are applicable to a satellite altimeter having a topographic resolution of ±10 cm. A systematic design procedure is used to determine the system parameters. The performance of an optimum, maximum-likelihood processor is analysed, which provides the basis for modifying the standard split-gate tracker to achieve improved performance. Bandwidth considerations lead to the recommendation of a full-deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns. The implementation of the recommended technique is examined.

  18. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  19. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  20. Compression and contact area of anterior strut grafts in spinal instrumentation: a biomechanical study.

    PubMed

    Pizanis, Antonius; Holstein, Jörg H; Vossen, Felix; Burkhardt, Markus; Pohlemann, Tim

    2013-08-26

    Anterior bone grafts are used as struts to reconstruct the anterior column of the spine in kyphosis or following injury. An incomplete fusion can lead to later correction losses and compromise further healing. Despite the different stabilizing techniques that have evolved, from posterior or anterior fixating implants to combined anterior/posterior instrumentation, graft pseudarthrosis rates remain an important concern. Furthermore, the need for additional anterior implant fixation is still controversial. In this bench-top study, we focused on the graft-bone interface under various conditions, using two simulated spinal injury models and common surgical fixation techniques to investigate the effect of implant-mediated compression and contact on the anterior graft. Calf spines were stabilised with posterior internal fixators. The wooden blocks as substitutes for strut grafts were impacted using a "pressfit" technique and pressure-sensitive films placed at the interface between the vertebral bone and the graft to record the compression force and the contact area with various stabilization techniques. Compression was achieved either with posterior internal fixator alone or with an additional anterior implant. The importance of concomitant ligament damage was also considered using two simulated injury models: pure compression Magerl/AO fracture type A or rotation/translation fracture type C models. In type A injury models, 1 mm-oversized grafts for impaction grafting provided good compression and fair contact areas that were both markedly increased by the use of additional compressing anterior rods or by shortening the posterior fixator construct. Anterior instrumentation by itself had similar effects. For type C injuries, dramatic differences were observed between the techniques, as there was a net decrease in compression and an inadequate contact on the graft occurred in this model. Under these circumstances, both compression and the contact area on graft could only be maintained at high levels with the use of additional anterior rods. Under experimental conditions, we observed that ligamentous injury following type C fracture has a negative influence on the compression and contact area of anterior interbody bone grafts when only an internal fixator is used for stabilization. Because of the loss of tension banding effects in type C injuries, an additional anterior compressing implant can be beneficial to restore both compression to and contact on the strut graft.

  1. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
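
    A toy sketch of the reference-based principle above: store only where the target genome differs from a reference the decoder already holds. ERGC itself works on large chunks with fallback strategies, not single characters; this version only conveys the idea.

        reference = "ACGTACGTACGTACGT"
        target    = "ACGTACCTACGTACGA"

        diffs = [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]
        print(diffs)                                # [(6, 'C'), (15, 'A')]

        decoded = list(reference)                   # decompression: patch the reference
        for i, base in diffs:
            decoded[i] = base
        assert "".join(decoded) == target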

  2. Performance Study of Salt Cavern Air Storage Based Non-Supplementary Fired Compressed Air Energy Storage System

    NASA Astrophysics Data System (ADS)

    Chen, Xiaotao; Song, Jie; Liang, Lixiao; Si, Yang; Wang, Le; Xue, Xiaodai

    2017-10-01

    Large-scale energy storage systems (ESS) play an important role in the planning and operation of the smart grid and the energy internet. Compressed air energy storage (CAES) is one of the most promising large-scale energy storage techniques. However, the high cost of storing the compressed air and the low capacity remain to be solved. This paper proposes a novel non-supplementary fired compressed air energy storage system (NSF-CAES) based on salt cavern air storage to address the issues of air storage and CAES efficiency. Operating mechanisms of the proposed NSF-CAES are analysed based on thermodynamic principles. Key factors that affect the storage efficiency of the system are thoroughly explored. The energy storage efficiency of the proposed NSF-CAES system can be improved by reducing the maximum working pressure of the salt cavern and improving the inlet air pressure of the turbine. Simulation results show that the electric-to-electric conversion efficiency of the proposed NSF-CAES can reach 63.29% with a maximum salt cavern working pressure of 9.5 MPa and a turbine inlet air pressure of 9 MPa, which is higher than that of current commercial CAES plants.

  3. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation was proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.

  4. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have a high compression ratio for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. To exploit the correlations within the source signal and thereby increase the signal-to-noise ratio (SNR), a method based on differential pulse code modulation (DPCM) is presented.
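
    A minimal DPCM sketch matching the description above: transmit each sample's prediction error relative to the previous sample; for correlated medical-image rows the residuals have much lower entropy than the raw values. The pixel row is illustrative.

        def dpcm_encode(samples):
            pred, residuals = 0, []
            for s in samples:
                residuals.append(s - pred)     # prediction error to be entropy-coded
                pred = s
            return residuals

        def dpcm_decode(residuals):
            pred, samples = 0, []
            for e in residuals:
                pred += e                      # rebuild each sample from its predecessor
                samples.append(pred)
            return samples

        row = [120, 121, 121, 124, 130, 131]
        assert dpcm_decode(dpcm_encode(row)) == row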

  5. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reducing storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing data that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/∼ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
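
    A sketch of the bucketing idea above: reads sharing their lexicographically smallest k-mer land in the same bucket, so similar reads become neighbours in the file and compress better. The value k = 4 and the reads are assumptions; Mince's actual bucketing and encoding are considerably more elaborate.

        from collections import defaultdict

        def minimizer(read, k=4):
            return min(read[i:i + k] for i in range(len(read) - k + 1))

        reads = ["ACGTACGTGG", "TTACGTACGT", "GGGGCCCCAA", "CCACGTACGT"]
        buckets = defaultdict(list)
        for r in reads:
            buckets[minimizer(r)].append(r)

        for key, grouped in buckets.items():   # each bucket is then compressed as a unit
            print(key, grouped)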

  6. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  7. LFQC: a lossless compression algorithm for FASTQ files

    PubMed Central

    Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state of the art big data compression algorithms namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012), DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148

  8. Evaluation on Compression Properties of Different Shape and Perforated rHDPE in Concrete Structures

    NASA Astrophysics Data System (ADS)

    Yuhazri, M. Y.; Hafiz, K. M.; Myia, Y. Z. A.; Jia, C. P.; Sihombing, H.; Sapuan, S. M.; Badarulzaman, N. A.

    2017-10-01

    The purpose of this study was to develop a concrete structure incorporating waste HDPE plastic as the main reinforcement material and cement as the matrix via a standard casting technique. Eight different shapes of rHDPE reinforcing structure were used to investigate the compression properties of the produced concrete composites. Experimental results showed that the highest compressive strength was obtained for concrete with the addition of the X-perforated beam (18.22 MPa), followed by the X-beam (17.7 MPa), square perforated tube (17.54 MPa), round tube (17.42 MPa), and round perforated tube (16.69 MPa). In terms of compressive behavior, concrete containing rHDPE reinforcement improved on average by 6% over the control concrete. This shows that the addition of waste plastic as a reinforcement structure can provide better compressive strength depending on its shape and pattern.

  9. Technique for fast and efficient hierarchical clustering

    DOEpatents

    Stork, Christopher

    2013-10-08

    A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce a number of variables within each of the samples of the dataset. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between the variables of the samples. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
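
    A sketch of the pipeline described above, with PCA standing in for the unspecified compression step: reduce each sample to a few variables, then build the hierarchy by repeatedly merging nearest neighbours. Sizes are illustrative.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(4)
        data = rng.random((20, 100))                 # 20 samples, 100 variables

        centered = data - data.mean(axis=0)          # "compress" samples via PCA
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        reduced = centered @ vt[:5].T                # 20 samples x 5 components

        tree = linkage(reduced, method="single")     # single linkage = nearest-neighbour merges
        labels = fcluster(tree, t=3, criterion="maxclust")
        print(labels)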

  10. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, the a posteriori probability (APP) of each decoded bit is required. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determining the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.

  11. Flood inundation extent mapping based on block compressed tracing

    NASA Astrophysics Data System (ADS)

    Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang

    2015-07-01

    Flood inundation extent, depth, and duration are important factors in flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is inefficient because it requires excessive recursive computation and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which solves the problem of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conducted a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
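
    For contrast with the block compressed tracing above, a compact version of the baseline it replaces: seeded region growing over a DEM, implemented iteratively with a queue (the recursive variant is what exhausts memory on massive datasets). The DEM values and water level are made up.

        from collections import deque

        import numpy as np

        rng = np.random.default_rng(5)
        dem = rng.uniform(0.0, 10.0, (50, 50))
        water_level, seed = 5.0, (25, 25)
        dem[seed] = 0.0                              # ensure the seed cell is flooded

        flooded = np.zeros(dem.shape, dtype=bool)
        flooded[seed] = True
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]
                        and not flooded[nr, nc] and dem[nr, nc] <= water_level):
                    flooded[nr, nc] = True
                    queue.append((nr, nc))
        print("inundated cells:", int(flooded.sum()))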

  12. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  13. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit rates.
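
    A sketch of the encoder step described above: replace the usual anti-alias low-pass filter with a local random binary (+/-1) convolution kernel and then polyphase down-sample, so each kept pixel is a local random measurement. The 4x4 kernel and factor-2 down-sampling are assumptions.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(6)
        image = rng.random((64, 64))
        kernel = rng.choice([-1.0, 1.0], size=(4, 4)) / 4.0

        measurements = convolve2d(image, kernel, mode="same", boundary="symm")
        downsampled = measurements[::2, ::2]   # still image-shaped: feed to any codec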

  14. The effects of processing techniques on magnesium-based composite

    NASA Astrophysics Data System (ADS)

    Rodzi, Siti Nur Hazwani Mohamad; Zuhailawati, Hussain

    2016-12-01

    The aim of this study was to investigate the effect of processing techniques on the densification, hardness, and compressive strength of Mg alloy and Mg-based composite for biomaterial applications. The control sample (pure Mg) and an Mg-based composite (Mg-Zn/HAp) were fabricated through a mechanical alloying process using a high-energy planetary mill, whilst another Mg-Zn/HAp composite was fabricated through double-step processing (the Mg-Zn matrix alloy was fabricated in the planetary mill, and HAp was subsequently dispersed by roll milling). The as-milled powder was then cold-pressed into 10 mm diameter pellets under 400 MPa compaction pressure before being sintered at 300 °C for 1 hour under flowing argon. The densification of the sintered pellets was determined by the Archimedes principle. Mechanical properties of the sintered pellets were characterized by microhardness and compression tests. The results show that the density of the pellets was significantly increased by the addition of HAp, with the highest density observed when the sample was fabricated through double-step processing (1.8046 g/cm3). Slight increments in hardness and ultimate compressive strength were observed for the Mg-Zn/HAp composite fabricated through double-step processing (58.09 HV, 132.19 MPa) compared with the Mg-Zn/HAp produced through single-step processing (47.18 HV, 122.49 MPa).

  15. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high-resolution instruments for remote sensing missions will lead to higher data rates because of the increase in resolution and dynamic range. For example, the improvement in ground resolution multiplied the data rate by 8 from SPOT4 to SPOT5 [1] and by 28 to PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions of interest (ROI) and of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most CNES high-resolution images are cloudy [1], significant mass-memory and transmission gains could be achieved by simply detecting and suppressing (or heavily compressing) the areas covered by clouds. Since 2007, CNES has worked on a cloud detection module [3], a simplification for on-board implementation of an already existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria), and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].

  16. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by two-thirds and resulting in a minimized array; (3) build a lookup table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the lookup table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients, while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
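
    Steps (1) and (4) of the pipeline are easy to make concrete; a minimal sketch assuming 8x8 blocks (the paper's block size is not restated here, so it is illustrative):

      import numpy as np
      from scipy.fft import dctn

      def dc_delta_stream(img, b=8):
          # Step (1): block DCT; step (4): delta-code the DC components.
          h = img.shape[0] - img.shape[0] % b
          w = img.shape[1] - img.shape[1] % b
          dc = [dctn(img[i:i + b, j:j + b].astype(float), norm='ortho')[0, 0]
                for i in range(0, h, b) for j in range(0, w, b)]
          return np.diff(np.asarray(dc), prepend=0.0)  # input to the arithmetic coder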

  17. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
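
    The quantization-with-dithering idea can be sketched as follows, assuming the step size q has been chosen from the measured image noise and that the decompressor regenerates the dither from a stored seed (a minimal sketch, not the FITS library code):

      import numpy as np

      def quantize_with_dither(data, q, seed=0):
          # Subtractive dithering: add uniform noise in [-0.5, 0.5) before
          # rounding; subtracting the same noise on restore removes the bias.
          d = np.random.default_rng(seed).random(data.shape) - 0.5
          ints = np.round(data / q + d).astype(np.int64)  # Rice-compress these
          restored = (ints - d) * q                       # decompression side
          return ints, restored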

  18. Highly oriented carbon fiber–polymer composites via additive manufacturing

    DOE PAGES

    Tekinalp, Halil L.; Kunc, Vlastimil; Velez-Garcia, Gregorio M.; ...

    2014-10-16

    Additive manufacturing, diverging from traditional manufacturing techniques such as casting and machining, can handle complex shapes with great design flexibility and without the typical waste. Although this technique has mainly been used for rapid prototyping, interest is growing in using it to directly manufacture actual parts of complex shape. For 3D-printing additive manufacturing to see widespread application, the technique and the feedstock materials require improvements to meet the mechanical requirements of load-bearing components. Thus, we investigated short-fiber (0.2 mm to 0.4 mm) reinforced acrylonitrile-butadiene-styrene composites as a feedstock for 3D-printing in terms of their processibility, microstructure, and mechanical performance, and also provided a comparison with traditional compression molded composites. The tensile strength and modulus of 3D-printed samples increased ~115% and ~700%, respectively. 3D printing yielded samples with very high fiber orientation in the printing direction (up to 91.5%), whereas the compression molding process yielded samples with significantly less fiber orientation. Microstructure-mechanical property relationships revealed that although relatively high porosity is observed in the 3D-printed composites compared with those produced by the conventional compression molding technique, both exhibited comparable tensile strength and modulus. This phenomenon is explained by the changes in fiber orientation, dispersion, and void formation.

  19. Tissue-engineered articular cartilage exhibits tension-compression nonlinearity reminiscent of the native cartilage.

    PubMed

    Kelly, Terri-Ann N; Roach, Brendan L; Weidner, Zachary D; Mackenzie-Smith, Charles R; O'Connell, Grace D; Lima, Eric G; Stoker, Aaron M; Cook, James L; Ateshian, Gerard A; Hung, Clark T

    2013-07-26

    The tensile modulus of articular cartilage is much larger than its compressive modulus. This tension-compression nonlinearity enhances interstitial fluid pressurization and decreases the frictional coefficient. The current set of studies examines the tensile and compressive properties of cylindrical chondrocyte-seeded agarose constructs over different developmental stages through a novel method that combines osmotic loading, video microscopy, and uniaxial unconfined compression testing. This method was previously used to examine tension-compression nonlinearity in native cartilage. Engineered cartilage, cultured under free-swelling (FS) or dynamically loaded (DL) conditions, was tested in unconfined compression in hypertonic and hypotonic salt solutions. The apparent equilibrium modulus decreased with increasing salt concentration, indicating that increasing the bath solution osmolarity shielded the fixed charges within the tissue, shifting the measured moduli along the tension-compression curve and revealing the intrinsic properties of the tissue. With this method, we were able to measure the tensile (401 ± 83 kPa for FS and 678 ± 473 kPa for DL) and compressive (161 ± 33 kPa for FS and 348 ± 203 kPa for DL) moduli of the same engineered cartilage specimens. These moduli are comparable to values obtained from traditional methods, validating this technique for measuring the tensile and compressive properties of hydrogel-based constructs. This study shows that engineered cartilage exhibits tension-compression nonlinearity reminiscent of the native tissue, and that dynamic deformational loading can yield significantly higher tensile properties.

  20. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services in their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Generally, Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services from the high volume of client requests results in the same problem. In this paper, two XML-aware aggregation techniques that exploit compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.

  1. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighted factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighted factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
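
    For orientation, one RZA-NLMF iteration as commonly written in the adaptive-filtering literature; the normalization shown is one standard choice and may differ in detail from the paper's, with mu, rho, and eps standing for the step size, regularization parameter, and reweighted factor:

      import numpy as np

      def rza_nlmf_step(w, x, d, mu=0.5, rho=1e-4, eps=10.0):
          # Fourth-order error term with input-power normalization
          # (one common choice) ...
          e = d - np.dot(w, x)
          grad = mu * (e ** 3) * x / (np.dot(x, x) ** 2 + 1e-12)
          # ... plus a reweighted zero attractor that shrinks small taps,
          # promoting a sparse estimate.
          return w + grad - rho * np.sign(w) / (1.0 + eps * np.abs(w))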

  2. Parametric study on the compressive strength geopolymer paving block

    NASA Astrophysics Data System (ADS)

    Aman; Awaluddin, A.; Ahmad, A.; Olivia, M.

    2018-04-01

    This paper reports an investigation of the effects of sodium hydroxide concentration, liquid-to-solid (L/S) ratio, curing temperature, and curing time on the compressive strength of geopolymer paving blocks, using fly ash and fine aggregate as base materials and a combination of sodium hydroxide and sodium silicate as the alkaline activator; the Na2SiO3/NaOH ratio was 2 and the fly ash to aggregate ratio was 1:3. The experiments were conducted with sodium hydroxide concentrations of 10-16 M, L/S ratios of 0.1-0.7, curing temperatures of 30-100 °C, and curing times of 7-28 days. The main evaluation techniques in this experiment were compressive strength testing, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The results showed that the compressive strength of the geopolymer paving blocks increased with increasing concentration, L/S ratio, curing temperature, and curing time.

  3. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have broad applications in remote sensing and security.

  4. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.

  5. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, for which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that such difficulties do not exist for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven-cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  6. A Finite Element Study of Micropipette Aspiration of Single Cells: Effect of Compressibility

    PubMed Central

    Jafari Bidhendi, Amirhossein; Korhonen, Rami K.

    2012-01-01

    The micropipette aspiration (MA) technique has been widely used to measure the viscoelastic properties of different cell types. Cells experience nonlinear large deformations during the aspiration procedure. Neo-Hookean viscohyperelastic (NHVH) incompressible and compressible models were used to simulate the creep behavior of cells in MA, particularly accounting for the effects of compressibility, bulk relaxation, and hardening under large strain. In order to find optimal material parameters, the models were fitted to the experimental data available for mesenchymal stem cells. Finally, through a Neo-Hookean porohyperelastic (NHPH) material model for the cell, the influence of fluid flow on the aspiration length of the cell was studied. Based on the results, we suggest that compressibility and bulk relaxation/fluid flow play a significant role in the deformation behavior of single cells and should be taken into account in the analysis of cell mechanics. PMID:22400045

  7. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  8. Monitoring compaction and compressibility changes in offshore chalk reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, G.; Hardy, R.; Eltvik, P.

    1994-03-01

    Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. The installation of accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility -- measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable to any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence were expected.

  9. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.

  10. LagLoc - a new surgical technique for locking plate systems.

    PubMed

    Triana, Miguel; Gueorguiev, Boyko; Sommer, Christoph; Stoffel, Karl; Agarwal, Yash; Zderic, Ivan; Helfen, Tobias; Krieg, James C; Krause, Fabian; Knobe, Matthias; Richards, R Geoff; Lenz, Mark

    2018-06-19

    Treatment of oblique and spiral fractures remains challenging. The aim of this study was to introduce and investigate the new LagLoc technique for locked plating with generation of interfragmentary compression, combining the advantages of the lag-screw and locking-head-screw techniques. An oblique fracture was simulated in artificial diaphyseal bones, assigned to three groups for plating with a 7-hole locking compression plate. Group I was plated with three locking screws in holes 1, 4 and 7; the central screw crossed the fracture line. In group II the central hole was occupied by a lag screw perpendicular to the fracture line. Group III was instrumented using the LagLoc technique as follows: hole 4 was predrilled perpendicular to the plate, followed by overdrilling of the near cortex and insertion of a locking screw whose head was covered by a holding sleeve to temporarily prevent locking in the plate hole and generate interfragmentary compression; subsequently, the screw head was released and locked in the plate hole. Holes 1 and 7 were occupied by locking screws. Interfragmentary compression in the fracture gap was measured using pressure sensors. All screws in the three groups were tightened with 4 Nm torque. Interfragmentary compression in group I (167 ± 25 N) was significantly lower than in groups II (431 ± 21 N) and III (379 ± 59 N), p ≤ 0.005. The difference in compression between groups II and III was not significant (p = 0.999). The new LagLoc technique offers an alternative tool for generating interfragmentary compression with locking plates by combining the biomechanical advantages of lag screw and locking screw fixations.

  11. Sidelobe apodization in optical pulse compression reflectometry for fiber optic distributed acoustic sensing.

    PubMed

    Mompó, Juan José; Martín-López, Sonia; González-Herráez, Miguel; Loayssa, Alayn

    2018-04-01

    We demonstrate a technique to reduce the sidelobes in optical pulse compression reflectometry for distributed acoustic sensing. The technique is based on using a Gaussian probe pulse with linear frequency modulation. This is shown to improve the sidelobe suppression by 13 dB compared to the use of square pulses without any significant penalty in terms of spatial resolution. In addition, a 2.25 dB enhancement in signal-to-noise ratio is calculated compared to the use of receiver-side windowing. The method is tested by measuring 700 Hz vibrations with a 140 nε amplitude at the end of a 50 km fiber sensing link with 34 cm spatial resolution, giving a record 147,058 spatially resolved points.

  12. Digital coding of Shuttle TV

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B.

    1976-01-01

    The Space Shuttle will use a field-sequential color television system for the first few missions, but present plans are to switch to an NTSC color TV system for future missions. The field-sequential color TV system uses a modified black-and-white camera, producing a TV signal with a digital bandwidth of about 60 Mbps. This article discusses the characteristics of the Shuttle TV systems and proposes a bandwidth-compression technique for the field-sequential color TV system that could operate at 13 Mbps and produce a high-fidelity signal. The proposed bandwidth-compression technique is based on a two-dimensional DPCM system that utilizes the temporal, spectral, and spatial correlation inherent in field-sequential color TV imagery. The proposed system requires about 60 watts and fewer than 200 integrated circuits.
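
    The DPCM idea behind such a system can be conveyed with a spatial-only, open-loop sketch (a real codec, including the one proposed, predicts from reconstructed pixels and also exploits the temporal and spectral dimensions):

      import numpy as np

      def dpcm_residuals(frame):
          # Predict each pixel from its upper and left neighbours and keep
          # only the (highly compressible) prediction error.
          f = frame.astype(float)
          pred = np.zeros_like(f)
          pred[1:, :] += 0.5 * f[:-1, :]
          pred[:, 1:] += 0.5 * f[:, :-1]
          return np.round(f - pred)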

  13. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.
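
    The cross-validated choice of the regularization constant can be illustrated generically; the solver and parameter names below are placeholders, not the paper's toolchain:

      import numpy as np
      from sklearn.linear_model import Lasso

      def select_lambda_cv(A, b, lams, k=5, seed=0):
          # K-fold cross-validation: keep the lambda with the lowest
          # held-out residual, guarding against overfitting the basis.
          idx = np.random.default_rng(seed).permutation(len(b))
          folds = np.array_split(idx, k)

          def cv_err(lam):
              errs = []
              for f in folds:
                  tr = np.setdiff1d(idx, f)
                  m = Lasso(alpha=lam, max_iter=50000).fit(A[tr], b[tr])
                  errs.append(np.mean((m.predict(A[f]) - b[f]) ** 2))
              return np.mean(errs)

          return min(lams, key=cv_err)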

  14. Study of on-board compression of earth resources data

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1975-01-01

    The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.

  15. A Proposal for Kelly Criterion-Based Lossy Network Compression

    DTIC Science & Technology

    2016-03-01

  16. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis represents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules. The implementation of the method in the Q-Chem commercial software package is described. Moreover, the method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations.
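
    The ℓ1 recovery at the heart of these applications can be illustrated with a few lines of iterative soft thresholding (ISTA) for a signal sparse in the DCT basis; this is a generic sketch, not the thesis's implementation:

      import numpy as np
      from scipy.fft import idct

      def ista_recover(y, mask, n, lam=0.1, iters=500):
          # y holds the samples kept at time indices `mask`; model y = A x,
          # where A applies an inverse DCT and keeps only those rows.
          A = idct(np.eye(n), axis=0, norm='ortho')[mask]
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz bound for the step
          x = np.zeros(n)
          for _ in range(iters):
              x = x + A.T @ (y - A @ x) / L  # gradient step on the data fit
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrink
          return x                           # sparse recovered spectrum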

  17. Complex-Difference Constrained Compressed Sensing Reconstruction for Accelerated PRF Thermometry with Application to MRI Induced RF Heating

    PubMed Central

    Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.

    2014-01-01

    Purpose: Introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for MRI induced radiofrequency (RF) heating evaluation. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex-vivo and in-vivo studies by comparing performance with previously proposed techniques. Results: The proposed complex-difference constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex-difference based compressed sensing with utilization of a fully-sampled baseline image improves the reconstruction accuracy for accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099

  18. Photogrammetric point cloud compression for tactical networks

    NASA Astrophysics Data System (ADS)

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.

    2017-05-01

    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness for Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  19. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
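
    The contrast between the standard per-object layout and vertical layout can be shown schematically; Python lists stand in here for the compiler's heap encoding, purely for illustration:

      # Horizontal layout: one record per object, fields interleaved.
      objects = [{"id": 1, "flag": True}, {"id": 2, "flag": False}]

      # Vertical layout: one array per field, indexed by object number.
      # Same-typed values sit together, so narrow fields pack tightly and
      # per-field (e.g. table-based) compression applies naturally.
      ids = [1, 2]
      flags = [True, False]

      def get_flag(obj_index):
          # A field access becomes an array index into the field's column.
          return flags[obj_index]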

  20. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range, and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar readily achieves high single-pulse energy through the use of a long pulse without decreasing the range resolution, and its coherent detection mechanism eliminates the influence of stray light, which helps to improve the detection sensitivity and detection range.

  1. Compression-RSA technique: A more efficient encryption-decryption procedure

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah

    2014-06-01

    The efficiency of encryption-decryption procedures has become a major issue in asymmetric cryptography. The Compression-RSA technique was developed to overcome this efficiency problem by compressing k plaintexts, where k ∈ Z+ and k > 2, into only 2 plaintexts. That is, no matter how many plaintexts there are, they are compressed into only 2. The encryption-decryption procedures are expected to be more efficient since they receive only 2 inputs to process instead of k. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts grows. As a consequence, this will probably affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its encryption-decryption procedures involve exponential operations. In this paper, we evaluated the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.

  2. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exist a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes and can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip.

  3. An Evidence-Based Approach for Choosing Post-exercise Recovery Techniques to Reduce Markers of Muscle Damage, Soreness, Fatigue, and Inflammation: A Systematic Review With Meta-Analysis

    PubMed Central

    Dupuy, Olivier; Douzi, Wafa; Theurot, Dimitri; Bosquet, Laurent; Dugué, Benoit

    2018-01-01

    Introduction: The aim of the present work was to perform a meta-analysis evaluating the impact of recovery techniques on delayed onset muscle soreness (DOMS), perceived fatigue, muscle damage, and inflammatory markers after physical exercise. Method: Three databases including PubMed, Embase, and Web-of-Science were searched using the following terms: (“recovery” or “active recovery” or “cooling” or “massage” or “compression garment” or “electrostimulation” or “stretching” or “immersion” or “cryotherapy”) and (“DOMS” or “perceived fatigue” or “CK” or “CRP” or “IL-6”) and (“after exercise” or “post-exercise”) for randomized controlled trials, crossover trials, and repeated-measure studies. Overall, 99 studies were included. Results: Active recovery, massage, compression garments, immersion, contrast water therapy, and cryotherapy induced a small to large decrease (−2.26 < g < −0.40) in the magnitude of DOMS, while there was no change for the other methods. Massage was found to be the most powerful technique for recovering from DOMS and fatigue. In terms of muscle damage and inflammatory markers, we observed an overall moderate decrease in creatine kinase [SMD (95% CI) = −0.37 (−0.58 to −0.16), I2 = 40.15%] and overall small decreases in interleukin-6 [SMD (95% CI) = −0.36 (−0.60 to −0.12), I2 = 0%] and C-reactive protein [SMD (95% CI) = −0.38 (−0.59 to −0.14), I2 = 39%]. The most powerful techniques for reducing inflammation were massage and cold exposure. Conclusion: Massage seems to be the most effective method for reducing DOMS and perceived fatigue. Perceived fatigue can be effectively managed using compression techniques, such as compression garments, massage, or water immersion. PMID:29755363

  4. Particle-mesh techniques

    NASA Technical Reports Server (NTRS)

    Macneice, Peter

    1995-01-01

    This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
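
    As a small taste of the approach, the standard first step of a particle-mesh cycle is depositing particle charge onto the grid; a minimal one-dimensional cloud-in-cell (CIC) sketch:

      import numpy as np

      def deposit_cic(pos, charge, n_cells, box):
          # Share each particle's charge between its two nearest grid
          # points, weighted by proximity (periodic 1-D grid).
          grid = np.zeros(n_cells)
          dx = box / n_cells
          for p, q in zip(pos, charge):
              s = p / dx - 0.5
              i = int(np.floor(s))
              f = s - i
              grid[i % n_cells] += q * (1.0 - f)
              grid[(i + 1) % n_cells] += q * f
          return grid / dx   # charge density on the mesh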

  5. Tensor-product preconditioners for a space-time discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-10-01

    A space-time discontinuous Galerkin spectral element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is presented. A diagonalized alternating direction implicit preconditioner is extended to a space-time formulation using entropy variables. The effectiveness of this technique is demonstrated for the direct numerical simulation of turbulent flow in a channel.

  6. Two-Thumb Encircling Technique Over the Head of Patients in the Setting of Lone Rescuer Infant CPR Occurred During Ambulance Transfer: A Crossover Simulation Study.

    PubMed

    Jo, Choong Hyun; Cho, Gyu Chong; Lee, Chang Hee

    2017-07-01

    The purpose of this study was to determine if the over-the-head 2-thumb encircling technique (OTTT) provides better overall quality of cardiopulmonary resuscitation compared with conventional 2-finger technique (TFT) for a lone rescuer in the setting of infant cardiac arrest in ambulance. Fifty medical emergency service students were voluntarily recruited to perform lone rescuer infant cardiopulmonary resuscitation for 2 minutes on a manikin simulating a 3-month-old baby in an ambulance. Participants who performed OTTT sat over the head of manikins to compress the chest using a 2-thumb encircling technique and provide bag-valve mask ventilations, whereas those who performed TFT sat at the side of the manikins to compress using 2-fingers and provide pocket-mask ventilations. Mean hands-off time was not significantly different between OTTT and TFT (7.6 ± 1.1 seconds vs 7.9 ± 1.3 seconds, P = 0.885). Over-the-head 2-thumb encircling technique resulted in greater depth of compression (42.6 ± 1.4 mm vs 41.0 ± 1.4 mm, P < 0.001) and faster rate of compressions (114.4 ± 8.0 per minute vs 112.2 ± 8.2 per minute, P = 0.019) than TFT. Over-the-head 2-thumb encircling technique resulted in a smaller fatigue score than TFT (1.7 ± 1.5 vs 2.5 ± 1.6, P < 0.001). In addition, subjects reported that compression, ventilation, and changing compression to ventilation were easier in OTTT than in TFT. The use of OTTT may be a suitable alternative to TFT in the setting of cardiac arrest of infants during ambulance transfer.

  7. A Method For The Verification Of Wire Crimp Compression Using Ultrasonic Inspection

    NASA Technical Reports Server (NTRS)

    Cramer, K. E.; Perey, Daniel F.; Yost, William T.

    2010-01-01

    The development of a new ultrasonic measurement technique to quantitatively assess wire crimp terminations is discussed. The amplitude change of a compressional ultrasonic wave propagating at right angles to the wire axis and through the junction of a crimp termination is shown to correlate with the results of a destructive pull test, which is a standard for assessing crimp wire junction quality. To demonstrate the technique, the case of incomplete compression of crimped connections is ultrasonically tested, and the results are correlated with pull tests. Results show that the nondestructive ultrasonic measurement technique consistently predicts good crimps when the ultrasonic transmission is above a certain threshold amplitude level. A quantitative measure of the quality of the crimped connection based on the transmitted ultrasonic energy is shown to respond accurately to crimp quality. A wave propagation model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model agrees with the ultrasonic measurements to within 6%. A prototype instrument for applying this technique while wire crimps are installed is also presented. The instrument is based on a two-jaw type crimp tool suitable for butt-splice type connections. A comparison of the results of two different instruments is presented and shows reproducibility between instruments within a 95% confidence bound.

  8. Electrical Conductivity, Thermal Stability, and Lattice Defect Evolution During Cyclic Channel Die Compression of OFHC Copper

    NASA Astrophysics Data System (ADS)

    Satheesh Kumar, S. S.; Raghu, T.

    2015-02-01

    Oxygen-free high-conductivity (OFHC) copper samples were severely plastically deformed by the cyclic channel die compression (CCDC) technique at room temperature up to an effective plastic strain of 7.2. The effects of straining on the variation in electrical conductivity, the evolution of deformation stored energy, and the recrystallization onset temperature are studied. Deformation-induced lattice defects are quantified using three different methodologies: x-ray diffraction profile analysis employing the Williamson-Hall technique, a stored-energy based method, and electrical resistivity-based techniques. Compared with other severe plastic deformation techniques, electrical conductivity degrades only marginally, from 100.6% to 96.6% IACS, after three cycles of CCDC. A decrease in recrystallization onset and peak temperatures is noticed, whereas the stored energy increases and saturates at around 0.95-1.1 J/g after three cycles of CCDC. Although a drop in recrystallization activation energy is observed with increasing strain, superior thermal stability is revealed, which is attributed to the CCDC process mechanics. The low activation energy observed in CCDC-processed OFHC copper is attributed to the synergistic influence of grain boundary characteristics and lattice defect distribution. The estimated defect concentrations indicate a continuous increase in dislocation density and vacancy concentration with strain. The deformation-induced vacancy concentration is found to be significantly higher than the equilibrium vacancy concentration, which is ascribed to the hydrostatic stress states experienced during CCDC.

  9. Splenorenal shunt via magnetic compression technique: a feasibility study in canine and cadaver.

    PubMed

    Xue, Fei; Li, Jianpeng; Lu, Jianwen; Zhu, Haoyang; Liu, Wenyan; Zhang, Hongke; Yang, Huan; Guo, Hongchang; Lv, Yi

    2016-12-01

    The concept of the magnetic compression technique (MCT) has been accepted by surgeons as a solution to a variety of surgical problems. In this study, we explored the feasibility of a splenorenal shunt using MCT in canine and cadaver models. The diameters of the splenic vein (SV) and the left renal vein (LRV), and the vertical interval between them, were measured in computed tomography (CT) images obtained from 30 patients with portal hypertension and in 20 adult cadavers. The magnetic devices used for the splenorenal shunt were then manufactured based on the anatomic parameters measured above. Anatomical observation showed that there were no special structural tissues or important organs between the SV and LRV. The magnetic compression splenorenal shunt procedure was then performed in three dogs and five cadavers. Seven days later, the necrotic tissue between the two magnets was shed and the magnets were removed with the anchor wire. The feasibility of a splenorenal shunt via MCT was successfully demonstrated in both canine and cadaver, providing theoretical support for future clinical application.

  10. Synthesis and toughness properties of resins and composites

    NASA Technical Reports Server (NTRS)

    Johnston, N. J.

    1984-01-01

    Tensile and shear moduli of four ACEE (Aircraft Energy Efficiency Program) resins are presented along with ACEE composite material modulus predictions based on micromechanics. Compressive strength and fracture toughness of the resins and composites were discussed. In addition, several resin synthesis techniques are reviewed.

  11. Automated threat response recommendation in environments of high data uncertainty using the Countermeasure Association Technique (CMAT)

    NASA Astrophysics Data System (ADS)

    Chapman, George B.; Johnson, Glenn; Burdick, Robert

    1991-09-01

    The Countermeasure Association Technique (CMAT), developed for the Air Force, is discussed; it is used to automatically recommend countermeasure and maneuver responses to a pilot while under missile attack. The overall system is discussed, as well as several key technical components. These components include the use of fuzzy sets to specify data uncertainty, the use of mimic nets to train the CMAT algorithm to make the same resource optimization tradeoffs as made in a library of training scenarios, and the use of several data compression techniques to store the countermeasure effectiveness database.

  12. Some Practical Universal Noiseless Coding Techniques

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1994-01-01

    Report discusses noiseless data-compression-coding algorithms, performance characteristics, and practical considerations in the implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as a tutorial document on data-compression-coding concepts. The coding techniques and concepts in question are "universal" in the sense that, in principle, they are applicable to streams of data from a variety of sources. However, the discussion is oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to Earth.
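
    The flavor of these techniques is easy to convey with the basic Rice code, which writes each non-negative integer as a unary quotient followed by k remainder bits (signed prediction residuals are first mapped to non-negative integers, e.g. by sign-magnitude interleaving); a minimal sketch:

      def rice_encode(values, k):
          # Rice code: unary-coded quotient, then k low-order remainder bits.
          out = []
          for v in values:
              q, r = v >> k, v & ((1 << k) - 1)
              out.append('1' * q + '0' + format(r, '0{}b'.format(k)))
          return ''.join(out)

      # rice_encode([3, 0, 9], 2) == '011' + '000' + '11001'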

  13. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning

    PubMed Central

    Zhou, Jun; Wang, Chao

    2017-01-01

    Intelligent sensing is drastically changing our everyday life, including healthcare, through biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates a tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce the wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of the reconstructed signal. A near-threshold design technique is adopted to further reduce the power consumption at the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals to address the variation of signal characteristics from person to person or from channel to channel, meeting the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics. PMID:28783079
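
    A window-based turning-angle detector can be sketched as follows; the window length and fixed threshold are illustrative, whereas the engine described above tunes the threshold adaptively from the reconstruction error:

      import numpy as np

      def turning_points(sig, thresh, w=4):
          # Keep sample i when the slope direction changes by more than
          # `thresh` radians between the windows before and after it.
          keep = [0]
          for i in range(w, len(sig) - w, w):
              a1 = np.arctan2(sig[i] - sig[i - w], w)
              a2 = np.arctan2(sig[i + w] - sig[i], w)
              if abs(a2 - a1) > thresh:
                  keep.append(i)
          keep.append(len(sig) - 1)
          return keep   # transmit only the retained (index, value) pairs

    Reconstruction then interpolates between the retained feature points, which is where the CR/PRD trade-off above comes from.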

  15. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and its impact on the design of the instrument.
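
    The KLT stage amounts to an eigen-decomposition of the inter-band covariance; a compact sketch, assuming perfectly co-registered bands (exactly what the onboard registration is meant to guarantee):

      import numpy as np

      def klt_decorrelate(bands):
          # bands: (n_bands, H, W) co-registered multispectral cube.
          X = bands.reshape(len(bands), -1).astype(float)
          X -= X.mean(axis=1, keepdims=True)
          cov = X @ X.T / X.shape[1]
          _, vecs = np.linalg.eigh(cov)              # principal spectral axes
          return (vecs.T @ X).reshape(bands.shape)   # components for DWT+BPE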

  16. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, and data dependence between, the sample values within a block of compressed data introduce an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument, which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  17. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  18. Low Complexity Compression and Speed Enhancement for Optical Scanning Holography

    PubMed Central

    Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.

    2016-01-01

    In this paper we report a low-complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into 2 major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can vary significantly with the object scene, as well as with OSH systems having different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms that are acquired with 2 different OSH systems, demonstrating a compression ratio of over two orders of magnitude while preserving favorable fidelity of the reconstructed images. PMID:27708410
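
    A sketch of one-bit delta modulation with a dynamic step size; the widening/shrinking rule below is a common textbook adaptation heuristic, not necessarily the authors' exact adjustment scheme:

        def adaptive_dm_encode(row, step0=1.0, grow=1.5, shrink=0.66):
            """Encode a row of hologram pixels as one bit per sample."""
            bits, est, step, prev = [], float(row[0]), step0, None
            for x in row:
                bit = 1 if x >= est else 0
                est += step if bit else -step
                # widen on slope overload (repeated bits), narrow on granular noise
                step = step * grow if bit == prev else max(step * shrink, 1e-3)
                bits.append(bit)
                prev = bit
            return bits

        def adaptive_dm_decode(bits, start, step0=1.0, grow=1.5, shrink=0.66):
            """Replay the same updates; the bits alone drive the step adaptation."""
            est, step, prev, out = float(start), step0, None, []
            for bit in bits:
                est += step if bit else -step
                step = step * grow if bit == prev else max(step * shrink, 1e-3)
                out.append(est)
                prev = bit
            return out

    Because the step adaptation depends only on the bit stream, encoder and decoder stay in lockstep without transmitting the step size, which is the point of the dynamic adjustment.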

  19. Use of a wave reverberation technique to infer the density compression of shocked liquid deuterium to 75 GPa.

    PubMed

    Knudson, M D; Hanson, D L; Bailey, J E; Hall, C A; Asay, J R

    2003-01-24

    A novel approach was developed to probe density compression of liquid deuterium (L-D2) along the principal Hugoniot. Relative transit times of shock waves reverberating within the sample are shown to be sensitive to the compression due to the first shock. This technique has proven to be more sensitive than the conventional method of inferring density from the shock and mass velocity, at least in this high-pressure regime. Results in the range of 22-75 GPa indicate an approximately fourfold density compression, and provide data to differentiate between proposed theories for hydrogen and its isotopes.

  20. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
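
    The acquisition-and-recovery pipeline in this record is easy to imitate end to end; the sketch below pairs a random Bernoulli sensing matrix with orthogonal matching pursuit, which are standard CS choices rather than the specific encoder and recovery algorithm evaluated in the paper:

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 8                  # 4x compression of a k-sparse signal
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Bernoulli sensing matrix
        y = A @ x                             # sub-Nyquist measurements
        print(np.linalg.norm(x - omp(A, y, k)))  # near zero for sparse enough x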

  1. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced in both 2D and 3D real-life, time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulation of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, an improved stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high-performance computing. Therefore, numerical tools using distributed-memory supercomputers with message passing interface (MPI) libraries were developed.

  2. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers.

    PubMed

    López, Yuri Álvarez; Lorenzo, José Ángel Martínez

    2017-01-15

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  3. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    PubMed Central

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel

    2017-01-01

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841

  4. The heat-compression technique for the conversion of platelet-rich fibrin preparation to a barrier membrane with a reduced rate of biodegradation.

    PubMed

    Kawase, Tomoyuki; Kamiya, Mana; Kobayashi, Mito; Tanaka, Takaaki; Okuda, Kazuhiro; Wolff, Larry F; Yoshie, Hiromasa

    2015-05-01

    Platelet-rich fibrin (PRF) was developed as an advanced form of platelet-rich plasma to eliminate xenofactors, such as bovine thrombin, and it is mainly used as a source of growth factors for tissue regeneration. Furthermore, although a minor application, PRF in a compressed membrane-like form has also been used as a substitute for commercially available barrier membranes in guided-tissue regeneration (GTR) treatment. However, the PRF membrane is resorbed within 2 weeks or less at implantation sites; therefore, it can barely maintain sufficient space for bone regeneration. In this study, we developed and optimized a heat-compression technique and tested the feasibility of the resulting PRF membrane. Freshly prepared human PRF was first compressed with dry gauze and subsequently with a hot iron. Biodegradability was microscopically examined in vitro by treatment with plasmin at 37°C, or in vivo by subcutaneous implantation in nude mice. Compared with the control gauze-compressed PRF, the heat-compressed PRF appeared plasmin-resistant and remained stable for longer than 10 days in vitro. Additionally, in animal implantation studies, the heat-compressed PRF was observed for at least 3 weeks postimplantation in vivo, whereas the control PRF was completely resorbed within 2 weeks. Therefore, these findings suggest that the heat-compression technique reduces the rate of biodegradation of the PRF membrane without sacrificing its biocompatibility, and that the heat-compressed PRF membrane could easily be prepared chair-side and applied as a barrier membrane in GTR treatment. © 2014 Wiley Periodicals, Inc.

  5. Coulomb-Driven Relativistic Electron Beam Compression

    NASA Astrophysics Data System (ADS)

    Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhao, Lingrong; Zhu, Pengfei; Xiang, Dao; Zhang, Jie

    2018-01-01

    Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, which is in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions.

  6. Coulomb-Driven Relativistic Electron Beam Compression.

    PubMed

    Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhao, Lingrong; Zhu, Pengfei; Xiang, Dao; Zhang, Jie

    2018-01-26

    Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, which is in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions.

  7. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming only at chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed by using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covers the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.

  8. "Can you see me now?" An objective metric for predicting intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Francis M.; Hemami, Sheila S.

    2007-02-01

    For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
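
    In its simplest form, the pooling step the authors describe reduces to a region-weighted mean-squared error; a sketch assuming binary face and hand masks are already available from the skin-color segmentation (the weights are placeholders, not the optimized values in the paper):

        import numpy as np

        def region_weighted_distortion(ref, dist, face_mask, hand_mask,
                                       w_face=0.6, w_hand=0.3, w_rest=0.1):
            """Weighted MSE: errors in the face and hand regions count more."""
            se = (ref.astype(float) - dist.astype(float)) ** 2
            rest = ~(face_mask | hand_mask)
            parts = [(face_mask, w_face), (hand_mask, w_hand), (rest, w_rest)]
            return sum(w * se[m].mean() for m, w in parts if m.any())

    Averaging this score over all frames of a sequence gives a single objective intelligibility number to correlate against subjective responses.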

  9. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  10. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
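
    In this setting a 'vector' is the stack of co-located pixels across the channels; a minimal sketch of encoding and decoding against a given codebook (the codebook itself would be trained beforehand, e.g. with an LBG-style algorithm):

        import numpy as np

        def vq_encode(cube, codebook):
            """cube: (channels, rows, cols); codebook: (K, channels).
            Returns an index map (rows, cols) -- the compressed representation."""
            c, r, w = cube.shape
            vectors = cube.reshape(c, -1).T              # one vector per pixel location
            d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1).reshape(r, w)

        def vq_decode(index_map, codebook):
            return codebook[index_map].transpose(2, 0, 1)  # back to (channels, rows, cols)

    With a 256-word codebook, each 7-channel pixel stack (7 bytes at 8 bits/channel) collapses to one index byte before the lossless difference-mapped Huffman stage.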

  11. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel with the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data, called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
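
    A greedy toy version of the core referential idea, encoding the target as (reference position, length) matches plus literals for mismatches; FRESCO itself indexes the reference with k-mers for speed, which this sketch omits:

        def ref_compress(target, reference, min_match=4):
            """Encode target against reference as match and literal entries."""
            out, i = [], 0
            while i < len(target):
                best_pos, best_len, length = -1, 0, min_match
                while i + length <= len(target):
                    pos = reference.find(target[i:i + length])
                    if pos < 0:
                        break
                    best_pos, best_len = pos, length     # extend the match greedily
                    length += 1
                if best_len >= min_match:
                    out.append(("M", best_pos, best_len))
                    i += best_len
                else:
                    out.append(("L", target[i]))
                    i += 1
            return out

    Decompression simply replays the entries, copying reference[pos:pos+length] for each match; second-order compression as proposed above would run a compressor over these entry streams themselves.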

  12. Data Compression Using the Dictionary Approach Algorithm

    DTIC Science & Technology

    1990-12-01

    The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel, and this Naval Postgraduate School thesis (AD-A242 539, Monterey, California, December 1990) studies a slightly modified, dictionary-based variant of it. Cited references include Witten, Neal, and Cleary, "Arithmetic Coding for Data Compression," Communications of the ACM, June 1987, as well as the original papers of Ziv and Lempel.
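
    For reference, a compact LZ77-style encoder over a sliding window; a sketch of the scheme named in the fragment above, not the thesis's exact variant:

        def lz77_encode(data, window=4096, min_match=3):
            """Emit (offset, length, next_byte) triples over a sliding window."""
            out, i = [], 0
            while i < len(data):
                best_off, best_len = 0, 0
                for j in range(max(0, i - window), i):
                    length = 0
                    while (i + length < len(data) - 1
                           and data[j + length] == data[i + length]):
                        length += 1
                    if length > best_len:
                        best_off, best_len = i - j, length
                if best_len >= min_match:
                    out.append((best_off, best_len, data[i + best_len]))
                    i += best_len + 1
                else:
                    out.append((0, 0, data[i]))
                    i += 1
            return out

    The matching decoder maintains the same window and, for each triple, copies length symbols from offset positions back and then appends next_byte.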

  13. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.

  14. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    PubMed

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

    The current development of cloud computing is completely changing the paradigm of knowledge extraction from huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level, scientific, cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM), the so-called weighted fast compression distance, is created for low computational burden; it provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases exceeding the accuracy of majority-class classification. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge of patient diagnosis.
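
    The weighted fast compression distance itself is specific to the paper, but the family it belongs to is easy to illustrate; the classic normalized compression distance (NCD) with zlib is a standard CSM baseline:

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance: ~0 for similar inputs, ~1 for unrelated."""
            cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = bytes(range(0, 250, 5)) * 20                 # a repetitive EGM-like byte string
        b = bytes(range(0, 250, 5)) * 19 + b"\x07" * 50  # a close variant
        print(ncd(a, b))                                 # small: the two share structure

    A k-nearest-neighbour rule over such pairwise distances is enough to reproduce the 'simple machine learning' classification setting described above.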

  15. High-performance software-only H.261 video compression on PC

    NASA Astrophysics Data System (ADS)

    Kasperovich, Leonid

    1996-03-01

    This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution that can operate across a wide range of network bandwidths, frame rates, and input video resolutions. As network bandwidths increase, video of higher frame rate and resolution can be transmitted, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the presented codec is capable of compressing CIF video at 21-23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it does not require any specific hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.

  16. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.
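
    A scalar DPCM loop in miniature; the predictor here is a previous-sample predictor with a uniform quantizer, whereas the broadcast CODEC uses a more elaborate predictor and quantization, so treat this purely as an illustration of the closed-loop structure:

        def dpcm_encode(samples, q_step=4):
            """Quantize prediction errors; keep encoder and decoder in lockstep."""
            pred, codes = 0, []
            for s in samples:
                code = round((int(s) - pred) / q_step)   # quantized prediction error
                codes.append(code)
                pred += code * q_step                    # decoder-reproducible value
            return codes

        def dpcm_decode(codes, q_step=4):
            pred, out = 0, []
            for code in codes:
                pred += code * q_step
                out.append(pred)
            return out

    Entropy coding of the quantized errors is what brings the average rate down to figures like the 1.8 bits/pixel reported here.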

  17. Parallel phase-shifting self-interference digital holography with faithful reconstruction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Man, Tianlong; Wu, Fan; Kim, Myung K.; Wang, Dayong

    2016-11-01

    We present a new self-interference digital holographic approach that allows single-shot capture of the three-dimensional intensity distribution of spatially incoherent objects. Fresnel incoherent correlation holographic microscopy is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. A compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled, demultiplexed holograms. The scheme is verified with simulations. The validity of the proposed method is demonstrated indirectly by simulating the use of a specific parallel phase-shifting recording device.

  18. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost-competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.

  19. Dynamic Deformation Behavior of Soft Material Using Shpb Technique and Pulse Shaper

    NASA Astrophysics Data System (ADS)

    Lee, Ouk Sub; Cho, Kyu Sang; Kim, Sung Hyun; Han, Yong Hwan

    This paper presents a modified Split Hopkinson Pressure Bar (SHPB) technique for obtaining compressive stress-strain data for NBR rubber materials. An experimental technique based on a modification of the conventional SHPB has been developed for measuring the compressive stress-strain responses of materials with low mechanical impedance and low compressive strength, such as rubbers and polymeric materials. This paper uses an aluminum pressure bar to achieve a closer impedance match between the pressure bar and the specimen materials. In addition, a pulse shaper is utilized to lengthen the rise time of the incident pulse to ensure dynamic stress equilibrium and homogeneous deformation of the NBR rubber materials. It is found that the modified technique can determine the dynamic deformation behavior of rubbers more accurately.

  20. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, the side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  1. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique, which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, this 2D scanning is mechanical, and the quality of the recorded hologram may be degraded by the limited accuracy of mechanical scanning and the unavoidable vibration of the stepper motor's start-stop motion. In this paper, we propose a new framework that replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH can reconstruct the hologram of an object through the use of compressive sensing theory, and then restore the image of the object itself. Numerical simulation results confirm that this new type of OSH yields reconstructed images with favorable visual quality even at a low sampling rate.

  2. Dogmas and controversies in compression therapy: report of an International Compression Club (ICC) meeting, Brussels, May 2011.

    PubMed

    Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz

    2013-10-01

    The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meets regularly and from these meetings has produced a series of eight consensus publications on topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist around compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts on each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic, thereby widening the discussion to the entire membership of the ICC rather than simply those attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas', with topics 1 through 7 dedicated to the materials and application techniques used to apply compression and the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.

  3. Optical identity authentication technique based on compressive ghost imaging with QR code

    NASA Astrophysics Data System (ADS)

    Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang

    2018-04-01

    With the rapid development of computer technology, information security has attracted more and more attention. It is related not only to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. Authentication can be performed with a small number of samples; therefore, the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.

  4. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimization of the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation algorithm (BEP) to quickly obtain the initial weights, which are then used to speed up the training required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.

  5. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
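
    The record characterizes Fast Lossless only as adaptive-filter prediction followed by coding of the residuals; a generic sign-LMS predictor across the previous spectral bands conveys the flavor (the filter length, step size, and band-only context are illustrative assumptions):

        import numpy as np

        def adaptive_predict_residuals(bands, p=3, mu=1e-3):
            """bands: (n_bands, n_pixels) integer array. Predict each pixel of band b
            from the co-located pixels of the previous p bands; adapt the weights with
            sign-sign LMS; return the residuals to be entropy coded."""
            n_bands, n_pix = bands.shape
            w = np.zeros(p)
            residuals = np.array(bands, dtype=np.int64)  # first p bands stay raw
            for b in range(p, n_bands):
                for i in range(n_pix):
                    ctx = bands[b - p:b, i].astype(float)
                    e = int(bands[b, i]) - int(round(w @ ctx))
                    residuals[b, i] = e                  # small residuals code cheaply
                    w += mu * np.sign(e) * np.sign(ctx)  # sign-sign LMS update
            return residuals

    The same multiply-accumulate-and-update arithmetic maps naturally onto one-sample-per-clock hardware, which is why a pipelined FPGA datapath can sustain the throughput quoted above.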

  6. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.

  7. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  8. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. Here we implemented the methods using MATLAB 7.0. The methods used in this study gave good results and performance in compressing the speech, and listening tests showed that efficient and high-quality coding is achieved.
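
    A compact LBG codebook trainer by codeword splitting, consistent with the algorithm's usual description (the paper uses MATLAB; Python is used here to match the other sketches in this collection, and k should be a power of two):

        import numpy as np

        def lbg(vectors, k, eps=1e-2, iters=20):
            """Train a k-codeword codebook by repeated splitting plus Lloyd steps."""
            codebook = vectors.mean(axis=0, keepdims=True)
            while len(codebook) < k:
                # split every codeword into a slightly perturbed pair
                codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
                for _ in range(iters):                   # Lloyd refinement
                    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                    nearest = d2.argmin(axis=1)
                    for c in range(len(codebook)):
                        members = vectors[nearest == c]
                        if len(members):
                            codebook[c] = members.mean(axis=0)
            return codebook

    For speech, the vectors would be short frames of samples or LPC parameters; each frame is then transmitted as the index of its nearest codeword.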

  9. An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ting, Samuel T.

    The research presented in this work seeks to develop, validate, and deploy practical techniques for improving diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of the year 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide within a single exam a high-quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance, with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, or the capturing of movies of a single slice or volume within the heart or great vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution as well as the image quality of the reconstructed cine images. This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant to other dynamic and static imaging techniques based on cardiac magnetic resonance. Conventional segmented techniques for cardiac cine imaging require breath-holding as well as regular cardiac rhythm, and can be time-consuming to acquire. Inadequate breath-holding or irregular cardiac rhythm can result in completely non-diagnostic images, limiting the utility of these techniques in a significant patient population. Real-time single-shot cardiac cine imaging enables free-breathing acquisition with significantly shortened imaging time and promises to significantly improve the utility of cine imaging for diagnosis and evaluation of cardiovascular disease. However, the utility of real-time cine images depends heavily on the successful reconstruction of final cine images from undersampled data. Successful reconstruction from more highly undersampled data results directly in images exhibiting finer spatial and temporal resolution, provided that image quality is sufficient. This work focuses primarily on the development, validation, and deployment of practical techniques for enabling the reconstruction of real-time cardiac cine images at the spatial and temporal resolutions and image quality needed for diagnostic utility. Particular emphasis is placed on the development of reconstruction approaches with short computation times that can be used in the clinical environment.
Specifically, the use of compressed sensing signal recovery techniques is considered; such techniques show great promise in allowing successful reconstruction of highly undersampled data. The scope of this work concerns two primary topics related to signal recovery using compressed sensing: (1) the long reconstruction times of these techniques, and (2) improved sparsity models for signal recovery from more highly undersampled data. Both of these aspects are relevant to the practical application of compressed sensing techniques in the context of improving image reconstruction of real-time cardiac cine images. First, algorithmic and implementational approaches are proposed for reducing the computational time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient implementation directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. In this work, it is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques is provided from both a theoretical and a practical view, each of which may be of interest to the reader in terms of future work.
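
    In its textbook form, the FISTA variant referred to above solves min_x 0.5*||Ax - y||^2 + lam*||x||_1; a minimal dense-matrix sketch (in the MRI setting A would be an undersampled Fourier-coil operator applied implicitly, not an explicit array):

        import numpy as np

        def fista(A, y, lam, n_iter=200):
            """FISTA for l1-regularized least squares."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ z - y)         # gradient of the smooth data term
                u = z - grad / L
                x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
                x, t = x_new, t_new
            return x

    Each iteration costs two applications of the system operator plus a cheap shrinkage, which is exactly the per-iteration work the text proposes to accelerate on GPUs.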

  10. Application of Compressive Sensing to Gravitational Microlensing Data and Implications for Miniaturized Space Observatories

    NASA Technical Reports Server (NTRS)

    Korde-Patel, Asmita (Inventor); Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is a technique for simultaneous acquisition and compression of data that is sparse or can be made sparse in some domain. It is currently under intense development and has been profitably employed for industrial and medical applications. We here describe the use of this technique for the processing of astronomical data. We outline the procedure as applied to exoplanet gravitational microlensing and analyze measurement results and uncertainty values. We describe implications for on-spacecraft data processing for space observatories. Our findings suggest that application of these techniques may yield significant, enabling benefits especially for power and volume-limited space applications such as miniaturized or micro-constellation satellites.

  11. Compression of transmission bandwidth requirements for a certain class of band-limited functions.

    NASA Technical Reports Server (NTRS)

    Smith, I. R.; Schilling, D. L.

    1972-01-01

    A study of source-encoding techniques that afford a reduction of data-transmission rates is made with particular emphasis on the compression of transmission bandwidth requirements of band-limited functions. The feasibility of bandwidth compression through analog signal rooting is investigated. It is found that the N-th roots of elements of a certain class of entire functions of exponential type possess contour integrals resembling Fourier transforms, the Cauchy principal values of which are compactly supported on an interval one N-th the size of that of the original function. Exploring this theoretical result, it is found that synthetic roots can be generated, which closely approximate the N-th roots of a certain class of band-limited signals and possess spectra that are essentially confined to a bandwidth one N-th that of the signal subjected to the rooting operation. A source-encoding algorithm based on this principle is developed that allows the compression of data-transmission requirements for a certain class of band-limited signals.

  12. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    PubMed

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing methods, based on different tools, are involved in the mechanical performance evaluation of AIDs: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AIDs are first introduced. Then the present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device pushout tests, core pushout tests, subsidence tests, etc. is reviewed. The experimental techniques of the in vitro specimen testing method and the test results for available artificial discs are summarized. The experimental methods and research status of finite element analysis are also summarized. Finally, research trends in AID mechanical performance evaluation are forecast. The simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.

  13. Realizing Ultrafast Electron Pulse Self-Compression by Femtosecond Pulse Shaping Technique.

    PubMed

    Qi, Yingpeng; Pei, Minjie; Qi, Dalong; Yang, Yan; Jia, Tianqing; Zhang, Shian; Sun, Zhenrong

    2015-10-01

    The uncorrelated position and velocity distribution of the electron bunch at the photocathode, arising from the residual energy, greatly limits the transverse coherence length and the recompression ability. Here we first propose a femtosecond pulse-shaping method to realize electron pulse self-compression in an ultrafast electron diffraction system, based on a point-to-point space-charge model. A positively chirped femtosecond laser pulse can correspondingly create a positively chirped electron bunch at the photocathode (such as a metal-insulator heterojunction), and such a shaped electron pulse can self-compress in the subsequent propagation process. The greatest advantage of our proposed scheme is that no additional components are introduced into the ultrafast electron diffraction system, which therefore does not affect the electron bunch shape. More importantly, this scheme can break the limitation that an electron pulse produced via post-photocathode static compression schemes can be no shorter than the excitation laser pulse, owing to the uncorrelated position and velocity distribution of the initial electron bunch.

  14. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
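
    The interleaving mentioned above can be as simple as a rows-in, columns-out block permutation, which turns a burst of consecutive packet losses into isolated losses; a sketch with an assumed interleaver depth:

        def interleave(packets, depth):
            """Write row-wise into a depth x width block, read out column-wise."""
            width = -(-len(packets) // depth)            # ceiling division
            padded = packets + [None] * (depth * width - len(packets))
            rows = [padded[r * width:(r + 1) * width] for r in range(depth)]
            return [rows[r][c] for c in range(width) for r in range(depth)]

        def deinterleave(stream, depth):
            width = len(stream) // depth
            cols = [stream[c * depth:(c + 1) * depth] for c in range(width)]
            out = [cols[c][r] for r in range(depth) for c in range(width)]
            return [p for p in out if p is not None]     # drop the padding

    A burst that wipes out depth consecutive interleaved packets costs each original row at most one packet, which is close to the random-loss pattern the CS reconstruction tolerates.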

  15. Synthesis and viscoelastic characterization of microstructurally aligned Silk fibroin sponges.

    PubMed

    Panda, Debojyoti; Konar, Subhajit; Bajpai, Saumendra K; Arockiarajan, A

    2017-07-01

    Silk fibroin (SF) is a model candidate for use in tissue engineering and regenerative medicine owing to its biocompatible mechanochemical properties. Despite numerous advances made in the fabrication of various biomimetic substrates using SF, relatively few clinical applications have been designed, primarily due to the lack of a complete understanding of its constitutive properties. Here we fabricate microstructurally aligned SF sponges using the unidirectional freezing technique, wherein a novel solvent-processing technique involving acetic acid is employed, which obviates post-treatment of the sponges to induce their water stability. Subsequently, we quantify the anisotropic, viscoelastic response of the bulk SF sponge samples by performing a series of mechanical tests under uniaxial compression over a wide range of strain rates. Results of these uniaxial compression tests in the finite strain regime, through ramp strain and ramp-relaxation loading histories applied over two orders of magnitude in strain rate, show that microstructural anisotropy is directly manifested in the bulk viscoelastic solid-like response. Furthermore, the experiments reveal a high degree of volume compressibility of the sponges during deformation, and also evince their remarkable strain-recovery capacity under large compressive strains in strain-recovery tests. Finally, in order to predict the bulk viscoelastic material properties of the fabricated and pre-characterized SF sponges, a finite-strain-kinematics-based, nonlinear continuum model, developed within a thermodynamically consistent framework in a parallel investigation, was successfully employed to capture the viscoelastic solid-like, transversely isotropic, and compressible response of the sponges macroscopically. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A novel shape similarity based elastography system for prostate cancer assessment

    NASA Astrophysics Data System (ADS)

    Wang, Haisu; Mousavi, Seyed Reza; Samani, Abbas

    2012-03-01

    Prostate cancer is the second most common cancer among men worldwide and remains the second leading cancer-related cause of death in mature men. The disease can be cured if it is detected at an early stage, which implies that early detection is critical for a desirable treatment outcome. Conventional techniques for prostate cancer screening and detection, such as Digital Rectal Examination (DRE), Prostate-Specific Antigen (PSA) testing and Trans Rectal Ultra-Sonography (TRUS), are known to have low sensitivity and specificity. Elastography is an imaging technique that uses tissue stiffness as its contrast mechanism. As the association between the degree of prostate tissue stiffness alteration and its pathology is well established, elastography can potentially detect prostate cancer with a high degree of sensitivity and specificity. In this paper, we present a novel elastography technique which, unlike other elastography techniques, does not require a displacement data acquisition system. This technique requires the prostate's pre-compression and post-compression transrectal ultrasound images. The conceptual foundation of reconstructing the elastic moduli of the prostate's normal and pathological tissues is to determine these moduli such that the similarity between calculated and observed shape features of the post-compression prostate image is maximized. Results indicate that this technique is highly accurate and robust.

  17. Quantification of (1→4)-β-d-Galactans in Compression Wood Using an Immuno-Dot Assay

    PubMed Central

    Chavan, Ramesh R.; Fahey, Leona M.; Harris, Philip J.

    2015-01-01

    Compression wood is a type of reaction wood formed on the underside of softwood stems when they are tilted from the vertical and on the underside of branches. Its quantification is still a matter of some scientific debate. We developed a new technique that has the potential to do this based on the higher proportions of (1→4)-β-d-galactans that occur in tracheid cell walls of compression wood. Wood was milled, partially delignified, and the non-cellulosic polysaccharides, including the (1→4)-β-d-galactans, extracted with 6 M sodium hydroxide. After neutralizing, the solution was serially diluted, and the (1→4)-β-d-galactans determined by an immuno-dot assay using the monoclonal antibody LM5, which specifically recognizes this polysaccharide. Spots were quantified using a dilution series of a commercially available (1→4)-β-d-galactan from lupin seeds. Using this method, compression and opposite woods from radiata pine (Pinus radiata) were easily distinguished based on the amounts of (1→4)-β-d-galactans extracted. The non-cellulosic polysaccharides in the milled wood samples were also hydrolysed using 2 M trifluoroacetic acid followed by the separation and quantification of the released neutral monosaccharides by high performance anion exchange chromatography. This confirmed that the compression woods contained higher proportions of galactose-containing polysaccharides than the opposite woods. PMID:27135316

  18. Implementing and diagnosing magnetic flux compression on the Z pulsed power accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, Ryan D.; Bliss, David E.; Gomez, Matthew R.

    2015-11-01

    We report on the progress made to date for a Laboratory Directed Research and Development (LDRD) project aimed at diagnosing magnetic flux compression on the Z pulsed-power accelerator (0-20 MA in 100 ns). Each experiment consisted of an initially solid Be or Al liner (cylindrical tube), which was imploded using the Z accelerator's drive current. The imploding liner compresses a 10-T axial seed field, B_z(0), supplied by an independently driven Helmholtz coil pair. Assuming perfect flux conservation, the axial field amplification should be well described by B_z(t) = B_z(0) x [R(0)/R(t)]^2, where R is the liner's inner surface radius. With perfect flux conservation, B_z(t) and dB_z/dt values exceeding 10^4 T and 10^12 T/s, respectively, are expected. These large values, the diminishing liner volume, and the harsh environment on Z make it particularly challenging to measure these fields. We report on our latest efforts to do so using three primary techniques: (1) micro B-dot probes to measure the fringe fields associated with flux compression, (2) streaked visible Zeeman absorption spectroscopy, and (3) fiber-based Faraday rotation. We also mention two new techniques that make use of the neutron diagnostics suite on Z. These techniques were not developed under this LDRD, but they could influence how we prioritize our efforts to diagnose magnetic flux compression on Z in the future. The first technique is based on the yield ratio of secondary DT to primary DD reactions. The second technique makes use of the secondary DT neutron time-of-flight energy spectra. Both of these techniques have been used successfully to infer the degree of magnetization at stagnation in fully integrated Magnetized Liner Inertial Fusion (MagLIF) experiments on Z [P. F. Schmit et al., Phys. Rev. Lett. 113, 155004 (2014); P. F. Knapp et al., Phys. Plasmas 22, 056312 (2015)]. Finally, we present some recent developments for designing and fabricating novel micro B-dot probes to measure B_z(t) inside of an imploding liner. In one approach, the micro B-dot loops were fabricated on a printed circuit board (PCB). The PCB was then soldered to off-the-shelf 0.020-inch-diameter semi-rigid coaxial cables, which were terminated with standard SMA connectors. These probes were recently tested using the COBRA pulsed power generator (0-1 MA in 100 ns) at Cornell University. In another approach, we are planning to use new multi-material 3D printing capabilities to fabricate novel micro B-dot packages. In the near future, we plan to 3D print these probes and then test them on the COBRA generator. With successful operation demonstrated at 1 MA, we will then make plans to use these probes on a 20-MA Z experiment.
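
    As a quick numerical check of the ideal scaling B_z(t) = B_z(0)[R(0)/R(t)]^2 quoted above (the 10-T seed field is from the abstract; the convergence ratios are illustrative assumptions only):

      # Ideal flux-conservation scaling for an imploding liner.
      Bz0 = 10.0                               # seed field from the abstract, T
      for convergence in (2, 5, 10, 32):       # convergence = R(0) / R(t)
          Bz = Bz0 * convergence ** 2
          print(f"R0/R = {convergence:>2}: Bz = {Bz:8.0f} T")
      # A convergence ratio of ~32 already exceeds the 10^4 T quoted above.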

  19. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection, and save power in wireless transmission. Applying the method to the electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver: power consumption is reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
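
    A hedged sketch of the hybrid idea (coarse quantization stands in for the paper's lossy ECG codec, and zlib for its entropy coder; the toy signal and step size are assumptions): the lossy stream is small, and keeping the compressed residual allows exact restoration on demand.

      import numpy as np
      import zlib

      rng = np.random.default_rng(1)
      ecg = np.cumsum(rng.normal(0, 1, 5000)).astype(np.int16)   # toy waveform

      step = 16
      lossy = (ecg // step) * step             # coarse, lossy reconstruction
      residual = ecg - lossy                   # small-amplitude, low-entropy

      lossy_bytes = zlib.compress((ecg // step).astype(np.int16).tobytes())
      resid_bytes = zlib.compress(residual.astype(np.int8).tobytes())
      print("lossy payload:", len(lossy_bytes), "bytes;",
            "with residual (lossless):", len(lossy_bytes) + len(resid_bytes))

      assert np.array_equal(lossy + residual, ecg)   # lossless restoration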

  20. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
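
    For orientation, a hedged sketch of plain SVD coil compression (the paper's contribution goes further, compressing separately at each fully sampled spatial location and aligning the resulting virtual coils; the sizes and toy data below are assumptions):

      import numpy as np

      rng = np.random.default_rng(0)
      n_coils, n_virtual = 32, 6

      # Toy multicoil data: strongly correlated channels (low rank) plus noise.
      mixing = rng.normal(size=(n_coils, 8))
      data = mixing @ rng.normal(size=(8, 4096)) \
             + 0.01 * rng.normal(size=(n_coils, 4096))

      U, S, _ = np.linalg.svd(data, full_matrices=False)
      virtual = U[:, :n_virtual].conj().T @ data      # 6 virtual coils

      retained = (S[:n_virtual] ** 2).sum() / (S ** 2).sum()
      print(f"energy retained in {n_virtual} virtual coils: {retained:.4%}")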

  1. Textual data compression in computational biology: a synopsis.

    PubMed

    Giancarlo, Raffaele; Scaturro, Davide; Utro, Filippo

    2009-07-01

    Textual data compression, and the associated techniques coming from information theory, are often perceived as being of interest only for data communication and storage. However, they are also deeply related to classification and data mining and analysis. In recent years, a substantial effort has been made to apply textual data compression techniques to various computational biology tasks, ranging from storage and indexing of large datasets to comparison and reverse engineering of biological networks. The main focus of this review is a systematic presentation of the key areas of bioinformatics and computational biology where compression has been used. When possible, a unifying organization of the main ideas and techniques is also provided. It goes without saying that most of the research results reviewed here offer software prototypes to the bioinformatics community. The Supplementary Material provides pointers to software and benchmark datasets for a range of applications of broad interest. In addition to providing references to software, the Supplementary Material also gives a brief presentation of some fundamental results and techniques related to this paper. It is available at: http://www.math.unipa.it/~raffaele/suppMaterial/compReview/

  2. The dynamic micro computed tomography at SSRF

    NASA Astrophysics Data System (ADS)

    Chen, R.; Xu, L.; Du, G.; Deng, B.; Xie, H.; Xiao, T.

    2018-05-01

    Synchrotron radiation micro-computed tomography (SR-μCT) is a critical technique for quantitatively characterizing the 3D internal structure of samples. Recently, dynamic SR-μCT has attracted considerable attention, since it can follow the evolution of a sample's three-dimensional structure. A dynamic μCT method based on a monochromatic beam was developed at the X-ray Imaging and Biomedical Application Beamline at the Shanghai Synchrotron Radiation Facility by combining a compressed-sensing-based CT reconstruction algorithm with a hardware upgrade. The monochromatic-beam-based method yields quantitative information at a lower dose than the white-beam-based method, in which the lower-energy part of the beam is absorbed by the sample rather than contributing to the final imaging signal. The developed method was successfully used to investigate the compression of the air sac during respiration in a bell cricket, providing new knowledge for further research on the insect respiratory system.

  3. Characterization of particle deformation during compression measured by confocal laser scanning microscopy.

    PubMed

    Guo, H X; Heinämäki, J; Yliruusi, J

    1999-09-20

    Direct compression of riboflavin sodium phosphate tablets was studied by confocal laser scanning microscopy (CLSM). The technique is non-invasive and generates three-dimensional (3D) images. Tablets of 1% riboflavin sodium phosphate with two grades of microcrystalline cellulose (MCC) were individually compressed at compression forces of 1.0 and 26.8 kN. The behaviour and deformation of drug particles on the upper and lower surfaces of the tablets were studied under compression forces. Even at the lower compression force, distinct recrystallized areas in the riboflavin sodium phosphate particles were observed in both Avicel PH-101 and Avicel PH-102 tablets. At the higher compression force, the recrystallization of riboflavin sodium phosphate was more extensive on the upper surface of the Avicel PH-102 tablet than the Avicel PH-101 tablet. The plastic deformation properties of both MCC grades reduced the fragmentation of riboflavin sodium phosphate particles. When compressed with MCC, riboflavin sodium phosphate behaved as a plastic material. The riboflavin sodium phosphate particles were more tightly bound on the upper surface of the tablet than on the lower surface, and this could also be clearly distinguished by CLSM. Drug deformation could not be visualized by other techniques. Confocal laser scanning microscopy provides valuable information on the internal mechanisms of direct compression of tablets.

  4. A closed-loop compressive-sensing-based neural recording system.

    PubMed

    Zhang, Jie; Mitra, Srinjoy; Suo, Yuanming; Cheng, Andrew; Xiong, Tao; Michon, Frederic; Welkenhuysen, Marleen; Kloosterman, Fabian; Chin, Peter S; Hsiao, Steven; Tran, Trac D; Yazicioglu, Firat; Etienne-Cummings, Ralph

    2015-06-01

    This paper describes a low-power closed-loop compressive sensing (CS) based neural recording system. This system provides an efficient method to reduce the data transmission bandwidth of implantable neural recording devices, and in doing so it reduces the major share of system power consumption, which is dissipated at the data readout interface. The design of the system is scalable and is a viable option for large-scale integration of electrodes or recording sites onto a single device. The entire system consists of an application-specific integrated circuit (ASIC) with 4 recording readout channels with CS circuits, a real-time off-chip CS recovery block, and a recovery quality evaluation block that provides closed feedback to adaptively adjust the compression rate. Since CS performance is strongly signal dependent, the ASIC has been tested in vivo and with standard public neural databases. Implemented using efficient digital circuits, this system is able to achieve >10 times data compression on the entire neural spike band (500 Hz-6 kHz) while consuming only 0.83 μW (0.53 V voltage supply) of additional digital power per electrode. When only the spikes are desired, the system is able to further compress the detected spikes by around 16 times. Unlike other similar systems, the characteristic spikes and inter-spike data can both be recovered, which guarantees a >95% spike classification success rate. The compression circuit occupies 0.11 mm²/electrode in a 180 nm CMOS process. The complete signal processing circuit consumes <16 μW/electrode. The power and area efficiency demonstrated by the system make it an ideal candidate for integration into large recording arrays containing thousands of electrodes. Closed-loop recording and reconstruction performance evaluation further improves the robustness of the compression method, thus making the system more practical for long-term recording.

  5. Comparison of Fit of Dentures Fabricated by Traditional Techniques Versus CAD/CAM Technology.

    PubMed

    McLaughlin, J Bryan; Ramos, Van; Dickinson, Douglas P

    2017-11-14

    To compare the shrinkage of denture bases fabricated by three methods: CAD/CAM, compression molding, and injection molding. The effect of arch form and palate depth was also tested. Nine titanium casts, representing combinations of tapered, ovoid, and square arch forms and shallow, medium, and deep palate depths, were fabricated using electron beam melting (EBM) technology. For each base fabrication method, three poly(vinyl siloxane) impressions were made from each cast, 27 dentures for each method. Compression-molded dentures were fabricated using Lucitone 199 poly methyl methacrylate (PMMA), and injection molded dentures with Ivobase's Hybrid Pink PMMA. For CAD/CAM, denture bases were designed and milled by Avadent using their Light PMMA. To quantify the space between the denture and the master cast, silicone duplicating material was placed in the intaglio of the dentures, the titanium master cast was seated under pressure, and the silicone was then trimmed and recovered. Three silicone measurements per denture were recorded, for a total of 243 measurements. Each silicone measurement was weighed and adjusted to the surface area of the respective arch, giving an average and standard deviation for each denture. Comparison of manufacturing methods showed a statistically significant difference (p = 0.0001). Using a ratio of the means, compression molding had on average 41% to 47% more space than injection molding and CAD/CAM. Comparison of arch/palate forms showed a statistically significant difference (p = 0.023), with shallow palate forms having more space with compression molding. The ovoid shallow form showed CAD/CAM and compression molding had more space than injection molding. Overall, injection molding and CAD/CAM fabrication methods produced equally well-fitting dentures, with both having a better fit than compression molding. Shallow palates appear to be more affected by shrinkage than medium or deep palates. Shallow ovoid arch forms appear to benefit from the use of injection molding compared to CAD/CAM and compression molding. © 2017 by the American College of Prosthodontists.

  6. Pulse compression favourable aperiodic infrared imaging approach for non-destructive testing and evaluation of bio-materials

    NASA Astrophysics Data System (ADS)

    Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath

    2017-05-01

    In recent years, aperiodic, transient pulse-compression-favourable infrared imaging methodologies have been demonstrated as reliable, quantitative, remote characterization and evaluation techniques for the testing and evaluation of various biomaterials. The present work demonstrates a pulse-compression-favourable aperiodic thermal wave imaging technique, frequency-modulated thermal wave imaging, for bone diagnostics, especially considering bone with tissue, skin and muscle overlayers. In order to assess the capability of the proposed frequency-modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure, finite element modeling and simulation studies have been carried out. Further, frequency- and time-domain post-processing approaches have been applied to the temporal temperature data in order to improve the detection capabilities of frequency-modulated thermal wave imaging.
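
    The pulse-compression principle itself can be illustrated in a few lines (a hedged sketch; the sample rate, sweep band, delay, and noise level are illustrative, not the paper's imaging parameters): cross-correlating a noisy response with the frequency-modulated excitation concentrates the energy into a narrow, easily detected peak.

      import numpy as np

      fs, T = 100.0, 10.0                     # sample rate (Hz), sweep length (s)
      t = np.arange(0, T, 1 / fs)
      f0, f1 = 0.1, 2.0                       # linear sweep band, Hz
      chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * T)))

      d = int(2.0 * fs)                       # response delayed by 2 s
      rng = np.random.default_rng(2)
      response = np.concatenate([np.zeros(d), chirp])[: t.size]
      response += 0.5 * rng.normal(size=t.size)

      xc = np.correlate(response, chirp, mode="full")   # pulse compression
      lag = np.argmax(xc) - (chirp.size - 1)
      print(f"estimated delay: {lag / fs:.2f} s")       # ~2.00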

  7. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different-size data files are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
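
    As a concrete reminder of how the Lempel-Ziv family adapts its dictionary on the fly (a minimal LZW-style sketch, not the exact algorithms benchmarked in the paper):

      def lzw_encode(data: bytes) -> list:
          """Dictionary-based universal coding: the table grows adaptively."""
          table = {bytes([i]): i for i in range(256)}
          out, phrase = [], b""
          for byte in data:
              candidate = phrase + bytes([byte])
              if candidate in table:
                  phrase = candidate
              else:
                  out.append(table[phrase])
                  table[candidate] = len(table)   # learn the new phrase
                  phrase = bytes([byte])
          if phrase:
              out.append(table[phrase])
          return out

      text = b"abracadabra abracadabra abracadabra"
      print(len(text), "bytes ->", len(lzw_encode(text)), "codes")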

  8. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression (SVR) to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.

  9. A constrained modulus reconstruction technique for breast cancer assessment.

    PubMed

    Samani, A; Bishop, J; Plewes, D B

    2001-09-01

    A reconstruction technique for the breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image. Furthermore, it is assumed that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative: each iteration involves modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be applied by two rigid compression plates. As a result, stress is calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element by element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.

  10. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected differences among them could be used for further analysis to estimate the unknown JPEG2000 compression rate.
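
    The first-digit test itself is compact (a hedged sketch: Laplacian noise stands in for DWT coefficients, and the chi-square-style divergence below is one plausible choice; the paper's exact divergence factor may be defined differently):

      import numpy as np

      def first_digit_probs(values):
          v = np.abs(np.asarray(values, dtype=float))
          v = v[v > 0]
          digits = (v / 10 ** np.floor(np.log10(v))).astype(int)
          return np.array([(digits == d).mean() for d in range(1, 10)])

      benford = np.log10(1 + 1 / np.arange(1, 10))    # P(d) = log10(1 + 1/d)

      rng = np.random.default_rng(3)
      coeffs = rng.laplace(0, 20, 100_000)            # stand-in for DWT coefficients
      p = first_digit_probs(coeffs)

      divergence = np.sum((p - benford) ** 2 / benford)
      print(np.round(p, 4), f"divergence = {divergence:.4f}")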

  11. Improved JPEG anti-forensics with better image visual quality and forensic undetectability.

    PubMed

    Singh, Gurinder; Singh, Kulbir

    2017-08-01

    There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of de-noising algorithms are proposed: one is based on a constrained minimization of the total-variation energy and the other on a normalized weighted function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but at a high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Distributed Transforms for Efficient Data Gathering in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Ortega, Antonio (Inventor); Shen, Godwin (Inventor); Narang, Sunil K. (Inventor); Perez-Trufero, Javier (Inventor)

    2014-01-01

    Devices, systems, and techniques for data collecting network such as wireless sensors are disclosed. A described technique includes detecting one or more remote nodes included in the wireless sensor network using a local power level that controls a radio range of the local node. The technique includes transmitting a local outdegree. The local outdegree can be based on a quantity of the one or more remote nodes. The technique includes receiving one or more remote outdegrees from the one or more remote nodes. The technique includes determining a local node type of the local node based on detecting a node type of the one or more remote nodes, using the one or more remote outdegrees, and using the local outdegree. The technique includes adjusting characteristics, including an energy usage characteristic and a data compression characteristic, of the wireless sensor network by selectively modifying the local power level and selectively changing the local node type.

  13. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven, with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications, as requirements for perfect quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground), rather than vice versa. The energy compaction of the new transforms is compared with that of the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
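
    The energy-compaction comparison at the heart of the paper is easy to reproduce for one transform (a hedged sketch using an order-8 Walsh-Hadamard transform on a smooth block; the test signal is an assumption):

      import numpy as np

      def hadamard(n):
          """Build an n x n Hadamard matrix by the Sylvester construction."""
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      H = hadamard(8) / np.sqrt(8)            # orthonormal WHT
      x = np.linspace(1.0, 2.0, 8)            # smooth, highly correlated block
      X = H @ x

      energy = np.cumsum(np.sort(X ** 2)[::-1]) / np.sum(X ** 2)
      print(np.round(energy, 4))              # nearly all energy in one coefficient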

  14. Multivariable control of vapor compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X.D.; Liu, S.; Asada, H.H.

    1999-07-01

    This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.

  15. Mammogram registration: a phantom-based evaluation of compressed breast thickness variation effects.

    PubMed

    Richard, Frédéric J P; Bakić, Predrag R; Maidment, Andrew D A

    2006-02-01

    The temporal comparison of mammograms is complex; a wide variety of factors can cause changes in image appearance. Mammogram registration is proposed as a method to reduce the effects of these changes and potentially to emphasize genuine alterations in breast tissue. Evaluation of such registration techniques is difficult since ground truth regarding breast deformations is not available in clinical mammograms. In this paper, we propose a systematic approach to evaluate sensitivity of registration methods to various types of changes in mammograms using synthetic breast images with known deformations. As a first step, images of the same simulated breasts with various amounts of simulated physical compression have been used to evaluate a previously described nonrigid mammogram registration technique. Registration performance is measured by calculating the average displacement error over a set of evaluation points identified in mammogram pairs. Applying appropriate thickness compensation and using a preferred order of the registered images, we obtained an average displacement error of 1.6 mm for mammograms with compression differences of 1-3 cm. The proposed methodology is applicable to analysis of other sources of mammogram differences and can be extended to the registration of multimodality breast data.

  16. Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data

    NASA Technical Reports Server (NTRS)

    Bose, Tamal

    2000-01-01

    A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.

  17. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as transonic small disturbance (TSD) analyses, transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for accelerating a Newton-iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate that this will provide up to a 65 percent reduction in computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
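
    For readers unfamiliar with GMRES, the linear solve it would accelerate inside each Newton step looks schematically like the following (a hedged sketch; the banded matrix is a stand-in for a flow Jacobian, not an actual Navier-Stokes operator):

      import numpy as np
      from scipy.sparse.linalg import gmres

      n = 200   # toy nonsymmetric system standing in for a Newton-step Jacobian
      A = (np.diag(3.0 * np.ones(n))
           + np.diag(-1.1 * np.ones(n - 1), -1)
           + np.diag(-0.9 * np.ones(n - 1), 1))
      b = np.ones(n)

      x, info = gmres(A, b, restart=30)
      print("converged" if info == 0 else f"info={info}",
            "residual:", np.linalg.norm(A @ x - b))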

  18. Suggested techniques, equipment, and standards for the testing of hand insecticide-spraying equipment

    PubMed Central

    Hall, Lawrence B.

    1955-01-01

    The new demands placed upon application equipment by the introduction of modern insecticides have revealed the deficiencies of this equipment when required for continuous use on a large scale. If adequate equipment is to be produced, specifications must be based not only on basic materials tests but also on “use” tests, in which the conditions of field use are simulated. The author outlines suggested techniques to be followed and standards to be adopted in testing the performance of compression sprayers and allied equipment, with reference to the following features: compression-sprayer tank fatigue; tank impact; pump resistance to bursting; pump resistance to collapse; pump friction; cut-off valve durability; constant-pressure valves; cut-off valve actuation; hose flexure; hose tension and bursting-pressure; hose friction; gaskets, valve faces, and similar non-metallic parts; nozzle-orifice erosion; and nozzle pattern. PMID:14364189

  19. Astronomy. Laser telemetry from space.

    PubMed

    Bland-Hawthorn, Joss; Harwit, Alex; Harwit, Martin

    2002-07-26

    Space missions currently on the drawing boards are expected to gather data at rates exceeding the transmission capabilities of today's telemetry systems by many orders of magnitude. Even on current missions, onboard data compression techniques are being implemented to compensate for lack of transmission speed. But while data compression can minimize the loss of data, it is no substitute for transmitting all of the data through a faster communications link. The transmission problem will soon reach crisis proportions and will affect astronomical, Earth resources, geophysical, meteorological, planetary and other space science missions. To overcome this communications bottleneck, the authors advocate the implementation of telemetry systems based on near-infrared laser transmission techniques. The fiber-optics communications industry has developed most of the basic components required for signal transmission in this wavelength band, which should make such a system affordable on scales relevant to the cost of anticipated space science missions.

  20. SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Rueff; Lyle Roybal; Denis Vollmer

    2013-01-01

    There is a significant need to protect the nation’s energy infrastructures from malicious actors using cyber methods. Supervisory, Control, and Data Acquisition (SCADA) systems may be vulnerable due to the insufficient security implemented during the design and deployment of these control systems. This is particularly true in older legacy SCADA systems that are still commonly in use. The purpose of INL’s research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate “non-normal” traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level prior to completing its payload. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identify malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein are clearly able to differentiate normal from malicious network traffic at the packet level at a very high confidence level for the conditions tested. Additionally, the master dictionary approach used in this research appears to initially provide a meaningful way to categorize and compare packets within a communication channel.
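
    A hedged sketch of the master-dictionary idea (zlib's preset-dictionary mode stands in for the project's compression engine; the packets, dictionary, and 0.8 threshold are illustrative assumptions): packets resembling normal traffic compress well against a dictionary built from normal traffic, while anomalous payloads do not.

      import zlib

      def compressed_size(payload: bytes, dictionary: bytes) -> int:
          c = zlib.compressobj(level=9, zdict=dictionary)
          return len(c.compress(payload) + c.flush())

      normal = b"READ_COILS addr=0102 count=8;" * 40   # toy "normal" traffic
      dictionary = normal[-2048:]                      # master-dictionary stand-in

      for pkt in (b"READ_COILS addr=0102 count=8;",
                  b"\x90\x90\x90 exec /bin/sh \x90\x90"):
          ratio = compressed_size(pkt, dictionary) / len(pkt)
          print(f"{ratio:.2f}", "ANOMALY" if ratio > 0.8 else "normal")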

  1. Wavelet-based higher-order neural networks for mine detection in thermal IR imagery

    NASA Astrophysics Data System (ADS)

    Baertlein, Brian A.; Liao, Wen-Jiao

    2000-08-01

    An image processing technique is described for the detection of mines in thermal IR imagery. The proposed technique is based on a third-order neural network, which processes the output of a wavelet packet transform. The technique is inherently invariant to changes in signature position, rotation and scaling. The well-known memory limitations that arise with higher-order neural networks are addressed by (1) the data compression capabilities of wavelet packets, (2) projections of the image data into a space of similar triangles, and (3) quantization of that 'triangle space'. Using these techniques, image chips of size 28 by 28, which would require O(10^9) neural net weights, are processed by a network having O(10^2) weights. ROC curves are presented for mine detection in real and simulated imagery.

  2. A Randomized Control Trial of Cardiopulmonary Feedback Devices and Their Impact on Infant Chest Compression Quality: A Simulation Study.

    PubMed

    Austin, Andrea L; Spalding, Carmen N; Landa, Katrina N; Myer, Brian R; Donald, Cure; Smith, Jason E; Platt, Gerald; King, Heather C

    2017-10-27

    In effort to improve chest compression quality among health care providers, numerous feedback devices have been developed. Few studies, however, have focused on the use of cardiopulmonary resuscitation feedback devices for infants and children. This study evaluated the quality of chest compressions with standard team-leader coaching, a metronome (MetroTimer by ONYX Apps), and visual feedback (SkillGuide Cardiopulmonary Feedback Device) during simulated infant cardiopulmonary resuscitation. Seventy voluntary health care providers who had recently completed Pediatric Advanced Life Support or Basic Life Support courses were randomized to perform simulated infant cardiopulmonary resuscitation into 1 of 3 groups: team-leader coaching alone (control), coaching plus metronome, or coaching plus SkillGuide for 2 minutes continuously. Rate, depth, and frequency of complete recoil during cardiopulmonary resuscitation were recorded by the Laerdal SimPad device for each participant. American Heart Association-approved compression techniques were randomized to either 2-finger or encircling thumbs. The metronome was associated with more ideal compression rate than visual feedback or coaching alone (104/min vs 112/min and 113/min; P = 0.003, 0.019). Visual feedback was associated with more ideal depth than auditory (41 mm vs 38.9; P = 0.03). There were no significant differences in complete recoil between groups. Secondary outcomes of compression technique revealed a difference of 1 mm. Subgroup analysis of male versus female showed no difference in mean number of compressions (221.76 vs 219.79; P = 0.72), mean compression depth (40.47 vs 39.25; P = 0.09), or rate of complete release (70.27% vs 64.96%; P = 0.54). In the adult literature, feedback devices often show an increase in quality of chest compressions. Although more studies are needed, this study did not demonstrate a clinically significant improvement in chest compressions with the addition of a metronome or visual feedback device, no clinically significant difference in Pediatric Advanced Life Support-approved compression technique, and no difference between compression quality between genders.

  3. QRFXFreeze: Queryable Compressor for RFX.

    PubMed

    Senthilkumar, Radha; Nandagopal, Gomathi; Ronald, Daphne

    2015-01-01

    The verbose nature of XML has been mulled over again and again, and many compression techniques for XML data have been excogitated over the years. Some of the techniques incorporate support for querying the XML database in its compressed format, while others have to be decompressed before they can be queried. XML compressors that support direct, instantaneous querying with no compromise in time are forced to compromise in space. In this paper, we propose the compressor QRFXFreeze, which not only reduces the storage space but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.

  4. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
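
    For reference, the Huffman construction the study uses as its compression baseline fits in a few lines (a minimal sketch; the sample sentence is an assumption, and a real codec would also serialize the code table):

      import heapq
      from collections import Counter

      def huffman_code(text: str) -> dict:
          # Heap entries: (count, unique tiebreaker, {char: code-so-far}).
          heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(Counter(text).items())]
          heapq.heapify(heap)
          i = len(heap)
          while len(heap) > 1:
              n1, _, c1 = heapq.heappop(heap)
              n2, _, c2 = heapq.heappop(heap)
              merged = {ch: "0" + code for ch, code in c1.items()}
              merged.update({ch: "1" + code for ch, code in c2.items()})
              heapq.heappush(heap, (n1 + n2, i, merged))
              i += 1
          return heap[0][2]

      text = "narrative files use a 58-character set"
      code = huffman_code(text)
      bits = sum(len(code[ch]) for ch in text)
      print(f"{bits / len(text):.2f} bits per character vs 8 fixed")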

  5. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.

  6. Gain compression and its dependence on output power in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.

    2013-06-01

    The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique to a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the framework of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, which is in good agreement with the experimental observations.
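
    For context, a standard phenomenological description of gain compression (a textbook form, not necessarily the exact model used in this paper) writes the modal gain at photon density $S$ with compression coefficient $\epsilon$ as

      g(S) = \frac{g_0}{1 + \epsilon S},

    so the reported power dependence amounts to $\epsilon$ itself varying with output power rather than being a device constant.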

  7. Air-propelled abrasive grit for postemergence in-row weed control in field corn

    USDA-ARS?s Scientific Manuscript database

    Organic growers need additional tools for weed control. A new technique involving abrasive grit propelled by compressed air was tested in field plots. Grit derived from corn cobs was directed at seedlings of summer annual weeds growing at the bases of corn plants when the corn was at differing early...

  8. Compact storage of medical images with patient information.

    PubMed

    Acharya, R; Anand, D; Bhat, S; Niranjan, U C

    2001-12-01

    Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with the images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code modulation and adaptive delta modulation techniques are employed for data compression and encryption, and the results are tabulated for a specific example.
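
    The DPCM stage is worth a tiny illustration (a hedged sketch on a toy waveform; real ECG data and the paper's exact coder are not reproduced here): successive differences have far lower entropy than the raw samples, and the transform is exactly invertible.

      import numpy as np

      rng = np.random.default_rng(4)
      signal = np.cumsum(rng.integers(-2, 3, 1000)).astype(np.int16)

      diff = np.diff(signal, prepend=0)       # DPCM residual

      def entropy(x):
          _, counts = np.unique(x, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      print(f"raw: {entropy(signal):.2f} bits/sample,"
            f" DPCM residual: {entropy(diff):.2f} bits/sample")
      assert np.array_equal(np.cumsum(diff), signal)   # lossless inverse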

  9. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  10. Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.

    PubMed

    Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran

    2017-05-25

    Ultrasound tomography is a well-known diagnostic imaging modality that was quickly developed for the detection of very small tumors, whose sizes are smaller than the wavelength of the incident pressure wave, without the ionizing radiation of the current gold-standard X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for a high-quality reconstruction is to obtain many measurements from the available transmitters and receivers. Given the fact that biomedical images are often sparse, the compressed sensing (CS) technique can therefore be effectively applied to ultrasound tomography by reducing the number of transmitters and receivers, while maintaining a high quality of image reconstruction. Several existing works on CS dispose randomly distributed locations for the measurement system. However, this random configuration is relatively difficult to implement in practice. Instead, we should adopt a methodology that helps determine the locations of measurement devices in a deterministic way. For this, we develop the novel DCS-DBIM algorithm that is highly applicable in practice. It is inspired by the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction process implemented using l1 regularization. Simulation results of the proposed approach have demonstrated its high performance: with the normalized error reduced by approximately 90% compared to the conventional approach, the new approach can save half the number of measurements and uses only two iterations. The universal image quality index is also evaluated in order to prove the efficiency of the proposed approach. Numerical simulation results indicate that CS and DCS techniques offer equivalent image reconstruction quality, with DCS having a simpler practical implementation. It would be a very promising approach in practical applications of modern biomedical imaging technology.

  11. Biochemical Imaging of Gliomas Using MR Spectroscopic Imaging for Radiotherapy Treatment Planning

    NASA Astrophysics Data System (ADS)

    Heikal, Amr Ahmed

    This thesis discusses the main obstacles facing wide clinical implementation of magnetic resonance spectroscopic imaging (MRSI) as a tumor delineation tool for radiotherapy treatment planning, particularly for gliomas. These main obstacles are identified as (1) observer bias and poor interpretational reproducibility of the results of MRSI scans, and (2) the long scan times required to conduct MRSI scans. An examination of an existing user-independent MRSI tumor delineation technique known as the choline-to-NAA index (CNI) is conducted to assess its utility as a tool for reproducible interpretation of MRSI results. While working with spatial resolutions typically twice those on which the CNI model was originally designed, a region of statistical uncertainty was discovered between the tumor and normal tissue populations, and as such a modification to the CNI model was introduced to clearly identify that region. To address the issue of long scan times, a series of studies was conducted to adapt a scan acceleration technique, compressed sensing (CS), to work with MRSI and to quantify the effects of such a novel technique on the modulation transfer function (MTF), an important quantitative imaging metric. The studies included the development of the first phantom-based method of measuring the MTF for MRSI data, a study of the correlation between the k-space sampling patterns used for compressed sensing and the resulting MTFs, and the introduction of a technique circumventing some of the side-effects of compressed sensing by exploiting the conjugate symmetry property of k-space. The work in this thesis provides two essential steps towards wide clinical implementation of MRSI-based tumor delineation. The proposed modifications to the CNI method, coupled with the application of CS to MRSI, address the two main obstacles outlined. However, there continues to be room for improvement and questions that need to be answered by future research.

  12. Polydimethylsiloxane pressure sensors for force analysis in tension band wiring of the olecranon.

    PubMed

    Zens, Martin; Goldschmidtboeing, Frank; Wagner, Ferdinand; Reising, Kilian; Südkamp, Norbert P; Woias, Peter

    2016-11-14

    Several different surgical techniques are used in the treatment of olecranon fractures. Tension band wiring is one of the options most preferred by surgeons worldwide. The concept of this technique is to transform a tensile force into a compression force that presses together the two surfaces of a fractured bone. Currently, little is known about the resulting compression force within a fracture. Sensor devices are needed that directly transduce the compression force into a measurable quantity. This allows the comparison of different surgical techniques. Ideally, the sensor devices ought to be placed in the gap between the fractured segments. The design, development and characterization of miniaturized pressure sensors fabricated entirely from polydimethylsiloxane (PDMS) for placement within a fracture is presented. The pressure sensors presented in this work are tested, calibrated and used in an experimental in vitro study. The pressure sensors are highly sensitive, with an accuracy of approximately 3 kPa. A flexible fabrication process for various possible applications is described. The first in vitro study shows that using a single-twist or double-twist technique in tension band wiring of the olecranon has no significant effect on the resulting compression forces. The in vitro study shows the feasibility of the proposed measurement technique and the results of a first exemplary study.

  13. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain, even at the cost of great complexity; at least not with traditional approaches to lossless image compression. However, if inter-band decorrelation and modeling are allowed in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.

  14. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens generated by Alice serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is as high as 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
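
    The core of computational ghost imaging is that the transmitted signal is only a sequence of bucket-detector values, one per random pattern; without the patterns (the key), those values reveal nothing, while a key holder can reconstruct the scene by correlation. A minimal numpy sketch using correlation-based GI recovery (the paper's scheme additionally uses CS reconstruction and QR encoding/decoding, which are omitted here):

        import numpy as np

        rng = np.random.default_rng(42)
        n, m = 16, 3000                  # image side length, number of patterns
        secret = np.zeros((n, n))
        secret[4:12, 6:10] = 1.0         # stand-in for a QR-coded image

        patterns = rng.random((m, n * n))      # the key: m random patterns
        bucket = patterns @ secret.ravel()     # encrypted signal: bucket values

        # Authorized decryption: correlation-based GI reconstruction with the key.
        recon = (bucket - bucket.mean()) @ (patterns - patterns.mean(0)) / m
        recon = recon.reshape(n, n)
        print("correlation with secret:",
              round(float(np.corrcoef(recon.ravel(), secret.ravel())[0, 1]), 3))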

  15. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for the detection of breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
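
    ART solves the linear system of projection equations by sweeping through the measurements and projecting the current estimate onto one equation at a time (Kaczmarz iteration), which is why it copes well with the underdetermined, limited-angle geometry of tomosynthesis. A toy numpy sketch with a random system standing in for the projection matrix (illustrative only; real DBT operators are sparse and enormous):

        import numpy as np

        def art(A, b, iters=50, relax=0.5):
            # Kaczmarz sweeps: project the estimate onto one row equation
            # at a time, with an under-relaxation factor for stability.
            x = np.zeros(A.shape[1])
            row_norm2 = (A ** 2).sum(axis=1)
            for _ in range(iters):
                for i in range(A.shape[0]):
                    x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
            return x

        rng = np.random.default_rng(0)
        x_true = rng.random(32)           # toy "phantom"
        A = rng.random((24, 32))          # underdetermined projection matrix
        b = A @ x_true                    # noiseless projection data
        x_hat = art(A, b)
        print("projection residual:", round(float(np.linalg.norm(A @ x_hat - b)), 6))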

  16. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for the detection of breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values. PMID:24371468

  17. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures

    PubMed Central

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-01-01

    Aims and Objectives: The objective of the present study was to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, who were divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to a one-way ANOVA test, and results with significant variations were subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was reported to be higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was reported to be greater with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Conclusions: Within the limitations of this study, the injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems. PMID:28713763

  18. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures.

    PubMed

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-06-01

    The objective of the present study was to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, who were divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Data obtained from the three groups were subjected to a one-way ANOVA test, and results with significant variations were subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was reported to be higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was reported to be greater with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Within the limitations of this study, the injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems.

  19. Mismatch and resolution in compressive imaging

    NASA Astrophysics Data System (ADS)

    Fannjiang, Albert; Liao, Wenjing

    2011-09-01

    Highly coherent sensing matrices arise in the discretization of continuum problems, such as radar and medical imaging, when the grid spacing is below the Rayleigh threshold, as well as when highly coherent, redundant dictionaries are used as sparsifying operators. Algorithms (BOMP, BLOOMP) based on the techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
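
    Band exclusion modifies the greedy selection step of OMP: once an atom is chosen, its near neighbors in the coherence-ordered dictionary are barred from selection, so closely spaced, mutually coherent atoms cannot crowd the support. A compact numpy sketch of this idea on an oversampled cosine dictionary (a simplified reading of BOMP; the published algorithm also includes local optimization, omitted here):

        import numpy as np

        def bomp(A, y, k, band=2):
            # Greedy selection that zeroes out correlations within `band`
            # indices of already-chosen atoms (a 1D coherence band).
            residual, support = y.copy(), []
            for _ in range(k):
                corr = np.abs(A.T @ residual)
                for s in support:
                    corr[max(0, s - band):s + band + 1] = 0.0
                support.append(int(np.argmax(corr)))
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_s
            x = np.zeros(A.shape[1])
            x[support] = x_s
            return x

        # Oversampled cosine frame: adjacent columns are highly coherent,
        # mimicking a grid spaced below the Rayleigh threshold.
        m, n = 64, 256
        t = np.arange(m)
        A = np.cos(2 * np.pi * np.outer(t, np.arange(n)) / n)
        A /= np.linalg.norm(A, axis=0)
        y = 2.0 * A[:, 40] - 1.5 * A[:, 120]   # two widely separated objects
        print(np.nonzero(bomp(A, y, k=2))[0])  # should recover indices 40 and 120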

  20. Application of nonlinear pulse shaping of femtosecond pulse generation in a fiber amplifier at 500 MHz repetition rate

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Luo, Daping; Wang, Chao; Zhu, Zhiwei; Li, Wenxue

    2018-03-01

    We numerically and experimentally demonstrate that a nonlinear pulse shaping technique based on pre-chirp management in a short gain fiber can be exploited to improve the quality of a compressed pulse. With prior tuning of the pulse chirp, the amplified pulse undergoes different nonlinear propagation processes. A spectrum with a flat top and smoother wings, showing a similariton feature, is generated at the optimal initial pulse chirp, and the shortest pulses with minimal pulse pedestals are obtained. Experimental results show the ability of nonlinear pulse shaping to enhance the quality of compressed pulses, as theoretically expected.

  1. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
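
    The computational payoff of a tensor-product basis is sum factorization: a Kronecker-product operator is applied one dimension at a time with 1D matrices, avoiding the dense matrix the preconditioner would otherwise require. A small numpy sketch of the identity that fast-diagonalization-style preconditioners exploit (a generic illustration, not the authors' solver):

        import numpy as np

        # Apply (Az ⊗ Ay ⊗ Ax) to a p x p x p coefficient array one axis at
        # a time (sum factorization) instead of forming the dense p^3 x p^3
        # matrix -- the core trick behind tensor-product preconditioners.
        p = 8
        rng = np.random.default_rng(0)
        Ax, Ay, Az = (rng.random((p, p)) for _ in range(3))
        u = rng.random((p, p, p))

        v = np.einsum('ck,ijk->ijc', Ax, u)   # Ax along the last axis
        v = np.einsum('bj,ijc->ibc', Ay, v)   # Ay along the middle axis
        v = np.einsum('ai,ibc->abc', Az, v)   # Az along the first axis

        # Reference: dense Kronecker product, O(p^6) storage and work.
        dense = np.kron(np.kron(Az, Ay), Ax)
        v_ref = (dense @ u.ravel()).reshape(p, p, p)
        print(np.allclose(v, v_ref))          # True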

  2. Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  3. ElGamal cryptosystem with embedded compression-crypto technique

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Yin, Lee Souk; Hung, Chang Ee; Hussin, Che Haziqah Che

    2014-12-01

    The key distribution problem in symmetric cryptography has been solved by the emergence of asymmetric cryptosystems. Due to their mathematical complexity, computational efficiency becomes a major problem in real-life applications of asymmetric cryptosystems. This has encouraged various research efforts on enhancing the computational efficiency of asymmetric cryptosystems. The ElGamal cryptosystem is one of the most established asymmetric cryptosystems. With proper parameters, the ElGamal cryptosystem is able to provide a good level of information security. The Compression-Crypto technique, on the other hand, is a technique used to reduce the number of plaintexts to be encrypted from k ∈ Z+, k > 2, to only 2 plaintexts: instead of encrypting k plaintexts, we only need to encrypt these 2. In this paper, we embed the Compression-Crypto technique into the ElGamal cryptosystem. To show that the embedded ElGamal cryptosystem works, we provide proofs for the decryption processes that recover the encrypted plaintext.
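
    For reference, the underlying ElGamal operations are compact enough to sketch in a few lines. A textbook implementation over Z_p* with deliberately small, insecure toy parameters (the paper's compression step, which packs k plaintexts into 2 values before encryption, is only indicated by a comment):

        import random

        # Textbook ElGamal over Z_p* -- toy parameters for illustration only.
        p = 0xFFFFFFFB        # the prime 2**32 - 5
        g = 2                 # group element used as the base

        x = random.randrange(2, p - 1)   # private key
        h = pow(g, x, p)                 # public key

        def encrypt(m, h):
            k = random.randrange(2, p - 1)          # ephemeral key
            return pow(g, k, p), (m * pow(h, k, p)) % p

        def decrypt(c1, c2, x):
            s = pow(c1, x, p)                       # shared secret g^(k*x)
            return (c2 * pow(s, p - 2, p)) % p      # s^(-1) via Fermat's little theorem

        # The Compression-Crypto step would first reduce k > 2 plaintexts to
        # 2 values; here we simply encrypt 2 stand-in values.
        m1, m2 = 123456789, 987654321
        print(decrypt(*encrypt(m1, h), x) == m1,
              decrypt(*encrypt(m2, h), x) == m2)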

  4. Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M A; Bertram, M; Porumbescu, S

    2001-10-03

    Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that the surfaces or solids be viewable at interactive rates--yet many of them can contain hundreds of millions of polygons or polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and can be used to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.

  5. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert a tremendous amount of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow a minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB with a compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an ATMS system of nonlinear and nonstationary spectral analysis software integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.

  6. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1993-01-01

    The SSME has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) Develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system. (2) Develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow a minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities. (3) Integrate the nonlinear correlation techniques into the CSTDB with a compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an ATMS system of nonlinear and nonstationary spectral analysis software integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.

  7. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using the constrained SB method with a 3D TV constraint. This corresponds to reducing the scan time by half compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
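
    The speed of Split Bregman TV reconstruction comes from splitting the problem so that each iteration alternates between a simple quadratic solve and a closed-form shrinkage (soft-threshold) of the gradient variable. A toy 1D numpy sketch of this alternation for TV denoising (illustrative only: the parameters are arbitrary and a single gradient step stands in for the exact quadratic solve; the paper's reconstruction operates on undersampled 3D k-space data):

        import numpy as np

        def shrink(v, t):
            # Soft-thresholding: closed-form minimizer of t*|d| + 0.5*(d - v)^2.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def tv_denoise_1d(f, lam=10.0, mu=1.0, iters=200):
            # min_u  lam/2 * ||u - f||^2 + |grad u|_1  via Split Bregman.
            u = f.copy()
            d = np.zeros(f.size - 1)
            b = np.zeros(f.size - 1)
            for _ in range(iters):
                # Inexact solve of the quadratic subproblem in u: one
                # gradient step with a safe step size.
                w = np.diff(u) - d + b
                grad = lam * (u - f)
                grad[:-1] -= mu * w
                grad[1:] += mu * w
                u -= grad / (lam + 4.0 * mu)
                d = shrink(np.diff(u) + b, 1.0 / mu)   # shrinkage step
                b += np.diff(u) - d                    # Bregman update
            return u

        rng = np.random.default_rng(0)
        clean = np.repeat([0.0, 1.0, 0.3], 50)
        noisy = clean + 0.1 * rng.standard_normal(clean.size)
        print("mean abs error:",
              round(float(np.abs(tv_denoise_1d(noisy) - clean).mean()), 4))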

  8. Construction of an ultra low temperature cryostat and transverse acoustic spectroscopy in superfluid helium-3 in compressed aerogels

    NASA Astrophysics Data System (ADS)

    Bhupathi, Pradeep

    An ultra low temperature cryostat was designed and implemented in this work to perform experiments at sub-millikelvin temperatures, specifically aimed at understanding the superfluid phases of 3He in various scenarios. The cryostat is a combination of a dilution refrigerator (Oxford Kelvinox 400) with a base temperature of 5.2 mK and a 48 mole copper block as the adiabatic nuclear demagnetization stage, with a lowest temperature of ≈ 200 μK. With the various techniques implemented for limiting the ambient heat leak to the cryostat, we were able to stay below 1 mK for longer than 5 weeks. The details of the design, construction and performance of the cryostat are presented. We measured high frequency shear acoustic impedance in superfluid 3He in 98% porosity aerogel at pressures of 29 bar and 32 bar in magnetic fields up to 3 kG, with the aerogel cylinder compressed along the symmetry axis to generate global anisotropy. With 5% compression, there is an indication of a supercooled A-like to B-like transition in aerogel over a wider temperature width than the A phase in the bulk, while at 10% axial compression, the A-like to B-like transition is absent on cooling down to ≈ 300 μK in zero magnetic field and in magnetic fields up to 3 kG. This behavior is in contrast to that of 3He in uncompressed aerogels, in which the supercooled A-like to B-like transitions have been identified by various experimental techniques. Our result is consistent with theoretical predictions. To characterize the anisotropy in compressed aerogels, optical birefringence was measured in 98% porosity silica aerogel samples subjected to various degrees of uniaxial compression up to 15% strain, at wavelengths between 200 and 800 nm. Uncompressed aerogels exhibit no or minimal birefringence, indicating the isotropic nature of the material over the length scale of the wavelength. Uniaxial compression of aerogel introduces global anisotropy, which produces birefringence in the material. We observed a quasi-linear strain dependence in Δn = n_e - n_o in compressed aerogels, where n_e (n_o) is the index of refraction for the extraordinary (ordinary) ray of light, whose polarization is parallel to the compression axis. Incidentally, this effect has potential applications for aerogels as tunable waveplates operating in a broad spectral range.

  9. Porous ceramic scaffolds with complex architectures

    NASA Astrophysics Data System (ADS)

    Munch, E.; Franco, J.; Deville, S.; Hunger, P.; Saiz, E.; Tomsia, A. P.

    2008-06-01

    This work compares two novel techniques for the fabrication of ceramic scaffolds for bone tissue engineering with complex porosity: robocasting and freeze casting. Both techniques are based on the preparation of concentrated ceramic suspensions with suitable properties for the process. In robocasting, the computer-guided deposition of the suspensions is used to build porous materials with designed three dimensional geometries and microstructures. Freeze casting uses ice crystals as a template to form porous lamellar ceramic materials. Preliminary results on the compressive strengths of the materials are also reported.

  10. Pulse compression of harmonic chirp signals using the fractional fourier transform.

    PubMed

    Arif, M; Cowell, D M J; Freear, S

    2010-06-01

    In ultrasound harmonic imaging with chirp-coded excitation, a harmonic matched filter (HMF) is typically used on the received signal to perform pulse compression of the second harmonic component (SHC) to recover signal axial resolution. Designing the HMF for the compression of the SHC is a problematic issue because it requires optimal window selection. In the compressed second harmonic signal, the sidelobe level may increase and the mainlobe width (MLW) widen under a mismatched condition, resulting in loss of axial resolution. We propose the use of the fractional Fourier transform (FrFT) as an alternative tool to perform compression of the chirp-coded SHC generated as a result of the nonlinear propagation of an ultrasound signal. Two methods are used to experimentally assess the performance benefits of the FrFT technique over the HMF techniques. The first method uses chirp excitation with central frequency of 2.25 MHz and bandwidth of 1 MHz. The second method uses chirp excitation with pulse inversion to increase the bandwidth to 2 MHz. In this study, experiments were performed in a water tank with a single-element transducer mounted coaxially with a hydrophone in a pitch-catch configuration. Results are presented that indicate that the FrFT can perform pulse compression of the second harmonic chirp component, with a 14% reduction in the MLW of the compressed signal when compared with the HMF. Also, the FrFT provides at least 23% reduction in the MLW of the compressed signal when compared with the harmonic mismatched filter (HMMF). The FrFT maintains comparable peak and integrated sidelobe levels when compared with the HMF and HMMF techniques. Copyright 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  11. Safety and Efficacy of Defibrillator Charging During Ongoing Chest Compressions: A Multicenter Study

    PubMed Central

    Edelson, Dana P.; Robertson-Dick, Brian J.; Yuen, Trevor C.; Eilevstjønn, Joar; Walsh, Deborah; Bareis, Charles J.; Vanden Hoek, Terry L.; Abella, Benjamin S.

    2013-01-01

    BACKGROUND Pauses in chest compressions during cardiopulmonary resuscitation have been shown to correlate with poor outcomes. In an attempt to minimize these pauses, the American Heart Association recommends charging the defibrillator during chest compressions. While simulation work suggests decreased pause times using this technique, little is known about its use in clinical practice. METHODS We conducted a multicenter, retrospective study of defibrillator charging at three US academic teaching hospitals between April 2006 and April 2009. Data were abstracted from CPR-sensing defibrillator transcripts. Pre-shock pauses and total hands-off time preceding the defibrillation attempts were compared among techniques. RESULTS A total of 680 charge-cycles from 244 cardiac arrests were analyzed. The defibrillator was charged during ongoing chest compressions in 448 (65.9%) instances with wide variability across the three sites. Charging during compressions correlated with a decrease in median pre-shock pause [2.6 (IQR 1.9–3.8) vs 13.3 (IQR 8.6–19.5) s; p < 0.001] and total hands-off time in the 30 s preceding defibrillation [10.3 (IQR 6.4–13.8) vs 14.8 (IQR 11.0–19.6) s; p < 0.001]. The improvement in hands-off time was most pronounced when rescuers charged the defibrillator in anticipation of the pause, prior to any rhythm analysis. There was no difference in inappropriate shocks when charging during chest compressions (20.0 vs 20.1%; p=0.97) and there was only one instance noted of inadvertent shock administration during compressions, which went unnoticed by the compressor. CONCLUSIONS Charging during compressions is underutilized in clinical practice. The technique is associated with decreased hands-off time preceding defibrillation, with minimal risk to patients or rescuers. PMID:20807672

  12. Application of Compressive Sensing to Gravitational Microlensing Experiments

    NASA Technical Reports Server (NTRS)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. It is an enabling technique for significant reductions in data bandwidth and transmission power and hence can greatly benefit spaceflight instruments. We apply this process to the detection of exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness of, and uncertainty caused by, Compressive Sensing. Finally, we describe implications for spaceflight missions.
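
    In the CS framework, the instrument records m << n incoherent linear measurements y = Φx, and the sparse signal x is recovered by l1-regularized regression. A generic numpy sketch using iterative soft-thresholding (ISTA) on a synthetic sparse signal (this illustrates CS recovery in general, not the authors' instrument pipeline or their microlensing parameterization):

        import numpy as np

        def ista(Phi, y, lam=0.05, iters=300):
            # Iterative soft-thresholding for min 0.5*||Phi x - y||^2 + lam*||x||_1.
            L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(iters):
                x = x - Phi.T @ (Phi @ x - y) / L
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
            return x

        rng = np.random.default_rng(0)
        n, m = 256, 64                           # 4x undersampling
        x_true = np.zeros(n)
        x_true[[30, 100, 200]] = [1.0, -0.7, 0.5]        # sparse signal
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
        y = Phi @ x_true                                 # compressive measurements
        x_hat = ista(Phi, y)
        # Should list the three spike locations (30, 100, 200).
        print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])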

  13. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

    Background: Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective: To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods: Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results: The original frames passed through different compression stages: selecting the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions: Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of their efficient data compression, which maintains its attractiveness as a main criterion for distant diagnosis. PMID:11720963

  14. Compressed air production with waste heat utilization in industry

    NASA Astrophysics Data System (ADS)

    Nolting, E.

    1984-06-01

    The centralized power-heat coupling (PHC) technique using block heating power stations is presented. Compressed air production with the PHC technique and an internal combustion engine drive achieves a high degree of primary energy utilization; cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially interesting. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (80%) leads to a payoff after two years for operation times of 3000 hr.

  15. High speed and high resolution interrogation of a fiber Bragg grating sensor based on microwave photonic filtering and chirped microwave pulse compression.

    PubMed

    Xu, Ou; Zhang, Jiejun; Yao, Jianping

    2016-11-01

    High speed and high resolution interrogation of a fiber Bragg grating (FBG) sensor based on microwave photonic filtering and chirped microwave pulse compression is proposed and experimentally demonstrated. In the proposed sensor, a broadband linearly chirped microwave waveform (LCMW) is applied to a single-passband microwave photonic filter (MPF), which is implemented based on phase modulation and phase-modulation-to-intensity-modulation conversion using a phase modulator (PM) and a phase-shifted FBG (PS-FBG). Since the center frequency of the MPF is a function of the central wavelength of the PS-FBG, when the PS-FBG experiences a strain or temperature change, the wavelength is shifted, which leads to a change in the center frequency of the MPF. At the output of the MPF, a filtered chirped waveform with a center frequency corresponding to the applied strain or temperature is obtained. By compressing the filtered LCMW in a digital signal processor, the resolution is improved. The proposed interrogation technique is experimentally demonstrated. The experimental results show that an interrogation sensitivity and resolution as high as 1.25 ns/με and 0.8 με are achieved.
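
    Pulse compression is what converts a long chirped waveform into a sharp, precisely locatable peak: correlating the received LCMW against the transmitted reference concentrates its energy at a lag equal to the delay. A small numpy sketch of this mechanism (generic matched filtering with made-up parameters, not the demonstrated interrogator, in which the compressed peak encodes the MPF center frequency):

        import numpy as np

        fs, T = 1e8, 20e-6                  # 100 MS/s, 20 us waveform
        t = np.arange(int(fs * T)) / fs
        f0, f1 = 1e6, 40e6                  # 1-40 MHz linear chirp
        chirp = np.cos(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

        delay = 300                         # unknown delay in samples (3 us)
        rx = np.roll(chirp, delay)          # toy received waveform

        # Matched filtering compresses the chirp into a narrow peak whose
        # position gives the delay with single-sample resolution.
        compressed = np.correlate(rx, chirp, mode='full')
        peak = int(np.argmax(np.abs(compressed))) - (len(chirp) - 1)
        print("recovered delay:", peak, "samples")   # should print 300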

  16. Adaptive coding of MSS imagery. [Multi Spectral band Scanners

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.

    1977-01-01

    A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.
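
    Adaptive DPCM illustrates the common thread in these techniques: predict each sample from its neighborhood, switch predictors as local statistics change, and entropy-code the low-entropy residual. A toy numpy sketch that picks, per block, the better of two simple predictors (a schematic illustration, not the simulated Landsat coders):

        import numpy as np

        def entropy(x):
            _, c = np.unique(x, return_counts=True)
            p = c / c.sum()
            return float(-(p * np.log2(p)).sum())

        def adaptive_dpcm(img, block=16):
            # Per-block DPCM that adaptively picks the better of two
            # predictors: left neighbor vs. top neighbor.
            residual = np.empty_like(img)
            for r in range(0, img.shape[0], block):
                for c in range(0, img.shape[1], block):
                    blk = img[r:r+block, c:c+block]
                    left = blk - np.roll(blk, 1, axis=1)   # horizontal predictor
                    top = blk - np.roll(blk, 1, axis=0)    # vertical predictor
                    residual[r:r+block, c:c+block] = (
                        left if entropy(left) < entropy(top) else top)
            return residual

        rng = np.random.default_rng(0)
        # Rows are smooth random walks, so residuals are far more compressible.
        img = np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1).astype(np.int32)
        print(f"raw: {entropy(img):.2f} bits/px, "
              f"dpcm: {entropy(adaptive_dpcm(img)):.2f} bits/px")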

  17. Advances in clinical studies of cardiopulmonary resuscitation

    PubMed Central

    Chen, Shou-quan

    2015-01-01

    BACKGROUND: The survival rate of patients after cardiac arrest (CA) has remained low since the 2010 International Consensus on Cardiopulmonary Resuscitation (CPR) and Emergency Cardiovascular Care (ECC) was published. The methods and techniques for CPR have been described extensively in clinical trials. This article gives an overview of the progress in methods and techniques for CPR in the past years. DATA SOURCES: Original articles about cardiac arrest and CPR were retrieved from MEDLINE (PubMed) and relevant journals; most of them were clinical randomized controlled trials (RCTs). RESULTS: Forty-two articles on methods and techniques of CPR were reviewed, covering chest compression and conventional CPR, chest compression depth and speed, defibrillation strategies and priority, mechanical and manual chest compression, advanced airway management, the impedance threshold device (ITD) and active compression-decompression (ACD) CPR, epinephrine use, and therapeutic hypothermia. The results of studies and related issues described in the international guidelines have been evaluated. CONCLUSIONS: Although large multicenter studies on CPR are still difficult to carry out, progress has been made in the past 4 years in the methods and techniques of CPR. The results of this review provide evidence for updating the 2015 international guidelines. PMID:26056537

  18. The preparation of liposomes using compressed carbon dioxide: strategies, important considerations and comparison with conventional techniques.

    PubMed

    Bridson, R H; Santos, R C D; Al-Duri, B; McAllister, S M; Robertson, J; Alpar, H O

    2006-06-01

    Numerous strategies are currently available for preparing liposomes, although no single method is ideal in every respect. Two methods for producing liposomes using compressed carbon dioxide in either its liquid or supercritical state were therefore investigated as possible alternatives to the conventional techniques currently used. The first technique used modified compressed carbon dioxide as a solvent system. The way in which changes in pressure, temperature, apparatus geometry and solvent flow rate affected the size distributions of the formulations was examined. In general, liposomes in the nano-size range with an average diameter of 200 nm could be produced, although some micron-sized vesicles were also present. Liposomes were characterized according to their hydrophobic drug-loading capacity and encapsulated aqueous volumes. The latter were found to be higher than in conventional techniques such as high-pressure homogenization. The second method used compressed carbon dioxide as an anti-solvent to promote uniform precipitation of phospholipids from concentrated ethanolic solutions. Finely divided solvent-free phospholipid powders of saturated lipids could be prepared that were subsequently hydrated to produce liposomes with mean volume diameters of around 5 microm.

  19. Compression and information recovery in ptychography

    NASA Astrophysics Data System (ADS)

    Loetgering, L.; Treffer, D.; Wilhein, T.

    2018-04-01

    Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.

  20. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

  1. Beam tuning and bunch length measurement in the bunch compression operation at the cERL

    NASA Astrophysics Data System (ADS)

    Honda, Y.; Shimada, M.; Miyajima, T.; Hotei, T.; Nakamura, N.; Kato, R.; Obina, T.; Takai, R.; Harada, K.; Ueda, A.

    2017-12-01

    Realization of a short bunch beam by manipulating the longitudinal phase space distribution with a finite longitudinal dispersion following an off-crest acceleration is a widely used technique. The technique was applied in a compact test accelerator of the energy-recovery linac scheme to compress the bunch length at the return loop. A diagnostic system utilizing coherent transition radiation was developed for beam tuning and for estimating the bunch length. By scanning the beam parameters, we experimentally found the best condition for bunch compression. An RMS bunch length of 250 ± 50 fs was obtained at a bunch charge of 2 pC. This result confirmed the design and the tuning procedure of the bunch compression operation for the future energy-recovery linac (ERL).

  2. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.

  3. Evaluating the effectiveness of SW-only video coding for real-time video transmission over low-rate wireless networks

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro

    2001-04-01

    The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H263-coded video sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standardization Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the SW-only implementation of the decoder and by the low channel rate, a video compression technique such as one complying with the H263 standard is still preferable to a simpler but less effective frame-based compression system.

  4. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and the images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing gadgets quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the Internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  5. A new technique for the diagnosis of acute appendicitis: abdominal CT with compression to the right lower quadrant.

    PubMed

    Kılınçer, Abidin; Akpınar, Erhan; Erbil, Bülent; Ünal, Emre; Karaosmanoğlu, Ali Devrim; Kaynaroğlu, Volkan; Akata, Deniz; Özmen, Mustafa

    2017-08-01

    To determine the diagnostic accuracy of abdominal CT with compression to the right lower quadrant (RLQ) in adults with acute appendicitis, 168 patients (age range, 18-78 years) who underwent contrast-enhanced CT for suspected appendicitis were included, imaged either with compression to the RLQ (n = 71) or with a standard protocol (n = 97). The outer diameter of the appendix, appendiceal wall thickening, luminal content and associated findings were evaluated in each patient. Kruskal-Wallis, Fisher's and Pearson's chi-squared tests were used for statistical analysis. There was no significant difference in the mean outer diameter (MOD) between compression CT scans (10.6 ± 1.9 mm) and the standard protocol (11.2 ± 2.3 mm) in patients with acute appendicitis (P = 1). MOD was significantly lower in the compression group (5.2 ± 0.8 mm) than in the standard protocol group (6.5 ± 1.1 mm) (P < 0.01) in patients without appendicitis. A cut-off value of 6.75 mm for the outer diameter of the appendix was found to be 100% sensitive in the diagnosis of acute appendicitis for both groups. The specificity was higher for the compression CT technique (67.7 vs. 94.9%). The normal appendix diameter was significantly smaller in the compression-CT group than in the standard-CT group, increasing the diagnostic accuracy of abdominal compression CT. • Normal appendix diameter is significantly smaller in compression CT. • Compression could force contrast material to flow through the appendiceal lumen. • Compression CT may be a CT counterpart of graded compression US.

  6. Mechanics Model for Simulating RC Hinges under Reversed Cyclic Loading

    PubMed Central

    Shukri, Ahmad Azim; Visintin, Phillip; Oehlers, Deric J.; Jumaat, Mohd Zamin

    2016-01-01

    Describing the moment rotation (M/θ) behavior of reinforced concrete (RC) hinges is essential in predicting the behavior of RC structures under severe loadings, such as cyclic earthquake motions and blast loading. The behavior of RC hinges is defined by localized slip or partial interaction (PI) behaviors in both the tension and compression regions. In the tension region, slip between the reinforcement and the concrete defines crack spacing, crack opening and closing, and tension stiffening. In the compression region, slip along concrete-to-concrete interfaces defines the formation and failure of concrete softening wedges. Being strain-based, commonly applied analysis techniques, such as the moment curvature approach, cannot directly simulate these PI behaviors because they are localized and displacement based. Therefore, strain-based approaches must resort to empirical factors to define behaviors such as tension stiffening and concrete softening hinge lengths. In this paper, a displacement-based segmental moment rotation approach, which directly simulates the partial interaction behaviors in both compression and tension, is developed for predicting the M/θ response of an RC beam hinge under cyclic loading. Significantly, in order to develop the segmental approach, a partial interaction model to predict the tension stiffening load-slip relationship between the reinforcement and the concrete is developed. PMID:28773430

  7. Mechanics Model for Simulating RC Hinges under Reversed Cyclic Loading.

    PubMed

    Shukri, Ahmad Azim; Visintin, Phillip; Oehlers, Deric J; Jumaat, Mohd Zamin

    2016-04-22

    Describing the moment rotation (M/θ) behavior of reinforced concrete (RC) hinges is essential in predicting the behavior of RC structures under severe loadings, such as cyclic earthquake motions and blast loading. The behavior of RC hinges is defined by localized slip or partial interaction (PI) behaviors in both the tension and compression regions. In the tension region, slip between the reinforcement and the concrete defines crack spacing, crack opening and closing, and tension stiffening. In the compression region, slip along concrete-to-concrete interfaces defines the formation and failure of concrete softening wedges. Being strain-based, commonly applied analysis techniques, such as the moment curvature approach, cannot directly simulate these PI behaviors because they are localized and displacement based. Therefore, strain-based approaches must resort to empirical factors to define behaviors such as tension stiffening and concrete softening hinge lengths. In this paper, a displacement-based segmental moment rotation approach, which directly simulates the partial interaction behaviors in both compression and tension, is developed for predicting the M/θ response of an RC beam hinge under cyclic loading. Significantly, in order to develop the segmental approach, a partial interaction model to predict the tension stiffening load-slip relationship between the reinforcement and the concrete is developed.

  8. Research on key technologies for data-interoperability-based metadata, data compression and encryption, and their application

    NASA Astrophysics Data System (ADS)

    Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun

    2006-10-01

    With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives for spatial information infrastructure construction, and spatial metadata management systems, data transmission security, and data compression are the key technologies for realizing spatial data sharing. This paper discusses the key technologies for metadata based on data interoperability; investigates data compression algorithms such as the adaptive Huffman algorithm and the LZ77 and LZ78 algorithms; studies the application of digital signature techniques to spatial data encryption, which can not only identify the transmitter of spatial data but also detect in a timely manner whether the spatial data have been tampered with during network transmission; and, based on an analysis of symmetric encryption algorithms including 3DES and AES and the asymmetric encryption algorithm RSA, combined with a hash algorithm, presents an improved mixed encryption method for spatial data. Digital signature technology and digital watermarking technology are also discussed. Then, a new solution for spatial data network distribution is put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system which is efficient and safe, and we demonstrate the feasibility and validity of the proposed solution.

  9. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
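
    Vector quantization, one of the two ingredients above, replaces each block of pixels with the index of its nearest codeword in a trained codebook, so only the index stream needs to be transmitted. A minimal numpy sketch using plain k-means (Lloyd) training on 4x4 blocks (generic VQ for illustration; the MPP implementation and its learning strategies are not reproduced):

        import numpy as np

        def train_codebook(vectors, k, iters=20, seed=0):
            # Plain k-means (Lloyd) codebook training.
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
            for _ in range(iters):
                d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                nearest = d.argmin(1)
                for j in range(k):
                    if (nearest == j).any():
                        codebook[j] = vectors[nearest == j].mean(0)
            return codebook

        rng = np.random.default_rng(1)
        img = rng.random((64, 64))
        # Split the image into 4x4 blocks -> 16-dimensional training vectors.
        blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
        cb = train_codebook(blocks, k=32)

        # Lossy coding: each 16-sample block becomes a single 5-bit index.
        idx = ((blocks[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
        recon = cb[idx].reshape(16, 16, 4, 4).swapaxes(1, 2).reshape(64, 64)
        print("MSE:", round(float(((img - recon) ** 2).mean()), 5))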

  10. Application of Compressive Sensing to Gravitational Microlensing Experiments

    NASA Astrophysics Data System (ADS)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2017-06-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. It is an enabling technique for significant reductions in data bandwidth and transmission power and hence can greatly benefit space-flight instruments. We apply this process to the detection of exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness of, and uncertainty caused by, Compressive Sensing. Finally, we describe implications for space-flight missions.

  11. Volume and tissue composition preserving deformation of breast CT images to simulate breast compression in mammographic imaging

    NASA Astrophysics Data System (ADS)

    Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2009-02-01

    Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast into those for a compressed breast without altering the breast volume or regional breast density. With this technique, the 3D breast deformation is separated into two 2D deformations, in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and then resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view: the image data were first deformed by distorting the voxels with a uniform distortion ratio, and then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest wall to nipple direction while shrinking it in the medial-to-lateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.

  12. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
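
    The factored representation at the heart of this approach writes the pixels-by-channels data matrix as the product of a small score matrix and a small loading matrix, so downstream analysis touches only the most significant factors. A generic numpy sketch using a truncated SVD (standard PCA-style compression; the patent's block algorithm and spatial compression stage are not shown):

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic multivariate image: 64x64 pixels x 100 spectral channels,
        # generated from 3 underlying components plus noise.
        npix, nchan, ncomp = 64 * 64, 100, 3
        data = rng.random((npix, ncomp)) @ rng.random((ncomp, nchan))
        data += 0.01 * rng.standard_normal((npix, nchan))

        # Factored (compressed) representation: keep the top-k singular factors.
        mean = data.mean(0)
        U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
        k = 3
        scores, loadings = U[:, :k] * s[:k], Vt[:k]   # (npix, k) and (k, nchan)

        approx = scores @ loadings + mean
        ratio = data.size / (scores.size + loadings.size)
        err = np.linalg.norm(data - approx) / np.linalg.norm(data)
        print(f"compression ~{ratio:.0f}x, relative reconstruction error {err:.4f}")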

  13. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
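
    Of the building blocks named above, the Arnold transform is simple enough to sketch. The snippet below scrambles and exactly unscrambles a square watermark with the Arnold cat map; the DWT embedding and extraction steps are omitted, and the watermark size and iteration count are arbitrary assumptions.

```python
# Watermark encryption/decryption with the Arnold cat map (sketch only).
import numpy as np

def arnold(img, iterations=1):
    """Scramble a square image: (x, y) -> (x + y, x + 2y) mod n."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    """Invert the map: for (u, v) = (x + y, x + 2y), x = (2u - v) mod n, y = (v - u) mod n."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        prev = np.empty_like(out)
        prev[(2 * u - v) % n, (v - u) % n] = out[u, v]
        out = prev
    return out

watermark = np.random.randint(0, 256, (32, 32))
assert np.array_equal(arnold_inverse(arnold(watermark, 7), 7), watermark)
```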

  14. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

    Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
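
    The following is a hedged sketch of the two ingredients named above, predictive compression plus adaptive Golomb (Rice) coding of the residuals; it is not the FL algorithm itself or its FPGA pipeline. The previous-sample predictor, the zigzag residual mapping, and the parameter-adaptation rule are simplifying assumptions.

```python
# Sketch of predictive compression with adaptive Rice (Golomb power-of-two) coding.
def rice_encode(value, k):
    """Unary quotient + k-bit binary remainder for one non-negative value."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def compress(samples):
    bits, prev, mean = [], 0, 1
    for x in samples:
        residual = x - prev                      # predictive step
        u = (residual << 1) ^ (residual >> 31)   # zigzag map to non-negative
        k = max(0, mean.bit_length() - 1)        # adapt the Rice parameter
        bits.append(rice_encode(u, k))
        mean = (3 * mean + u) // 4               # running estimate of |residual|
        prev = x
    return "".join(bits)

data = [100, 102, 101, 105, 104, 104, 110, 108]
code = compress(data)
print(len(code), "bits vs", 8 * len(data), "bits raw (8-bit samples)")
```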

  15. Experimental Results From Stitched Composite Multi-Bay Fuselage Panels Tested Under Uni-Axial Compression

    NASA Technical Reports Server (NTRS)

    Baker, Donald J.

    2004-01-01

    The experimental results from two stitched VARTM composite panels tested under uni-axial compression loading are presented. The curved panels are divided by frames and stringers into five or six bays, with a column of three bays along the compressive loading direction. The frames are supported at the ends to resist out-of-plane translation. Back-to-back strain gages were used to record the strains, and displacement transducers were used to record the out-of-plane displacements. In addition, a full-field measurement technique that utilizes a camera-based stereo-vision system was used to record displacements. The panels were loaded in increments to determine the first bay to buckle. Loading was discontinued at limit load and the panels were removed from the test machine for impact testing. After impacting at 20 ft-lbs to 25 ft-lbs of energy with a spherical indenter, the panels were loaded in compression until failure. Impact testing reduced the axial stiffness by 4 percent in one panel and by less than 1 percent in the other. The postbuckled axial panel stiffness was 52 percent and 70 percent of the pre-buckled stiffness.

  16. Utilization of Forward Error Correction (FEC) Techniques With Extensible Markup Language (XML) Schema-Based Binary Compression (XSBC) Technology

    DTIC Science & Technology

    2004-12-01

  17. Percutaneous Treatment of Iatrogenic Pseudoaneurysms by Cyanoacrylate-Based Wall-Gluing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Corso, Andrea, E-mail: adelcorso2000@hotmail.com; Vergaro, Giuseppe

    Purpose. Although the majority of iatrogenic pseudoaneurysms (PSAs) are amenable to ultrasound (US)-guided thrombin injection, patients with those causing neuropathy, claudication, significant venous compression, or soft tissue necrosis are considered poor candidates for this option and referred to surgery. We aimed to test the effectiveness and feasibility of a novel percutaneous cyanoacrylate glue (NBCA-MS)-based technique for treatment of symptomatic and asymptomatic iatrogenic PSA. Material and Methods. During a 3-year period, we prospectively enrolled 91 patients with iatrogenic PSA [total n = 94 (femoral n = 76; brachial n = 11; radial n = 6; axillary n = 1)]. PSA were asymptomatic in 66 % of cases, and 34 % presented with symptoms due to neuropathy, venous compression, and/or soft tissue necrosis. All patients signed informed consent. All patients received NBCA-MS-based percutaneous treatment. PSA chamber emptying was first obtained by US-guided compression; superior and inferior walls of the PSA chamber were then stuck together using NBCA-MS microinjections. Success of the procedure was assessed immediately and at 1-day and 1-, 3-, and 12-month US follow-up. Results. PSA occlusion rate was 99 % (93 of 94 cases). After treatment, mean PSA antero-posterior diameter decrease was 67 ± 22 %. Neuropathy and vein compression immediately disappeared in 91 % (29 of 32) of cases. Patients with tissue necrosis (n = 6) underwent subsequent outpatient necrosectomy. No distal embolization occurred, nor was conversion to surgery necessary. Conclusion. PSA treatment by way of NBCA-MS glue injection proved to be safe and effective in asymptomatic patients as well as those with neuropathy, venous compression, or soft-tissue necrosis (currently candidates for surgery). Larger series are needed to confirm these findings.

  18. Ultrasonic data compression via parameter estimation.

    PubMed

    Cardoso, Guilherme; Saniie, Jafar

    2005-02-01

    Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. The precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1-5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.
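
    The quoted compression ratio of (1 - 5N/J) follows directly from representing J samples by five parameters per echo (e.g., amplitude, time-of-arrival, center frequency, bandwidth, and phase). The sketch below reconstructs a signal from such a parameter list under an assumed Gaussian-echo model; the MCWT estimator itself is not reproduced, and all numbers are illustrative.

```python
# Illustrative parameter-based signal representation: 5 parameters per echo.
import numpy as np

def echo(t, amp, tau, fc, bw, phi):
    """Gaussian-envelope echo model (an assumption, not the paper's exact model)."""
    return amp * np.exp(-bw * (t - tau) ** 2) * np.cos(2 * np.pi * fc * (t - tau) + phi)

J, fs = 2048, 100e6                       # samples, sampling rate
t = np.arange(J) / fs
params = [(1.0, 5e-6, 5e6, 1e11, 0.0),    # N = 2 echoes, 5 parameters each
          (0.6, 12e-6, 5e6, 1e11, 1.0)]
signal = sum(echo(t, *p) for p in params)

N = len(params)
print(f"compression ratio = 1 - 5N/J = {1 - 5 * N / J:.3%}")
```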

  19. A crossover trial comparing wide dynamic range compression and frequency compression in hearing aids for tinnitus therapy.

    PubMed

    Hodgson, Shirley-Anne; Herdering, Regina; Singh Shekhawat, Giriraj; Searchfield, Grant D

    2017-01-01

    It has been suggested that frequency lowering may be superior to conventional amplification as a tinnitus-reducing digital signal processing (DSP) strategy in hearing aids. A crossover trial was undertaken to determine if frequency compression (FC) was superior to wide dynamic range compression (WDRC) in reducing tinnitus. A 6-8-week crossover trial of two digital signal-processing techniques (WDRC and WDRC with FC) was undertaken in 16 persons with high-frequency sensorineural hearing loss and chronic tinnitus. WDRC resulted in larger improvements in Tinnitus Functional Index and rating scale scores than WDRC with FC. The tinnitus improvements obtained with both processing types appear to be due to reduced hearing handicap and possibly decreased tinnitus audibility. Hearing aids are useful assistive devices in the rehabilitation of tinnitus. FC was very successful in a few individuals but was not superior to WDRC across the sample. It is recommended that WDRC remain the default first-choice hearing aid processing strategy for tinnitus. FC should be considered as one of the many other options for selection based on individual hearing needs. Implications for Rehabilitation: Hearing aids can significantly reduce the effects of tinnitus after 6-8 weeks of use. Addition of frequency compression digital signal processing does not appear superior to standard amplitude compression alone. Improvements in tinnitus were correlated with reductions in hearing handicap.

  20. Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.

    PubMed

    Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume

    2015-09-14

    Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method for compressing data issued from high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq or metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is an open source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
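
    A toy sketch of the encoding idea described above follows: read k-mers are inserted into a Bloom filter, and each read is stored as an anchor k-mer plus the bases chosen wherever the graph traversal is ambiguous. This is a drastically simplified illustration of the LEON approach, not its implementation; the hash construction, filter size, and tiny reads are arbitrary assumptions.

```python
# Toy Bloom-filter de Bruijn encoding: anchor k-mer + bifurcation bases.
import hashlib

class Bloom:
    """A tiny Bloom filter; the size and md5-based hashes are arbitrary."""
    def __init__(self, m=1 << 16, h=3):
        self.bits, self.m, self.h = bytearray(m // 8), m, h

    def _indices(self, s):
        for i in range(self.h):
            d = hashlib.md5(f"{i}{s}".encode()).digest()
            yield int.from_bytes(d[:4], "big") % self.m

    def add(self, s):
        for j in self._indices(s):
            self.bits[j // 8] |= 1 << (j % 8)

    def __contains__(self, s):
        return all(self.bits[j // 8] >> (j % 8) & 1 for j in self._indices(s))

K = 7
reads = ["ACGTACGTGGT", "CGTACGTGGTA"]
bloom = Bloom()
for r in reads:
    for i in range(len(r) - K + 1):
        bloom.add(r[i:i + K])

def encode(read):
    """Anchor k-mer + read length + bases at ambiguous graph nodes."""
    anchor, bifurcations, kmer = read[:K], [], read[:K]
    for base in read[K:]:
        nexts = [b for b in "ACGT" if kmer[1:] + b in bloom]
        if len(nexts) != 1:              # ambiguous node: store the true base
            bifurcations.append(base)
        kmer = kmer[1:] + base
    return anchor, len(read), bifurcations

def decode(anchor, length, bifurcations):
    read, kmer, queued = anchor, anchor, iter(bifurcations)
    while len(read) < length:
        nexts = [b for b in "ACGT" if kmer[1:] + b in bloom]
        base = nexts[0] if len(nexts) == 1 else next(queued)
        read, kmer = read + base, kmer[1:] + base
    return read

for r in reads:
    assert decode(*encode(r)) == r       # lossless round trip
```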

  1. Measurement of effective bulk and contact resistance of gas diffusion layer under inhomogeneous compression - Part II: Thermal conductivity

    NASA Astrophysics Data System (ADS)

    Roy Chowdhury, Prabudhya; Vikram, Ajit; Phillips, Ryan K.; Hoorfar, Mina

    2016-07-01

    The gas diffusion layer (GDL) is a thin porous layer sandwiched between a bipolar plate (BPP) and a catalyst-coated membrane in a fuel cell. Besides providing passage for water and gas transport from and to the catalyst layer, it is responsible for electron and heat transfer from and to the BPP. In this paper, a method has been developed to measure the GDL bulk thermal conductivity and the contact resistance at the GDL/BPP interface under the inhomogeneous compression occurring in an actual fuel cell assembly. Toray carbon paper GDL TGP-H-060 was tested under a range of compression pressures from 0.34 to 1.71 MPa. The results showed that the thermal contact resistance decreases non-linearly (from 3.8 × 10-4 to 1.17 × 10-4 Km2 W-1) with increasing pressure due to the increase in microscopic contact area between the GDL and BPP, while the effective bulk thermal conductivity increases (from 0.56 to 1.42 Wm-1 K-1) with increasing compression pressure. The thermal contact resistance was found to be greater (by a factor of 1.6-2.8) than the effective bulk thermal resistance over the whole compression pressure range applied here. This measurement technique can be used to identify the optimum GDL based on minimum bulk and contact resistances measured under inhomogeneous compression.
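
    Measurements of this kind commonly rest on a series-resistance decomposition: the total resistance of n stacked layers is n·t/k_bulk plus two interface contact resistances, so a linear fit over the layer count separates the two contributions. The sketch below shows that generic decomposition with made-up numbers; it is not the paper's inhomogeneous-compression method or its data.

```python
# Generic series-resistance decomposition (illustrative values only).
import numpy as np

t = 190e-6                                   # single-layer thickness [m]
layers = np.array([1, 2, 3, 4])
R_total = np.array([5.8e-4, 7.2e-4, 8.5e-4, 9.9e-4])  # hypothetical [K m^2/W]

slope, intercept = np.polyfit(layers, R_total, 1)
k_bulk = t / slope                           # effective bulk conductivity
R_contact = intercept / 2                    # one GDL/plate interface
print(f"k_bulk ~ {k_bulk:.2f} W/(m K), R_contact ~ {R_contact:.2e} K m^2/W")
```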

  2. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
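
    As a minimal illustration of the 3D wavelet idea (not ICER-3D itself, which adds context modeling, entropy coding, and error containment), the sketch below decomposes a toy hyperspectral cube with a multilevel 3D wavelet transform, discards the smallest coefficients, and reconstructs. It assumes the PyWavelets package; the wavelet, level, and keep-fraction are arbitrary choices.

```python
# 3D wavelet decomposition, coefficient thresholding, and reconstruction.
import numpy as np
import pywt

z, y, x = np.mgrid[0:32, 0:32, 0:16]
cube = np.sin(x / 3.0) + np.cos(y / 5.0) * np.sin(z / 4.0)  # smooth, correlated data

coeffs = pywt.wavedecn(cube, "db2", level=2)     # 3D multilevel decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

thresh = np.quantile(np.abs(arr), 0.90)          # keep ~10% of the coefficients
arr[np.abs(arr) < thresh] = 0.0

approx = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), "db2")
print("max reconstruction error:", np.abs(approx[:32, :32, :16] - cube).max())
```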

  3. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
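
    For context, the baseline that any DNA compressor must beat is trivial 2-bits-per-base packing, sketched below; DNABIT Compress gets below 2 bits/base by assigning shorter unique bit codes to repeated and reverse-repeated fragments, which this sketch does not attempt.

```python
# Baseline 2-bit DNA packing (A, C, G, T -> 00, 01, 10, 11).
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    buf, out = 0, bytearray()
    for i, base in enumerate(seq):
        buf = (buf << 2) | CODE[base]
        if i % 4 == 3:                     # four bases fill one byte
            out.append(buf)
            buf = 0
    if len(seq) % 4:                       # flush a final partial byte
        out.append(buf << 2 * (4 - len(seq) % 4))
    return bytes(out)

seq = "ACGTACGTTTGA"
packed = pack(seq)
print(f"{len(seq)} bases -> {len(packed)} bytes "
      f"({8 * len(packed) / len(seq):.2f} bits/base)")
```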

  4. Noncontact Measurement Of Critical Current In Superconductor

    NASA Technical Reports Server (NTRS)

    Israelsson, Ulf E.; Strayer, Donald M.

    1992-01-01

    Critical current measured indirectly via flux-compression technique. Magnetic flux compressed into gap between superconductive hollow cylinder and superconductive rod when rod inserted in hole in cylinder. Hall-effect probe measures flux density before and after compression. Method does not involve any electrical contact with superconductor. Therefore, does not cause resistive heating and consequent premature loss of superconductivity.

  5. College curriculum-sharing via CTS. [Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Hudson, H. E.; Guild, P. D.; Coll, D. C.; Lumb, D. R.

    1975-01-01

    Domestic communication satellites and video compression techniques will increase communication channel capacity and reduce cost of video transmission. NASA Ames Research Center, Stanford University and Carleton University are participants in an experiment to develop, demonstrate, and evaluate college course sharing techniques via satellite using video compression. The universities will exchange televised seminar and lecture courses via CTS. The experiment features real-time video compression with channel coding and quadra-phase modulation for reducing transmission bandwidth and power requirements. Evaluation plans and preliminary results of Carleton surveys on student attitudes to televised teaching are presented. Policy implications for the U.S. and Canada are outlined.

  6. Chitin and Chitosan as Direct Compression Excipients in Pharmaceutical Applications

    PubMed Central

    Badwan, Adnan A.; Rashid, Iyad; Al Omari, Mahmoud M.H.; Darras, Fouad H.

    2015-01-01

    Despite the numerous uses of chitin and chitosan as new functional materials of high potential in various fields, they are still behind several directly compressible excipients already dominating pharmaceutical applications. There are, however, new attempts to exploit chitin and chitosan in co-processing techniques that provide a product with potential to act as a direct compression (DC) excipient. This review outlines the compression properties of chitin and chitosan in the context of DC pharmaceutical applications. PMID:25810109

  7. Monitoring fatigue damage in carbon fiber composites using an acoustic impact technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque, A.; Raju, P.K.

    1998-06-01

    The acoustic impact technique (AIT) of nondestructive testing (NDT) has been used to identify the damage that results from compressive and tension-compression cyclic loading around a circular notch in quasi-isotropic carbon-fiber composites. This method involves applying a low-velocity impact to the test specimen and evaluating the resulting localized acoustic response. Results indicate that AIT can be applied for identification of both compressive and fatigue damage in composite laminates. The gross area of compressive and fatigue damage is detected through an increase in the pulse width, and a decrease in the amplitude, of the force-time signal. The response obtained in AIT is sensitive to the frequency of the impactor and the amplitude of the impact force and requires careful monitoring of these values to achieve repeatability of results.

  8. Venous compressions of the nerves in the lower limbs.

    PubMed

    Artico, M; Stevanato, G; Ionta, B; Cesaroni, A; Bianchi, E; Morselli, C; Grippaudo, F R

    2012-06-01

    The lower limbs are frequently involved in neurovascular compression syndromes, owing to their anatomical, vascular and muscular characteristics and to the orthostatic position. These syndromes were identified by exclusion using neuroimaging techniques and treated by microsurgical techniques. Eight patients with a neurovascular compression syndrome due to venous vascular lesions in the lower limbs (popliteal fossa, proximal and medial third of the inferior limb, tarsal tunnel) were selected. The symptomatology was characterized by pain, Tinel's sign, hyperalgesia, allodynia, numbness along the nerve course and foot weakness, all exacerbated by the standing position, thus suggesting a neurovascular compression syndrome. Diagnostic tools comprised Doppler ultrasonography, electromyography, 3D CT and MRI. Treatment consisted of microsurgery with neurovascular dissection. Following surgical treatment, rapid pain relief and a partial recovery of neurological deficits (including the ability to walk) were observed within 8-10 months. An early diagnosis of neurovascular compression syndrome using various neuroimaging techniques and prompt treatment may improve the response to surgical therapy. The aim of the case studies described is to improve understanding of these pathologies, thus enabling correct clinical decisions.

  9. Structural kinematics based damage zone prediction in gradient structures using vibration database

    NASA Astrophysics Data System (ADS)

    Talha, Mohammad; Ashokkumar, Chimpalthradi R.

    2014-05-01

    To explore the applications of functionally graded materials (FGMs) in dynamic structures, structural-kinematics-based health monitoring becomes an important problem. Based on the displacements in three dimensions, the ability of the material to withstand dynamic loads is inferred in this paper from the net compressive and tensile displacements that each structural degree of freedom undergoes. These net displacements at each finite element node predict damage zones of the FGM where the material is likely to fail due to a vibration response, which is categorized according to loading condition. The damage zone prediction for a dynamically active FGM plate has been accomplished using Reddy's higher-order theory. The constituent material properties are assumed to vary in the thickness direction according to a power-law behavior. The proposed C0 finite element model (FEM) is applied to obtain the net tensile and compressive displacement distributions across the structure. A plate made of aluminum/zirconia is considered to illustrate the concept of structural-kinematics-based health monitoring of FGMs.

  10. Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy

    PubMed Central

    Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.

    2017-01-01

    Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168

  11. Damage assessment and residual compression strength of thick composite plates with through-the-thickness reinforcements

    NASA Technical Reports Server (NTRS)

    Smith, Barry T.

    1990-01-01

    Damage in composite materials with through-the-thickness reinforcements was studied. As a first step, it was necessary to develop new ultrasonic imaging technology to better assess internal damage of the composite. A useful ultrasonic imaging technique was successfully developed to assess the internal damage of composite panels. The ultrasonic technique accurately determines the size of the internal damage. It was found that the ultrasonic imaging technique was better able to assess the damage in a composite panel with through-the-thickness reinforcements than destructively sectioning the specimen and performing visual inspection under a microscope. Five composite compression-after-impact panels were tested. The compression-after-impact strength of the panels with the through-the-thickness reinforcements was almost twice that of the comparable panel without through-the-thickness reinforcement.

  12. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.

  13. Piezoelectric properties of synthetic hydroxyapatite-based organic-inorganic hydrated materials

    NASA Astrophysics Data System (ADS)

    Rodriguez, Rogelio; Rangel, Domingo; Fonseca, Gerardo; Gonzalez, Maykel; Vargas, Susana

    Disks of synthetic hydroxyapatite agglutinated with a synthetic polymer and hydrated in a moisture fog were prepared. A well-defined piezoelectric signal was obtained from these samples when a relatively small compression stress of 35 MPa (corresponding to a force of 450 daN) was applied; piezoelectric signals of up to 12 mV were obtained at this stress. Two different compression methods were followed to obtain the piezoelectric signal: (a) the hold method, where the load was maintained constant once it reached the maximum stress, and (b) the release method, where the load was removed rapidly when the stress reached its maximum value. The samples were characterized using X-ray diffraction, dielectric relaxation spectroscopy and mechanical testing.

  14. Analysis of distortion data from TF30-P-3 mixed compression inlet test

    NASA Technical Reports Server (NTRS)

    King, R. W.; Schuerman, J. A.; Muller, R. G.

    1976-01-01

    A program was conducted to reduce and analyze inlet and engine data obtained during testing of a TF30-P-3 engine operating behind a mixed compression inlet. Previously developed distortion analysis techniques were applied to the data to assist in the development of a new distortion methodology. Instantaneous distortion techniques were refined as part of the distortion methodology development. A technique for estimating maximum levels of instantaneous distortion from steady state and average turbulence data was also developed as part of the program.

  15. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower-rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates, from kilobits per second up to rates approaching the modulator burst rate (hundreds of megabits per second), can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
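
    The following is a conceptual sketch of the FIFO-based rate conversion described above: a user writes samples continuously at a low rate, and the terminal drains the buffer in short high-rate bursts aligned with its TDMA window. The rates, frame length, and burst size are illustrative values chosen so the buffer neither grows nor starves.

```python
# FIFO-based burst compression/expansion, simulated over discrete time slots.
from collections import deque

fifo = deque()
user_rate = 3            # samples produced per time slot (continuous stream)
burst_size = 30          # samples sent per TDMA window
frame = 10               # the terminal owns 1 slot out of every 10

for slot in range(100):
    fifo.extend(f"s{slot}:{i}" for i in range(user_rate))   # buffer fills steadily
    if slot % frame == frame - 1:                           # our TDMA window arrives
        burst = [fifo.popleft() for _ in range(min(burst_size, len(fifo)))]
        # `burst` would be transmitted here at the high modulator rate

print("samples left in FIFO after 100 slots:", len(fifo))   # steady state: bounded
```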

  16. The impact of manual defibrillation technique on no-flow time during simulated cardiopulmonary resuscitation.

    PubMed

    Perkins, Gavin D; Davies, Robin P; Soar, Jasmeet; Thickett, David R

    2007-04-01

    Rapid defibrillation is the most effective strategy for establishing return of spontaneous circulation following cardiac arrest due to ventricular fibrillation. The aim of this study is to measure the reduction in the pre-shock pause between cessation of chest compressions and shock delivery achieved by charging the defibrillator during chest compressions, as advocated by the American Heart Association (AHA) guidelines, compared to charging the defibrillator immediately following rhythm analysis without resuming chest compressions, as recommended by the European Resuscitation Council (ERC). This was a randomised controlled crossover trial comparing pre-shock pause times when defibrillation was performed on a manikin according to the AHA and ERC guidelines using paddles and hands-free defibrillation systems. The pre-shock pause between cessation of chest compression and shock delivery was significantly different between techniques (Friedman test, P<0.0001). The ERC paddles technique had the greatest pre-shock pause (7.4 s [6.7-11.2]), followed by ERC hands-free (7.0 s [6.5-8.5]) and AHA paddles (1.6 s [1.1-2.3]). AHA hands-free took the least amount of time (1.5 s [0.8-1.5]). Extrapolating these data to older defibrillators with longer charge times gave pre-shock pause intervals of 9 s (Codemaster XL) and 12 s (Lifepak 20) with the ERC approach. This study demonstrated clinically significant delays to defibrillation from analysing and charging the defibrillator without performing concurrent chest compressions. In a simulated scenario, charging the defibrillator whilst performing chest compressions was perceived as safe and significantly reduced the pre-shock pause between cessation of chest compression and shock delivery.

  17. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.

    PubMed

    Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy

    2014-11-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression.
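
    One of the measures discussed above, the D2 statistic, is easy to state concretely: it is the inner product of the k-mer (word) count vectors of two sequences. A minimal sketch follows; the word length and toy sequences are arbitrary choices.

```python
# The D2 statistic: inner product of k-mer count vectors.
from collections import Counter

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(a, b, k=3):
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

print(d2("ACGTACGTGA", "ACGTTTACGT"))   # higher = more shared word content
```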

  18. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
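
    A hedged sketch of the core idea follows: the current pixel's bands are re-ordered using the ordering observed at the spatially previous pixel, so a decoder can derive the same order without side information, and a simple previous-band predictor is then applied. The synthetic data, the trivial predictor, and the choice of the immediately preceding pixel as the neighbour are simplifying assumptions.

```python
# Neighbour-derived spectral re-ordering before previous-band prediction.
import numpy as np

rng = np.random.default_rng(2)
bands, npix = 16, 100
shape = rng.random(bands) * 100                      # arbitrary spectral shape
pixels = shape + rng.normal(0, 1.0, (npix, bands))   # adjacent pixels are similar

reordered, natural = [], []
for p in range(1, npix):
    order = np.argsort(pixels[p - 1])    # neighbour's band ordering (no overhead)
    reordered.extend(np.diff(pixels[p][order]))      # residuals after re-ordering
    natural.extend(np.diff(pixels[p]))               # residuals in natural order

print("mean |residual|, re-ordered:", np.mean(np.abs(reordered)).round(2))
print("mean |residual|, natural   :", np.mean(np.abs(natural)).round(2))
```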

  19. Application of Compressible Volume of Fluid Model in Simulating the Impact and Solidification of Hollow Spherical ZrO2 Droplet on a Surface

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah; Jazi, Hamidreza Salimi; Mostaghimi, Javad

    2017-12-01

    Applications of hollow spherical particles in the thermal spraying process have been developed in recent years, accompanied by experimental and numerical studies to better understand the impact of a hollow droplet on a surface. During such a process, the volume and density of the trapped gas inside the droplet change. Numerical models should be able to simulate such changes and their consequent effects. The aim of this study is to numerically simulate the impact of a hollow ZrO2 droplet on a flat surface using the volume of fluid technique for compressible flows. An open-source, finite-volume-based CFD code was used to perform the simulations, with appropriate subprograms added to handle the studied cases. Simulation results were compared with the available experimental data. Results showed that at high impact velocities (U0 > 100 m/s), the compression of the trapped gas inside the droplet played a significant role in the impact dynamics. At such velocities, the droplet splashed explosively. Compressibility effects result in a more porous splat compared to the corresponding incompressible model. Moreover, the compressible model predicted a higher spread factor than the incompressible model, due to the planetary structure of the splat.

  20. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
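
    In the spirit of the scheme described above (though not its actual codec), the sketch below performs block-based transform coding of a volume: fixed-size blocks are transformed with a 3D DCT, quantized, and decoded with the dequantization folded into the inverse-transform step, which is the part that can be precomputed. The block size, the DCT choice, and the quantization step are assumptions.

```python
# Block-based transform coding of a volume with a 3D DCT.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
volume = rng.random((32, 32, 32)).astype(np.float32)
B, q = 8, 0.05                                     # block size, quantization step

# Split into 8x8x8 blocks: shape (4, 4, 4, 8, 8, 8).
blocks = volume.reshape(4, B, 4, B, 4, B).transpose(0, 2, 4, 1, 3, 5)
coeffs = dctn(blocks, axes=(-3, -2, -1), norm="ortho")
quantized = np.round(coeffs / q).astype(np.int16)  # what would be stored/entropy-coded

# Decoder: dequantization (scale by q) folded into the inverse transform step.
decoded = idctn(quantized * q, axes=(-3, -2, -1), norm="ortho")
restored = decoded.transpose(0, 3, 1, 4, 2, 5).reshape(32, 32, 32)
print("max error:", np.abs(restored - volume).max())
```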

  1. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
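
    Below is a minimal compressive-sampling demonstration in the spirit of the setup analyzed above: a k-sparse scene is recovered from m << n random measurements with orthogonal matching pursuit. The dimensions, sparsity level, and Gaussian measurement matrix are illustrative assumptions and carry none of the paper's information-theoretic analysis.

```python
# Sparse recovery from sub-Nyquist measurements via orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 5                          # scene size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = Phi @ x                                   # m << n measurements

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))          # best-matching atom
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # refit on support
    r = y - Phi[:, support] @ coef                              # update residual

x_hat = np.zeros(n)
x_hat[support] = coef
print("reconstruction error:", np.linalg.norm(x - x_hat))
```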

  2. Analysis and Performance Evaluation of Electrocardiogram Data compression Techniques.

    DTIC Science & Technology

    1980-12-01

    techniques were investigated for potential real-time implementation on an 8-bit Motorola 6800 microprocessor. Research indicated entropy reduction transform... EKG has been an area of active research since the late nineteen sixties. References (1), (7), (12), (26), (28), (29), (32), (33), and (35) are... representative of the research efforts performed in the last ten years. The reasons for compressing EKG data are twofold: 1) digital storage costs are

  3. A very efficient RCS data compression and reconstruction technique, volume 4

    NASA Technical Reports Server (NTRS)

    Tseng, N. Y.; Burnside, W. D.

    1992-01-01

    A very efficient compression and reconstruction scheme for RCS measurement data was developed. The compression is done by isolating the scattering mechanisms on the target and recording their individual responses in the frequency and azimuth scans, respectively. The reconstruction, which is the inverse of the compression, is guaranteed by the sampling theorem. Two sets of data, for corner reflectors and an F-117 fighter model, were processed, and the results were shown to be convincing. The compression ratio can be as large as several hundred, depending on the target's geometry and scattering characteristics.

  4. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160
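
    As a toy quantitative answer to "why compression": even a general-purpose tool such as gzip shrinks highly redundant FASTQ-like text many-fold, and the specialized tools surveyed in the review do considerably better on real data. The repeated record below deliberately exaggerates the redundancy for illustration.

```python
# Toy demonstration of general-purpose compression on FASTQ-like text.
import gzip

record = b"@read1\nACGTACGTACGTACGTACGT\n+\nIIIIIIIIIIIIIIIIIIII\n"
data = record * 10000                 # artificially redundant input
packed = gzip.compress(data)
print(f"{len(data)} -> {len(packed)} bytes "
      f"(ratio {len(data) / len(packed):.0f}:1)")
```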

  5. Printability of calcium phosphate: calcium sulfate powders for the application of tissue engineered bone scaffolds using the 3D printing technique.

    PubMed

    Zhou, Zuoxin; Buchanan, Fraser; Mitchell, Christina; Dunne, Nicholas

    2014-05-01

    In this study, calcium phosphate (CaP) powders were blended with a three-dimensional printing (3DP) calcium sulfate (CaSO4)-based powder and the resulting composite powders were printed with a water-based binder using the 3DP technology. Application of a water-based binder ensured the manufacture of CaP:CaSO4 constructs on a reliable and repeatable basis, without long term damage of the printhead. Printability of CaP:CaSO4 powders was quantitatively assessed by investigating the key 3DP process parameters, i.e. in-process powder bed packing, drop penetration behavior and the quality of printed solid constructs. Effects of particle size, CaP:CaSO4 ratio and CaP powder type on the 3DP process were considered. The drop penetration technique was used to reliably identify powder formulations that could be potentially used for the application of tissue engineered bone scaffolds using the 3DP technique. Significant improvements (p<0.05) in the 3DP process parameters were found for CaP (30-110 μm):CaSO4 powders compared to CaP (<20 μm):CaSO4 powders. Higher compressive strength was obtained for the powders with the higher CaP:CaSO4 ratio. Hydroxyapatite (HA):CaSO4 powders showed better results than beta-tricalcium phosphate (β-TCP):CaSO4 powders. Solid and porous constructs were manufactured using the 3DP technique from the optimized CaP:CaSO4 powder formulations. High-quality printed constructs were manufactured, which exhibited appropriate green compressive strength and a high level of printing accuracy.

  6. Alaska SAR Facility (ASF5) SAR Communications (SARCOM) Data Compression System

    NASA Technical Reports Server (NTRS)

    Mango, Stephen A.

    1989-01-01

    Described are the real-time operational requirements for translating SARCOM into a high-speed image data handler and processor that achieves the desired compression ratios, and the selection of a suitable image data compression technique that has fidelity (information) losses as low as possible and can be implemented in an algorithm placing a relatively low arithmetic load on the system.

  7. An Assessment of the Effect of Compressibility on Dynamic Stall

    NASA Technical Reports Server (NTRS)

    Carr, Lawrence W.; Chandrasekhara, M. S.; David, Sanford S. (Technical Monitor)

    1994-01-01

    Compressibility plays a significant role in the development of separation on airfoils experiencing unsteady motion, even at moderately compressible free-stream flow velocities. This effect can result in completely changed stall characteristics compared to those observed at incompressible speeds, and can dramatically affect techniques used to control separation. There has been a significant effort in recent years directed toward better understanding of this process, and its impact on possible techniques for control of separation in this complex environment. A review of existing research in this area will be presented, with emphasis on the physical mechanisms that play such an important role in the development of separation on airfoils. The increasing impact of compressibility on the stall process will be discussed as a function of free-stream Mach number, and an analysis of the changing flow physics will be presented. Examples of the effect of compressibility on dynamic stall will be selected from both recent and historical efforts by members of the aerospace community, as well as from the ongoing research program of the present authors. This will include a presentation of a sample of high-speed filming of compressible dynamic stall which has recently been created using real-time interferometry.

  8. Investigation of GDL compression effects on the performance of a PEM fuel cell cathode by lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Molaeimanesh, G. R.; Nazemian, M.

    2017-08-01

    Proton exchange membrane (PEM) fuel cells have great potential for application in vehicle propulsion systems and a promising future. However, to overcome the existing challenges to their wider commercialization, further fundamental research is indispensable. The effects of gas diffusion layer (GDL) compression on the performance of a PEM fuel cell are not well recognized, especially via pore-scale simulation techniques capturing the fibrous microstructure of the GDL. In the current investigation, a stochastic microstructure reconstruction method is proposed which can capture GDL microstructure changes under compression. Afterwards, the lattice Boltzmann pore-scale simulation technique is adopted to simulate the reactive gas flow through 10 different cathode electrodes with dissimilar carbon paper GDLs, produced from five different compression levels and two different carbon fiber diameters. The distributions of oxygen mole fraction, water vapor mole fraction and current density for the simulated cases are presented and analyzed. The results of the simulations demonstrate that when the fiber diameter is 9 μm, adding compression leads to lower average current density, while when the fiber diameter is 7 μm the compression effect is not monotonic.
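
    For orientation, the sketch below implements one bare-bones D2Q9 lattice Boltzmann (BGK) collide-and-stream loop, the family of solver used in the study. The actual work couples reactive transport to a reconstructed fibrous GDL geometry, none of which is attempted here; the lattice size, relaxation time, and crude velocity-shift forcing are assumptions.

```python
# Minimal D2Q9 lattice Boltzmann (BGK) loop on a periodic domain.
import numpy as np

# D2Q9 discrete velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
nx, ny, tau = 64, 32, 0.8

f = np.ones((9, ny, nx)) * w[:, None, None]          # fluid initially at rest
for step in range(100):
    rho = f.sum(axis=0)                              # density
    u = np.einsum("qi,qyx->iyx", c, f) / rho         # velocity
    u[0] += 1e-5                                     # crude body-force drive
    cu = np.einsum("qi,iyx->qyx", c, u)
    usq = (u ** 2).sum(axis=0)
    feq = rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f += (feq - f) / tau                             # BGK collision
    for q in range(9):                               # periodic streaming
        f[q] = np.roll(f[q], shift=(c[q, 1], c[q, 0]), axis=(0, 1))

print("mean x-velocity:", u[0].mean())
```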

  9. A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2017-03-31

    The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. As a result, the technique used highlights the relationship between compressed turbulence and decaying turbulence.

  10. Characterization of New PEEK/HA Composites with 3D HA Network Fabricated by Extrusion Freeforming.

    PubMed

    Vaezi, Mohammad; Black, Cameron; Gibbs, David M R; Oreffo, Richard O C; Brady, Mark; Moshrefi-Torbati, Mohamed; Yang, Shoufeng

    2016-05-26

    Addition of bioactive materials such as calcium phosphates or Bioglass, and incorporation of porosity into polyetheretherketone (PEEK) has been identified as an effective approach to improve bone-implant interfaces and osseointegration of PEEK-based devices. In this paper, a novel production technique based on the extrusion freeforming method is proposed that yields a bioactive PEEK/hydroxyapatite (PEEK/HA) composite with a unique configuration in which the bioactive phase (i.e., HA) distribution is computer-controlled within a PEEK matrix. The 100% interconnectivity of the HA network in the biocomposite confers an advantage over alternative forms of other microstructural configurations. Moreover, the technique can be employed to produce porous PEEK structures with controlled pore size and distribution, facilitating greater cellular infiltration and biological integration of PEEK composites within patient tissue. The results of unconfined, uniaxial compressive tests on these new PEEK/HA biocomposites with 40% HA under both static and cyclic mode were promising, showing the composites possess yield and compressive strength within the range of human cortical bone suitable for load bearing applications. In addition, preliminary evidence supporting initial biological safety of the new technique developed is demonstrated in this paper. Sufficient cell attachment, sustained viability in contact with the sample over a seven-day period, evidence of cell bridging and matrix deposition all confirmed excellent biocompatibility.

  11. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  12. Echocardiographic image of an active human heart

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Echocardiographic images provide quick, safe images of the heart as it beats. While a state-of-the-art echocardiograph unit is part of the Human Research Facility on the International Space Station, quick transmission of images and data to Earth is a challenge. NASA is developing techniques to improve the echocardiography available to diagnose sick astronauts as well as study the long-term effects of space travel on their health. Echocardiography uses ultrasound, generated in a sensor head placed against the patient's chest, to produce images of the structure of the heart walls and valves. However, ultrasonic imaging creates an enormous volume of data, up to 220 million bits per second. This can challenge ISS communications as well as Earth-based providers. Compressing data for rapid transmission back to Earth can degrade the quality of the images. Researchers at the Cleveland Clinic Foundation are working with NASA to develop compression techniques that meet imaging standards now used on the Internet and by the medical community, and that ensure that physicians receive quality diagnostic images.

  13. Hydroxyapatite scaffolds processed using a TBA-based freeze-gel casting/polymer sponge technique.

    PubMed

    Yang, Tae Young; Lee, Jung Min; Yoon, Seog Young; Park, Hong Chae

    2010-05-01

    A novel freeze-gel casting/polymer sponge technique has been introduced to fabricate porous hydroxyapatite scaffolds with controlled "designer" pore structures and improved compressive strength for bone tissue engineering applications. Tertiary-butyl alcohol (TBA) was used as the solvent in this work. The merits of each production process (freeze casting, gel casting, and the polymer sponge route) were characterized by the sintered microstructure and mechanical strength. A reticulated structure with a large pore size of 180-360 microm, formed on burn-out of the polyurethane foam, consisted of struts containing highly interconnected, unidirectional, long pore channels (approximately 4.5 microm in diameter), produced by evaporation of frozen TBA during freeze casting, together with dense inner walls containing a few isolated fine pores (<2 microm) produced by gel casting. The sintered porosity and pore size generally behaved in an opposite manner to the solid loading, i.e., a high solid loading gave low porosity and small pore size and a thickening of the strut cross section, thus leading to higher compressive strengths.

  14. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  15. Granular Material Response to Dynamic Shock Compression: A Study of SiO2 in the Form of Sand and Soda Lime Glass Beads

    DTIC Science & Technology

    2011-06-01

    method was used instead of more accurate immersion techniques based on Archimedes' principle. The initial volume of the technical sand was determined by filling...of Porous Materials. In solid materials, small stresses and strains are very close to being the same as the shock Hugoniot and the principal isentrope

  16. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis

    PubMed Central

    Steele, Joe; Bastola, Dhundy

    2014-01-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
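
    As an illustration of the word-analysis methods surveyed here, the sketch below compares two sequences by their k-mer (word) frequency profiles. The choice of k = 3 and of the cosine distance are illustrative assumptions, not a method prescribed by the review.

        import math
        from collections import Counter

        def kmer_profile(seq, k=3):
            """Count overlapping words of length k (a feature frequency profile)."""
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def cosine_distance(p, q):
            """Alignment-free distance between two k-mer profiles."""
            words = set(p) | set(q)
            dot = sum(p[w] * q[w] for w in words)
            norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
            return 1.0 - dot / norm

        a = "ATGGCGTGCAATGGCGT"
        b = "ATGGCGTCCAATGGAGT"
        print(cosine_distance(kmer_profile(a), kmer_profile(b)))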

  17. Computer-Based Method for On-Line Service and Compact Storage of Data

    NASA Astrophysics Data System (ADS)

    Vasilyev, S. V.

    A new method for compressing some types of astronomical data is proposed and discussed. The method is intended to provide astronomers with a more convenient technique for data retrieval from observational databases. The technique is based on the principal component method (PCM) of data analysis and the representation of data by characteristic vectors and eigenvalues. It allows a large variety of data records to be replaced by a relatively small number of parameters. The initial data can be restored simply by linear combinations of the obtained characteristic vectors. This approach can essentially reduce the volume of data being stored in databases and transferred through a network. Our study shows that the resulting data volumes depend on the required accuracy of the representation and can be several times less than the initial ones. We note that using this method does not prevent applying widely used software for further data compression. As the PCM is able to represent data analytically, it can be used for proper adaptation of the requested information to the researcher's aims. Finally, taking into account that the method itself is a powerful tool for data smoothing, modelling and comparison, we find it has good prospects for use in computer databases. Some examples of PCM applications are described.
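
    A minimal sketch of the idea, assuming numpy's SVD is used to obtain the characteristic vectors: each record (row) is stored as a handful of coefficients plus the shared basis, and restored as a linear combination. The number of retained components (5) and the synthetic data are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        # 200 records of 1000 samples each (e.g., spectra): low-rank structure plus noise
        records = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 1000)) \
                  + 0.01 * rng.normal(size=(200, 1000))

        mean = records.mean(axis=0)
        _, _, vt = np.linalg.svd(records - mean, full_matrices=False)
        basis = vt[:5]                           # characteristic vectors

        coeffs = (records - mean) @ basis.T      # store: coeffs + basis + mean
        restored = mean + coeffs @ basis         # restore by linear combination

        print("max reconstruction error:", np.abs(records - restored).max())
        print("compression factor:", records.size / (coeffs.size + basis.size + mean.size))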

  18. THz-driven zero-slippage IFEL scheme for phase space manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, E.; Fabbri, S.; Musumeci, P.

    In this paper, we describe an inverse free electron laser (IFEL) interaction driven by a near single-cycle THz pulse that is group velocity-matched to an electron bunch inside a waveguide, allowing for a sustained interaction in a magnetic undulator. We discuss the application of this guided-THz IFEL technique for compression of a relativistic electron bunch and synchronization with the external laser pulse used to generate the THz pulse via optical rectification, as well as a laser-driven THz streaking diagnostic with the potential for femtosecond-scale temporal resolution. Initial measurements of the THz waveform via an electro-optic sampling based technique confirm the predicted reduction of the group velocity, using a curved parallel plate waveguide, as a function of the varying aperture size of the guide. We also present the design of a proof-of-principle experiment based on the bunch parameters available at the UCLA PEGASUS laboratory. With a $10\,\mathrm{MV\,m^{-1}}$ THz peak field, our simulation model predicts compression of a $6\,\mathrm{MeV}$, $100\,\mathrm{fs}$ electron beam by nearly an order of magnitude and a significant reduction of its initial timing jitter.

  20. Development, evaluation and pharmacokinetics of time-dependent ketorolac tromethamine tablets.

    PubMed

    Vemula, Sateesh Kumar; Veerareddy, Prabhakar Reddy

    2013-01-01

    The present study was intended to develop time-dependent colon-targeted compression-coated tablets of ketorolac tromethamine (KTM) using hydroxypropyl methylcellulose (HPMC) that release the drug slowly but completely in the colonic region by retarding drug release in the stomach and small intestine. KTM core tablets were prepared by the direct compression method and were compression coated with HPMC. The formulation was optimized based on in vitro drug release studies and further evaluated by an X-ray imaging technique in healthy humans to confirm colonic delivery. To support these results, in vivo pharmacokinetic studies in human volunteers were designed to study the in vitro-in vivo correlation. In the in vitro dissolution study, the optimized formulation F3 showed negligible drug release (6.75 ± 0.49%) in the initial lag period followed by slow release (97.47 ± 0.93%) over 24 h, which clearly indicates that the drug is delivered to the colon. The X-ray imaging studies showed that the tablets reached the colon without disintegrating in the upper gastrointestinal tract. In the pharmacokinetic evaluation, the immediate-release tablets produced a peak plasma concentration (C(max)) of 4482.74 ng/ml at a T(max) of 2 h, whereas the colon-targeted tablets showed a C(max) of 3562.67 ng/ml at a T(max) of 10 h. The areas under the curve for the immediate-release and compression-coated tablets were 10595.14 and 18796.70 ng h/ml, and the mean residence times were 3.82 and 10.75 h, respectively. Thus, compression-coated tablets based on the time-dependent approach are preferred for colon-targeted delivery of ketorolac.

  1. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    PubMed

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications that hybridizes an adaptive Fourier decomposition (AFD) algorithm with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
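
    The two-stage structure (a lossy transform stage followed by a lossless stage) can be imitated with a generic sketch: here the lossy stage keeps the largest DFT coefficients and the lossless stage is zlib, standing in for AFD and SS respectively. The signal, the coefficient count, and the quantization step are illustrative assumptions; PRD is the percentage root mean square difference quoted in the abstract.

        import zlib
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 2, 720, endpoint=False)
        ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t) \
              + 0.02 * rng.normal(size=t.size)                  # toy "ECG" trace

        # Stage 1 (lossy): keep only the 40 largest-magnitude DFT coefficients.
        spec = np.fft.rfft(ecg)
        keep = np.argsort(np.abs(spec))[-40:]
        quantized = np.round(spec[keep] * 100) / 100             # light quantization
        sparse = np.zeros_like(spec)
        sparse[keep] = quantized
        recon = np.fft.irfft(sparse, n=len(ecg))

        # Stage 2 (lossless): pack and entropy-code the retained coefficients.
        payload = np.stack([quantized.real, quantized.imag]).astype(np.float32).tobytes()
        packed = zlib.compress(payload + keep.astype(np.int16).tobytes())

        prd = 100 * np.sqrt(np.sum((ecg - recon) ** 2) / np.sum(ecg ** 2))
        print(f"CR ~ {ecg.astype(np.float32).nbytes / len(packed):.1f}, PRD = {prd:.2f}%")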

  2. Evaluation of simulation-based training on the ability of birth attendants to correctly perform bimanual compression as obstetric first aid.

    PubMed

    Andreatta, Pamela; Gans-Larty, Florence; Debpuur, Domitilla; Ofosu, Anthony; Perosky, Joseph

    2011-10-01

    Maternal mortality from postpartum hemorrhage remains high globally, in large part because women give birth in rural communities where unskilled traditional birth attendants provide care for delivering mothers. Traditional attendants are neither trained nor equipped to recognize or manage postpartum hemorrhage as a life-threatening emergent condition. Recommended treatment includes using uterotonic agents and physical manipulation to aid uterine contraction. In resource-limited areas where obstetric first aid may be the only care option, physical methods such as bimanual uterine compression are easily taught, highly practical and, if performed correctly, highly effective. A simulator with objective performance feedback was designed to teach skilled and unskilled birth attendants to perform the technique. The aim was to evaluate the impact of simulation-based training on the ability of birth attendants to correctly perform bimanual compression in response to postpartum hemorrhage from uterine atony. Simulation-based training was conducted for skilled (N=111) and unskilled birth attendants (N=14) at two regional (Kumasi, Tamale) and two district (Savelugu, Sene) medical centers in Ghana. Training was evaluated using Kirkpatrick's 4-level model. All participants significantly increased their bimanual uterine compression skills after training (p < 0.001). There were no significant differences in 2-week delayed post-test performances, indicating retention (p = 0.52). Applied behavioral and clinical outcomes were reported for 9 months from a subset of birth attendants in Sene District: in 425 births, 13 postpartum hemorrhages were reported without concomitant maternal mortality. The results of this study suggest that simulation-based training for skilled and unskilled birth attendants to perform bimanual uterine compression as postpartum hemorrhage obstetric first aid leads to improved applied procedural skills. Results from a smaller subset of the sample suggest that these skills could potentially lead to improved clinical outcomes, and additional study is merited. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. [Survey of nurses about compression therapy of acute deep venous thrombosis. Field study in Saxony-Anhalt].

    PubMed

    Thieme, Dorothea; Langer, Gero; Behrens, Johann

    2010-03-01

    In clinical practice, compression therapy is an established method for the treatment of acute deep vein thrombosis (DVT). The aim of this study was to clarify the extent to which current guidelines and study results in the field of acute DVT treatment--particularly compression therapy--are implemented in clinical practice. All hospitals in Saxony-Anhalt providing primary diagnosis and therapy for DVT (n = 34) were informed about a survey in 2007, and the nursing staff of the angiology and internal medicine wards in these hospitals were asked to take part. Data were collected with a questionnaire that had been designed and tested for validity in a specialised hospital. 510 questionnaires were distributed, with a response rate of 69 percent. 79 percent of the nursing staff of internal medicine wards in Saxony-Anhalt and 94 percent of the nursing staff of angiology wards said that patients with acute DVT initially received a compression bandage. Significant deficits were visible in the transfer of evidence-based medical and nursing knowledge regarding compression bandaging techniques. The recommended Fischer bandage was applied only in exceptional cases on internal medicine wards (3 percent) and angiology wards (2 percent). On angiology wards, compression stockings were not used as an initial treatment of acute DVT, whereas 21 percent of the nursing staff of internal medicine wards said that they initially applied compression stockings. The treatment of acute DVT is important in clinical practice, and the compression bandage should be applied effectively to the leg. The quality of care and long-term compliance of patients could thus be increased, helping to prevent post-thrombotic syndrome (PTS) and to reduce the duration of hospital stays.

  4. Novel windowing technique realized in FPGA for radar system

    NASA Astrophysics Data System (ADS)

    Escamilla-Hernandez, E.; Kravchenko, V. F.; Ponomaryov, V. I.; Ikuo, Arai

    2006-02-01

    To improve weak-target detection in radar applications, pulse compression is usually used; in the case of linear FM modulation it can improve the SNR. One drawback is that it can add range side-lobes to reflectivity measurements. Using weighting-window processing in the time domain, it is possible to decrease the side-lobe level (SLL) significantly and resolve small or low-power targets that are masked by powerful ones. Classical windows such as Hamming, Hanning, etc. are usually used in window processing. In addition to the classical ones, in this paper we also use a novel class of windows based on atomic function (AF) theory. For comparison of simulation and experimental results we applied standard parameters such as the coefficient of amplification, maximum side-lobe level, width of the main lobe, etc. The compression-windowing model was implemented in hardware on an FPGA. This work aims at demonstrating a reasonably flexible implementation of the linear-FM signal, pulse compression and windowing employing FPGAs. Classical and novel AF window techniques have been investigated to reduce the SLL, taking into account the influence of noise, and to increase the detection ability for small or weak targets in imaging radar. The paper presents experimental hardware results of windowing in pulse-compression radar resolving several targets for rectangular, Hamming, Kaiser-Bessel and (see manuscript for formula) function windows. The windows created by the use of atomic functions offer substantially better reduction of the SLL in the presence of noise, and away from the main lobe, in comparison with classical windows.
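
    The side-lobe reduction can be reproduced with a short numpy sketch: a linear-FM chirp is compressed with a matched filter, once with a rectangular window and once with a Hamming window applied to the reference. The chirp parameters are illustrative assumptions, and the atomic-function windows studied in the paper are not reproduced here.

        import numpy as np

        fs, T, bw = 1e6, 200e-6, 200e3                  # sample rate, pulse width, sweep
        t = np.arange(0, T, 1 / fs)
        chirp = np.exp(1j * np.pi * (bw / T) * t ** 2)  # linear-FM pulse

        for name, win in (("rectangular", np.ones(t.size)), ("Hamming", np.hamming(t.size))):
            ref = (chirp * win).conj()[::-1]            # windowed matched filter
            out = np.abs(np.convolve(chirp, ref))
            out /= out.max()
            peak = int(out.argmax())
            # walk from the peak to the first null on each side to mask the main lobe
            left, right = peak, peak
            while left > 0 and out[left - 1] < out[left]:
                left -= 1
            while right < out.size - 1 and out[right + 1] < out[right]:
                right += 1
            side = np.concatenate([out[:left], out[right + 1:]])
            print(f"{name:11s} peak side-lobe level ~ {20 * np.log10(side.max()):6.1f} dB")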

  5. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  6. Bayesian sparse channel estimation

    NASA Astrophysics Data System (ADS)

    Chen, Chulong; Zoltowski, Michael D.

    2012-05-01

    In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
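
    As a generic baseline for the sparse channel estimation discussed above (not the Bayesian learning method of this paper), the sketch below recovers a sparse real-valued channel from a reduced set of pilot measurements by iterative soft thresholding (ISTA); the dimensions, noise level and regularization weight are illustrative assumptions, and practical channels would be complex-valued.

        import numpy as np

        rng = np.random.default_rng(2)
        n_taps, n_pilots, sparsity = 128, 32, 4

        h = np.zeros(n_taps)                    # sparse channel impulse response
        h[rng.choice(n_taps, sparsity, replace=False)] = rng.normal(size=sparsity)

        A = rng.normal(size=(n_pilots, n_taps)) / np.sqrt(n_pilots)  # pilot measurements
        y = A @ h + 0.01 * rng.normal(size=n_pilots)

        # ISTA: iterative soft thresholding for l1-regularized least squares
        x = np.zeros(n_taps)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        lam = 0.01
        for _ in range(500):
            x = x + step * A.T @ (y - A @ x)                    # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage

        print("estimation MSE:", np.mean((x - h) ** 2))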

  7. Endoscopic removal of a tablespoon lodged within the duodenum

    PubMed Central

    Watanabe, Takashi; Aoyagi, Kunihiko; Tomioka, Yoshitaka; Ishibashi, Hideki; Sakisaka, Shotaro

    2015-01-01

    Here we report the case of a 34-year-old man who underwent endoscopic removal of a tablespoon lying in the stomach with its head lodged within the duodenum. Removal required the use of a two-channel upper endoscope and polypectomy snares. Using the double-snare technique, the spoon was grasped at the proximal and distal parts of the handle. Pulling the double snare alone was unsuccessful; it was then pulled with simultaneous manual abdominal compression of the bulbus from the body surface. Compression was gently applied towards the stomach. As a result, the head of the spoon prolapsed from the bulbus and was easily retracted from the stomach without any complications. In cases of a foreign body lodged within the duodenum, the manual abdominal compression technique may help clinicians pull out the object and avoid surgery. The usefulness of manual compression depends on the foreign body's sharpness and location. PMID:25945026

  8. Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.

    PubMed

    Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan

    2015-11-01

    Wireless sensor networks are engaged in various data-gathering applications. The major bottleneck in wireless data-gathering systems is the finite energy of the sensor nodes; by conserving the on-board energy, the life span of a wireless sensor network can be extended considerably. As data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction serves best in conserving nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce data communications. Forming clusters of nodes with similar data is an effective way to exploit spatial correlation among neighboring sensors, while sending only a subset of the data and estimating the rest from this subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. Intra-cluster communication is reduced using an adaptive normalized-least-mean-squares-based dual prediction framework. The cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both intra-cluster and inter-cluster communications while maintaining optimal accuracy of the collected data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
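
    A minimal sketch of a dual prediction framework of this kind, assuming a normalized-least-mean-squares predictor of order 4 and an error threshold of 0.05 (both hypothetical choices): the sensor and the cluster head run identical predictors, and a sample is transmitted only when the sensor-side prediction error exceeds the threshold.

        import numpy as np

        rng = np.random.default_rng(3)
        signal = np.sin(np.linspace(0, 20, 500)) + 0.01 * rng.normal(size=500)  # slow reading

        order, mu, eps, thresh = 4, 0.5, 1e-6, 0.05
        w = np.zeros(order)                 # predictor weights, identical on both sides
        history = np.zeros(order)
        sent = 0
        received = np.empty_like(signal)

        for i, x in enumerate(signal):
            pred = w @ history
            if abs(x - pred) > thresh:      # prediction failed: transmit the real sample
                sent += 1
                received[i] = x
                w = w + mu * (x - pred) * history / (history @ history + eps)  # NLMS update
            else:                           # head reconstructs the same prediction
                received[i] = pred
            history = np.roll(history, 1)
            history[0] = received[i]        # both sides update from the reconstructed value

        print(f"transmitted {sent}/{len(signal)} samples, "
              f"max error {np.abs(received - signal).max():.3f}")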

  9. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.

  10. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    NASA Astrophysics Data System (ADS)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In the single-pixel imaging scheme, N = r·c samples should be taken for an r×c-pixel image. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires; therefore, to solve the slow data acquisition problem in terahertz (THz) single-pixel imaging, CS is a good candidate. However, changing the mask for each measurement is a challenging problem, since there are no commercial spatial light modulators (SLM) for the THz band yet; circular masks are therefore suggested, so that shifting one or two columns is enough to change the mask between measurements. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy, since it allows the image to be reconstructed from fewer samples.

  11. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    Regarding structural health monitoring (SHM) of seismic acceleration, wireless sensor networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN; in particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communication capability. The dominant frequency band of seismic acceleration is confined within 100 Hz or less. In addition, the response motions on upper floors of a structure are activated at a natural frequency, resulting in induced shaking in a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme by band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to reach the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed into 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
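
    A sketch of the band-limited compress/restore round trip described above, assuming a 200 Hz sampling rate and a synthetic acceleration record: the sender transmits only the lowest 1/32 of the spectrum, and the receiver restores the record by the inverse transform, mirroring the average-error evaluation in the abstract.

        import numpy as np

        fs, n = 200.0, 4096                    # sampling rate (Hz), record length
        t = np.arange(n) / fs
        rng = np.random.default_rng(4)
        accel = np.sin(2 * np.pi * 2.5 * t) + 0.2 * rng.normal(size=n)  # toy response

        # Sender: DFT, keep only the lowest 1/32 of the one-sided spectrum.
        spec = np.fft.rfft(accel)
        k = len(spec) // 32
        compressed = spec[:k]                  # this is all that gets transmitted

        # Receiver: zero-pad the band-limited spectrum and invert.
        padded = np.zeros(len(spec), dtype=complex)
        padded[:k] = compressed
        restored = np.fft.irfft(padded, n=n)

        avg_err = np.mean(np.abs(restored - accel)) / np.max(np.abs(accel))
        print(f"kept {k} of {len(spec)} coefficients, normalized average error = {avg_err:.3f}")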

  12. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
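
    For reference, the sketch below computes the most common objective criterion discussed here, the mean square error, together with the PSNR derived from it, for a pair of gray-scale images; it shows why MSE is convenient even though, as the survey argues, it correlates poorly with the viewer's response.

        import numpy as np

        def mse(original, compressed):
            """Mean square error between two gray-scale images (uint8 arrays)."""
            diff = original.astype(np.float64) - compressed.astype(np.float64)
            return np.mean(diff ** 2)

        def psnr(original, compressed, peak=255.0):
            """Peak signal-to-noise ratio in dB, derived from the MSE."""
            m = mse(original, compressed)
            return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

        rng = np.random.default_rng(5)
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
        print(f"MSE = {mse(img, noisy):.1f}, PSNR = {psnr(img, noisy):.1f} dB")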

  13. Coextrusion-Based 3D Plotting of Ceramic Pastes for Porous Calcium Phosphate Scaffolds Comprised of Hollow Filaments.

    PubMed

    Jo, In-Hwan; Koh, Young-Hag; Kim, Hyoun-Ee

    2018-05-29

    This paper demonstrates the utility of coextrusion-based 3D plotting of ceramic pastes (CoEx-3DP) as a new type of additive manufacturing (AM) technique, which can produce porous calcium phosphate (CaP) ceramic scaffolds comprised of hollow CaP filaments. In this technique, green filaments with a controlled core/shell structure are produced by coextruding an initial feedrod, comprised of a carbon black (CB) core and a CaP shell, through a fine nozzle in an acetone bath, and are then deposited in a controlled manner according to predetermined paths. Channels in the CaP filaments are created through the removal of the CB cores during heat treatment. The produced CaP scaffolds had two types of pores with well-defined geometries: three-dimensionally interconnected pores (~360 × 230 μm² in size) and channels (>100 μm in diameter) in the hollow CaP filaments. The porous scaffolds showed high compressive strengths of ~12.3 ± 2.2 MPa at a high porosity of ~73 vol % when compressed parallel to the direction of the hollow CaP filaments. In addition, the mechanical properties of the porous CaP scaffolds could be tailored by adjusting their porosity, for example, compressive strengths of 4.8 ± 1.1 MPa at a porosity of ~82 vol %. The porous CaP scaffold showed good biocompatibility, as assessed by in vitro cell tests, in which cells adhered to and spread actively on the outer and inner surfaces of the hollow CaP filaments.

  14. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.

  15. Tissue Acoustoelectric Effect Modeling From Solid Mechanics Theory.

    PubMed

    Song, Xizi; Qin, Yexian; Xu, Yanbin; Ingram, Pier; Witte, Russell S; Dong, Feng

    2017-10-01

    The acoustoelectric (AE) effect is a basic physical phenomenon, which underlies the changes made in the conductivity of a medium by the application of focused ultrasound. Recently, based on the AE effect, several biomedical imaging techniques have been widely studied, such as ultrasound-modulated electrical impedance tomography and ultrasound current source density imaging. To further investigate the mechanism of the AE effect in tissue and to provide guidance for such techniques, we have modeled the tissue AE effect using the theory of solid mechanics. Both bulk compression and thermal expansion of tissue are considered and discussed. Computational simulation shows that the AE effect in muscle, expressed as the conductivity change rate, is 3.26 × 10⁻³ at a peak pressure of 4.3 MPa, consistent with the theoretical value. Bulk compression plays the main role in the muscle AE effect, while thermal expansion makes almost no contribution to it. In addition, the AE signals of porcine muscle are measured at different focal positions. With the same order of magnitude and the same trend, the experimental results confirm that the simulation result is effective. Both simulation and experimental results validate that tissue AE effect modeling using solid mechanics theory is feasible, which is of significance for the further development of related biomedical imaging techniques.

  16. A contribution to reduce sampling variability in the evaluation of deoxynivalenol contamination of organic wheat grain.

    PubMed

    Hallier, Arnaud; Celette, Florian; Coutarel, Julie; David, Christophe

    2013-01-01

    Fusarium head blight caused by different varieties of Fusarium species is one of the most serious worldwide diseases in wheat production. It is therefore important to be able to quantify the deoxynivalenol concentration in wheat. Unfortunately, in mycotoxin quantification, due to the uneven distribution of mycotoxins within the initial lot, it is difficult, or even impossible, to obtain a truly representative analytical sample. In previous work we showed that the sampling step most responsible for variability was grain sampling. In this paper, it is more particularly the step scaling down from a laboratory sample of some kilograms to an analytical sample of a few grams that is investigated. The naturally contaminated wheat lot was obtained from an organic field located in the southeast of France (Rhône-Alpes) from the 2008-2009 cropping season. The deoxynivalenol level was found to be 50.6 ± 2.3 ng g⁻¹. Deoxynivalenol was extracted with an acetonitrile-water mix and quantified by gas chromatography-electron capture detection (GC-ECD). Three different grain sampling techniques were tested to obtain analytical samples: a technique based on manual homogenisation and division, a second based on the use of a rotating shaker and a third on the use of compressed air. Both the rotating shaker and the compressed air techniques enabled a homogeneous laboratory sample to be obtained, from which representative analytical samples could be taken. Moreover, these techniques avoided the need for many repetitions and for grinding. This study therefore contributes to reducing sampling variability in the evaluation of deoxynivalenol contamination of organic wheat grain, at a reasonable cost.

  17. Towards efficient backward-in-time adjoint computations using data compression techniques

    DOE PAGES

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches, which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.

  18. The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization

    DTIC Science & Technology

    2015-03-01

    compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of...optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important...Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec

  19. Laser shock compression experiments on precompressed water in ``SG-II'' laser facility

    NASA Astrophysics Data System (ADS)

    Shu, Hua; Huang, Xiuguang; Ye, Junjian; Fu, Sizu

    2017-06-01

    Laser shock compression experiments on precompressed samples offer the possibility to obtain new Hugoniot data over a significantly broader range of density-temperature phase space than was previously achievable. This technique was developed at the "SG-II" laser facility. Hugoniot data were obtained for water in the 300 GPa pressure range by laser-driven shock compression of samples statically precompressed in diamond-anvil cells.

  20. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923
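
    As a baseline for the bits-per-base figures quoted above, the sketch below packs DNA bases at exactly two bits each, the trivial fixed-length code; DNABIT Compress improves on this baseline by assigning short bit codes to repeated fragments, which is not reproduced here.

        # Pack a DNA string at exactly 2 bits per base; the 1.58 bits/base of
        # DNABIT Compress comes from additionally coding exact and reverse repeats.
        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq):
            bits = 0
            for b in seq:
                bits = (bits << 2) | CODE[b]
            return bits, len(seq)

        def unpack(bits, n):
            out = []
            for _ in range(n):
                out.append(BASE[bits & 0b11])
                bits >>= 2
            return "".join(reversed(out))

        seq = "ATGGCGTACGATCGTTAG"
        packed, n = pack(seq)
        assert unpack(packed, n) == seq
        print(f"{n} bases -> {2 * n} bits (2.00 bits/base)")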

  1. Compton scattering measurements from dense plasmas

    DOE PAGES

    Glenzer, S. H.; Neumayer, P.; Doppner, T.; ...

    2008-06-12

    Here, Compton scattering techniques have been developed for accurate measurements of densities and temperatures in dense plasmas. One future challenge is the application of this technique to characterize compressed matter on the National Ignition Facility, where hydrogen and beryllium will approach extremely dense states of matter of up to 1000 g/cc. In this regime, the density, compressibility, and capsule fuel adiabat may be directly measured from the Compton scattered spectrum of a high-energy x-ray line source. Specifically, the scattered spectra directly reflect the electron velocity distribution. In non-degenerate plasmas, the width provides an accurate measure of the electron temperature, while in partially Fermi-degenerate systems that occur in laser-compressed matter it provides the Fermi energy and hence the electron density. Both of these regimes have been accessed in experiments at the Omega laser by employing isochorically heated solid-density beryllium and moderately compressed beryllium foil targets. In the latter experiment, compressions by a factor of 3 at pressures of 40 Mbar have been measured, in excellent agreement with radiation hydrodynamic modeling.

  2. The effect of the descent technique and truck cabin layout on the landing impact forces.

    PubMed

    Patenaude, S; Marchand, D; Samperi, S; Bélanger, M

    2001-12-01

    The majority of injuries to truckers are caused by falls during the descent from the cab of the truck. Several studies have shown that the techniques used to descend from the truck and the layout of the truck's cab are the principal causes of injury. The goal of the present study was to measure the effects of the descent technique used by the trucker and the layout of the truck's cab on the impact forces absorbed by the lower limbs and the back. Kinematic data, obtained with the aid of a video camera, were combined with force platform data to allow calculation of the lower limb and L5-S1 torques as well as the L5-S1 compressive forces. Truckers descended from two different conventional tractor cab layouts, using either the "facing the truck" (FT) or the "back to the truck" (BT) technique. The results demonstrate that the BT technique produces greater ground impact forces than the FT technique, particularly when the truck does not have a handrail. The BT technique also causes an increase in the compressive forces exerted on the back. In conclusion, the use of the FT technique along with the aids (i.e., handrails and all the steps) helps lower the landing impact forces as well as the lumbosacral compressive forces.

  3. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in the absence of gravity, it is also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  4. A study of data coding technology developments in the 1980-1985 time frame, volume 2

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Shahsavari, M. M.

    1978-01-01

    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.

  5. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; this is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and finally (3) summarize a few of the results.

  7. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based super-resolution (SR) algorithm for compressed videos in the discrete cosine transform (DCT) domain. Input to the system is a low-resolution (LR) compressed video together with a high-resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  8. A Computational Study of Shear Layer Receptivity

    NASA Astrophysics Data System (ADS)

    Barone, Matthew; Lele, Sanjiva

    2002-11-01

    The receptivity of two-dimensional, compressible shear layers to local and external excitation sources is examined using a computational approach. The family of base flows considered consists of a laminar supersonic stream separated from nearly quiescent fluid by a thin, rigid splitter plate with a rounded trailing edge. The linearized Euler and linearized Navier-Stokes equations are solved numerically in the frequency domain. The flow solver is based on a high order finite difference scheme, coupled with an overset mesh technique developed for computational aeroacoustics applications. Solutions are obtained for acoustic plane wave forcing near the most unstable shear layer frequency, and are compared to the existing low frequency theory. An adjoint formulation to the present problem is developed, and adjoint equation calculations are performed using the same numerical methods as for the regular equation sets. Solutions to the adjoint equations are used to shed light on the mechanisms which control the receptivity of finite-width compressible shear layers.

  9. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  10. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    O'Rourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
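
    The projection step can be illustrated directly: each transform coefficient of the current estimate is clipped back into the quantization cell implied by the transmitted quantization index, which keeps the estimate consistent with the compressed data. A minimal sketch, assuming a uniform scalar quantizer with step q:

        import numpy as np

        def project_to_cell(coeffs, indices, q):
            """Clip each coefficient into the cell [(k-0.5)q, (k+0.5)q] of its
            transmitted quantization index k, the decoder's constraint set."""
            lo = (indices - 0.5) * q
            hi = (indices + 0.5) * q
            return np.clip(coeffs, lo, hi)

        q = 16.0
        original = np.array([3.0, 40.0, -25.0, 130.0])     # true DCT coefficients
        k = np.round(original / q)                          # what the encoder transmits
        estimate = np.array([10.0, 55.0, -38.0, 120.0])     # after a model-based update
        print(project_to_cell(estimate, k, q))              # back inside the cells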

  11. Compression of facsimile graphics for transmission over digital mobile satellite circuits

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Frank L.

    A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.

  12. An analysis of supersonic flows with low-Reynolds number compressible two-equation turbulence models using LU finite volume implicit numerical techniques

    NASA Technical Reports Server (NTRS)

    Lee, J.

    1994-01-01

    A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for the analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore the limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than the models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow, so the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required, since the solutions obtained with baseline turbulence models are only in reasonable agreement with the experimental data for the viscous interaction problems.

  13. Compressed air injection technique to standardize block injection pressures : [La technique d'injection d'air comprimé pour normaliser les pressions d'injection d'un blocage nerveux].

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression were estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
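
    The arithmetic can be checked with Boyle's law directly (an idealized sketch assuming 760 mmHg atmospheric pressure): halving the air volume doubles the absolute pressure, so the net gauge pressure is about 760 mmHg, consistent with the 744.8 mmHg regression estimate and well below the 1293 mmHg injury threshold.

        # Boyle's law: P1 * V1 = P2 * V2 at constant temperature.
        P_ATM = 760.0                      # mmHg, assumed atmospheric pressure

        def net_injection_pressure(compression):
            """Gauge pressure (mmHg) when the air bubble is held at the given
            fraction of its initial volume, e.g. 0.5 = compressed to 50%."""
            p_abs = P_ATM / compression    # P2 = P1 * V1 / V2
            return p_abs - P_ATM

        for c in (0.75, 0.5, 0.37):
            print(f"{c:.0%} of initial volume -> net {net_injection_pressure(c):6.1f} mmHg")
        # 50% -> 760 mmHg, below the 1293 mmHg threshold; that threshold is
        # crossed only when the bubble is compressed below ~37% of its volume.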

  14. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle the large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation technique using the cosine transform, developed during the last few years, has proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at a compression ratio as high as 20:1.
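
    To make the evaluation method concrete, the sketch below illustrates the kind of ROC comparison described above: observer confidence ratings for the same cases, read from original and compressed images, are scored against ground truth. The data are synthetic placeholders and the AUC routine is simply a convenient stand-in; the study's actual ROC methodology is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic placeholder data: disease presence (ground truth) and observer
# confidence ratings (1-5) for original vs. 20:1-compressed readings.
truth = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
ratings_original = np.array([5, 2, 4, 4, 1, 3, 5, 2, 3, 1])
ratings_compressed = np.array([4, 2, 5, 4, 2, 2, 4, 3, 3, 1])

# Area under the ROC curve for each reading condition; similar AUCs
# suggest compression did not degrade diagnostic accuracy.
print(f"AUC original:   {roc_auc_score(truth, ratings_original):.3f}")
print(f"AUC compressed: {roc_auc_score(truth, ratings_compressed):.3f}")
```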

  15. Directional filtering for block recovery using wavelet features

    NASA Astrophysics Data System (ADS)

    Hyun, Seung H.; Eom, Il K.; Kim, Yoo S.

    2005-07-01

    When images compressed with block-based compression techniques are transmitted over a noisy channel, unexpected block losses occur. Conventional recovery methods that do not consider edge directions can produce blurred block artifacts. In this paper, we present a post-processing block recovery scheme using Haar wavelet features. Neighboring blocks are selected adaptively based on the energy of wavelet subbands (EWS) and the difference between DC values (DDC). The lost blocks are then recovered by linear interpolation in the spatial domain using the selected blocks. Using EWS alone performs well for horizontal and vertical edges, but not as well for diagonal edges. Conversely, using DDC alone performs well for diagonal edges, with the exception of line- or roof-type edge profiles. We therefore combine EWS and DDC for better results. The proposed directional recovery method is effective for strong edges because it adaptively exploits the neighboring blocks according to the edge and directional information in the image, and it outperforms previous methods that used only fixed blocks.
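
    A minimal sketch of the ingredients named above, under stated assumptions: a one-level 2-D Haar transform supplies the detail-subband energies (EWS), block means supply the DC difference (DDC), and a lost block is filled by linear interpolation between two chosen neighbors. The selection rule, thresholds, and exact subband-to-direction mapping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_detail_energies(block: np.ndarray):
    """One-level 2-D Haar transform of an even-sized block; returns the
    energies of the three detail subbands (the EWS features)."""
    a = block[0::2, 0::2].astype(float)
    b = block[0::2, 1::2].astype(float)
    c = block[1::2, 0::2].astype(float)
    d = block[1::2, 1::2].astype(float)
    lh = (a + b - c - d) / 2.0   # responds to horizontal edges
    hl = (a - b + c - d) / 2.0   # responds to vertical edges
    hh = (a - b - c + d) / 2.0   # responds to diagonal edges
    return np.sum(lh**2), np.sum(hl**2), np.sum(hh**2)

def ddc(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Difference between DC values (block means) of two neighbors."""
    return abs(float(block_a.mean()) - float(block_b.mean()))

def recover_lost_block(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Spatial-domain linear interpolation between the last row of the
    top neighbor and the first row of the bottom neighbor."""
    n = top.shape[0]
    w = np.linspace(0.0, 1.0, n + 2)[1:-1][:, None]  # row weights
    return (1.0 - w) * top[-1, :] + w * bottom[0, :]
```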

  16. A general method to determine the stability of compressible flows

    NASA Technical Reports Server (NTRS)

    Guenther, R. A.; Chang, I. D.

    1982-01-01

    Several problems were studied using two completely different approaches. The first method used standard linearized perturbation theory, finding the individual small-disturbance quantities from the equations of motion. These were serially eliminated from the equations of motion to derive a single equation that governs the stability of the fluid dynamic system. These equations could not be reduced unless the steady-state variables depend on only one coordinate. The stability equation based on one dependent variable was found and examined to determine the stability of a compressible swirling jet. The second method applied a Lagrangian approach to the problem. Since the equations developed were based on different assumptions, the stability conditions were compared only for the Rayleigh problem of a swirling flow; both formulations reduce to the Rayleigh criterion. The Lagrangian technique allows the viscous shear terms to be included, which is not possible with the first method. The same problem was then examined to see what effect shear has on stability.
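
    For reference, the classical Rayleigh criterion to which both formulations reduce can be stated as below; the notation is the standard one and is assumed here, not transcribed from the report.

```latex
% Rayleigh's circulation criterion for an inviscid swirling flow: the flow
% is stable to axisymmetric disturbances wherever the squared circulation
% \Gamma = r v_\theta increases outward (notation assumed).
\Phi(r) \;=\; \frac{1}{r^{3}}\,\frac{d}{dr}\bigl(r\,v_\theta\bigr)^{2} \;\ge\; 0
```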

  17. Comparison between various patch-wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the LC cell performs the spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the reconstruction is computationally demanding, because ultra-spectral images involve huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstruction on patches. In this work, we present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out pixel-wise in the spatial domain; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).
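
    As a concrete illustration of the 1D (pixel-wise) strategy, the sketch below reconstructs one spectrum per spatial pixel from an encoded measurement y = A x, using plain ISTA under a sparsity prior. The sensing matrix A, the regularization weight, and the iteration count are illustrative assumptions; the actual CS-MUSI calibration and solver are not reproduced here.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)            # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def reconstruct_cube_pixelwise(A, measurements):
    """measurements: (rows, cols, m) stack of spectrally encoded frames;
    returns a (rows, cols, n_bands) cube, one ISTA solve per pixel."""
    rows, cols, _ = measurements.shape
    cube = np.zeros((rows, cols, A.shape[1]))
    for i in range(rows):
        for j in range(cols):
            cube[i, j, :] = ista(A, measurements[i, j, :])
    return cube
```

    The 2D and 3D strategies replace the per-pixel loop with solves over spatial rows/columns or small sub-cubes, trading memory and runtime for the ability to exploit spatial correlation.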

  18. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content for Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step that converts floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone-mapped LDR video content (using HM10.0) and the compression of perceptually encoded HDR content (HM14.0). At the same PSNR, results show average bit-rate reductions over all sequences and TMOs considered of 20.3% and 27.3% for tone-mapped content, and of 2.4% and 2.7% for HDR content.
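
    The core selection step can be sketched as follows: each candidate prediction mode carries its own quantization, each candidate is scored with a Lagrangian cost J = D + lambda * R, and the winner's quantized data is passed on. The cost model, the rate proxy, and lambda are illustrative assumptions; a real HEVC encoder uses its own mode costs and entropy coder.

```python
import numpy as np

def rd_cost(residual, step, lam):
    """Lagrangian cost J = D + lam * R for quantizing a residual block
    with a given step; counting nonzero levels as the rate is a
    deliberate simplification of a real entropy coder."""
    levels = np.round(residual / step)
    recon = levels * step
    distortion = float(np.sum((residual - recon) ** 2))  # SSD
    rate = float(np.count_nonzero(levels))               # crude rate proxy
    return distortion + lam * rate, recon

def select_mode(residuals_by_mode, step_by_mode, lam=10.0):
    """Pick the (mode, quantization) pair minimizing the RD cost; each
    prediction mode carries its own associated quantization step."""
    best_mode, best_cost, best_recon = None, np.inf, None
    for mode, residual in residuals_by_mode.items():
        cost, recon = rd_cost(residual, step_by_mode[mode], lam)
        if cost < best_cost:
            best_mode, best_cost, best_recon = mode, cost, recon
    return best_mode, best_recon
```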

  19. Temporary morphological changes in plus disease induced during contact digital imaging

    PubMed Central

    Zepeda-Romero, L C; Martinez-Perez, M E; Ruiz-Velasco, S; Ramirez-Ortiz, M A; Gutierrez-Padilla, J A

    2011-01-01

    Objective: To compare and quantify the retinal vascular changes induced by unintentional pressure contact from a digital handheld camera during retinopathy of prematurity (ROP) imaging, by means of a computer-based image analysis system, Retinal Image multiScale Analysis. Methods: A set of 10 wide-angle retinal pairs of photographs per patient, each of whom underwent routine ROP examinations, was measured. Vascular trees were matched between 'compression artifact' (absence of the vascular column at the optic nerve) and 'no compression artifact' conditions. Parameters were analyzed using a two-level linear model for each individual parameter, for arterioles and venules separately: integrated curvature (IC), diameter (d), and tortuosity index (TI). Results: Images affected by the compression artifact showed significant changes in vascular diameter (P<0.01) in both arteries and veins, as well as in arterial IC (P<0.05). Vascular TI remained unchanged in both groups. Conclusions: Inadvertent corneal pressure with the RetCam lens can compress and decrease intra-arterial diameter or even collapse retinal vessels. Careful attention to technique is essential to avoid absence of the arterial blood column at the optic nerve head, which is indicative of increased pressure during imaging. PMID:21760627
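
    For readers unfamiliar with the vessel parameters, the sketch below computes two of them on a polyline of centerline points: the common arc-length over chord-length tortuosity index, and a discrete approximation of integrated curvature. The exact definitions used by the analysis software may differ; these are standard formulations, assumed here.

```python
import numpy as np

def tortuosity_index(points: np.ndarray) -> float:
    """TI: arc length of the vessel centerline / length of its chord.
    points: (N, 2) array of centerline coordinates."""
    seg = np.diff(points, axis=0)
    arc = float(np.sum(np.hypot(seg[:, 0], seg[:, 1])))
    chord = float(np.hypot(*(points[-1] - points[0])))
    return arc / chord

def integrated_curvature(points: np.ndarray) -> float:
    """IC: sum of absolute turning angles between consecutive segments,
    a discrete stand-in for integrating curvature along the vessel."""
    seg = np.diff(points, axis=0)
    angles = np.arctan2(seg[:, 1], seg[:, 0])
    turns = np.diff(angles)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return float(np.sum(np.abs(turns)))
```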

  20. Moisture and temperature influence on mechanical behavior of PPS/buckypapers carbon fiber laminates

    NASA Astrophysics Data System (ADS)

    Rojas, J. A.; Santos, L. F. P.; Costa, M. L.; Ribeiro, B.; Botelho, E. C.

    2017-07-01

    In this work, multiwall carbon nanotubes (MWCNT) were dispersed in water with the assistance of a water-based surfactant and then sonicated in order to obtain a well-dispersed solution. The suspension was filtered under vacuum, generating a thin film called buckypaper (BP). Poly(phenylene sulphide) (PPS) reinforced with carbon fiber (CF) and PPS reinforced with CF/BP composites were manufactured by the hot compression molding technique. Subsequently, the samples were exposed to extreme humidity (90% moisture) combined with high temperature (80 °C). The mechanical properties of the laminates were evaluated by dynamic mechanical analysis, compression shear testing, interlaminar shear strength, and impulse excitation of vibration. The pore volume fractions were 10.93% for PPS/CF and 16.18% for PPS/BP/CF, indicating that the hot compression molding parameters employed in this investigation (1.4 MPa, 5 min, and 330 °C) affected both the consolidation quality of the composites and the mechanical properties of the final laminates.
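
    The abstract does not state how the pore volume fractions were obtained; a common density-based estimate (as in the standard void-content method for resin composites) is the following, with symbols assumed here rather than taken from the paper.

```latex
% Void (pore) volume fraction from densities (standard density method;
% symbols assumed, not from the paper):
%   \rho_m : measured composite density
%   \rho_t : theoretical void-free density (rule of mixtures)
V_v\,(\%) \;=\; 100 \times \frac{\rho_t - \rho_m}{\rho_t}
```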
