Sample records for "achieves high compression"

  1. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2018-07-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  2. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  3. Layered compression for high-precision depth data.

    PubMed

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth data with more than 8 bits per sample has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-bit image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-bit format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-bit image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm.
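
    The MSBs/LSBs partition described above is straightforward to make concrete. Below is a minimal NumPy sketch of the bit-plane split for a 16-bit depth map; the function names and the fixed 8/8 split are illustrative assumptions, and the paper's error-controllable pixel-domain encoder for the MSBs layer is not reproduced.

    ```python
    import numpy as np

    def split_depth_layers(depth16):
        """Split a 16-bit depth map into two 8-bit layers (assumed 8/8 split)."""
        msb = (depth16 >> 8).astype(np.uint8)    # rough depth value distribution
        lsb = (depth16 & 0xFF).astype(np.uint8)  # details of the depth variation
        return msb, lsb

    def merge_depth_layers(msb, lsb):
        """Reassemble the 16-bit depth map from the two 8-bit layers."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

    depth = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)
    msb, lsb = split_depth_layers(depth)
    assert np.array_equal(merge_depth_layers(msb, lsb), depth)  # lossless split
    ```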

  4. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gains, resulting in high fidelity and high compression.

  5. High-speed and high-ratio referential genome compression.

    PubMed

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as small alphabet size and frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach to achieving high compression ratios. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than that of the best competing algorithm in its best case, and our compression speed is at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes; in contrast, the competing methods' performance varies widely with the reference genome. Further experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use at https://github.com/yuansliu/HiRGC. Contact: jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online.
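
    The two ingredients named in this abstract, a 2-bit-friendly DNA alphabet and a greedy-matching search on a k-mer hash table, can be sketched compactly. This is not the HiRGC implementation (see the linked repository for that); the k-mer length and the match/literal token format are assumptions chosen for illustration.

    ```python
    from collections import defaultdict

    def compress_referential(target, reference, k=11):  # k-mer size is assumed
        """Greedy longest-match parsing of `target` against a hashed reference;
        emits ('M', position, length) copies and ('L', base) literals."""
        index = defaultdict(list)
        for i in range(len(reference) - k + 1):
            index[reference[i:i + k]].append(i)
        tokens, i = [], 0
        while i < len(target):
            best_pos, best_len = -1, 0
            for p in index.get(target[i:i + k], ()):
                l = k  # extend the seed match as far as it goes
                while (i + l < len(target) and p + l < len(reference)
                       and target[i + l] == reference[p + l]):
                    l += 1
                if l > best_len:
                    best_pos, best_len = p, l
            if best_len >= k:
                tokens.append(('M', best_pos, best_len))  # copy from reference
                i += best_len
            else:
                tokens.append(('L', target[i]))           # mismatching base
                i += 1
        return tokens

    ref = "ACGTACGTGGATCCACGT"
    tgt = "ACGTACGTGGTTCCACGT"  # one substitution versus the reference
    print(compress_referential(tgt, ref, k=4))
    # [('M', 0, 10), ('L', 'T'), ('M', 11, 7)]
    ```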

  6. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while still retaining the advantage in speed: 1) selecting a good reference sequence, and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large dataset from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.

  7. Broadband and tunable high-performance microwave absorption of an ultralight and highly compressible graphene foam.

    PubMed

    Zhang, Yi; Huang, Yi; Zhang, Tengfei; Chang, Huicong; Xiao, Peishuang; Chen, Honghui; Huang, Zhiyu; Chen, Yongsheng

    2015-03-25

    The broadband and tunable high-performance microwave absorption properties of an ultralight and highly compressible graphene foam (GF) are investigated. Simply via physical compression, the microwave absorption performance can be tuned. A qualified bandwidth coverage of 93.8% (60.5 GHz out of 64.5 GHz) is achieved for the GF under 90% compressive strain (1.0 mm thickness). This is mainly attributed to the 3D conductive network.

  8. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models which can be used to accurately represent the type of line drawings that occur in teleconferencing and transmission for remote classrooms, and which permit considerable data compression, were described. The objective was to encode these pictures in binary sequences of shortest length, but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compression ratios in the range of 30:1 to 100:1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be handled by taking a similar approach, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.

  9. Graphene/Polyaniline Aerogel with Superelasticity and High Capacitance as Highly Compression-Tolerant Supercapacitor Electrode

    NASA Astrophysics Data System (ADS)

    Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei

    2017-12-01

    Superelastic graphene aerogel with ultra-high compressibility shows promising potential as a compression-tolerant supercapacitor electrode. However, its specific capacitance is too low for practical applications. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. The graphene/PANI aerogel with an optimized PANI mass content of 63 wt% shows an improved specific capacitance of 713 F g⁻¹ in the three-electrode system, and it presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. All-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of the graphene/PANI electrodes. The gravimetric capacitance of the graphene/PANI electrodes reaches 424 F g⁻¹ and retains 96% of this value even at 90% compressive strain, and a volumetric capacitance of 65.5 F cm⁻³ is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting their potential for practical applications.

  10. Graphene/Polyaniline Aerogel with Superelasticity and High Capacitance as Highly Compression-Tolerant Supercapacitor Electrode.

    PubMed

    Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei

    2017-12-19

    Superelastic graphene aerogel with ultra-high compressibility shows promising potential as a compression-tolerant supercapacitor electrode. However, its specific capacitance is too low for practical applications. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. The graphene/PANI aerogel with an optimized PANI mass content of 63 wt% shows an improved specific capacitance of 713 F g⁻¹ in the three-electrode system, and it presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. All-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of the graphene/PANI electrodes. The gravimetric capacitance of the graphene/PANI electrodes reaches 424 F g⁻¹ and retains 96% of this value even at 90% compressive strain, and a volumetric capacitance of 65.5 F cm⁻³ is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting their potential for practical applications.

  11. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescence lifetime imaging of cells and microspheres.

  12. High Order Filter Methods for the Non-ideal Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  13. Divergence Free High Order Filter Methods for the Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  14. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is then encrypted by the CGI system with a secret key. The receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve image quality compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced thanks to the combination of compressive sensing and the FFT, and the security level of the ghost images is improved, as assessed under ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  15. Merging-compression formation of high temperature tokamak plasma

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M. P.; Sykes, A.

    2017-07-01

    Merging-compression is a solenoid-free plasma formation method used in spherical tokamaks (STs). Two plasma rings are formed and merged via magnetic reconnection into one plasma ring, which is then radially compressed to form the ST configuration. Plasma currents of several hundred kA and plasma temperatures in the keV range have been produced using this method; however, until recently there was no full understanding of the merging-compression formation physics. In this paper we explain in detail, for the first time, all stages of the merging-compression plasma formation. This method will be used to create ST plasmas in the compact (R ~ 0.4-0.6 m) high-field, high-current (3 T/2 MA) ST40 tokamak. Moderate extrapolation from the available experimental data suggests the possibility of achieving plasma currents of ~2 MA and temperatures in the 10 keV range at densities of ~1-5 × 10²⁰ m⁻³, bringing ST40 plasmas into burning-plasma-relevant (alpha-particle heating) conditions directly from the plasma formation. Issues connected with this approach for ST40 and future ST reactors are discussed.

  16. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bit codes to short segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratios for DNA sequences of larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bit codes (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
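
    For context on the bit-code idea, the sketch below shows the plain 2-bits-per-base packing that DNA compressors of this kind start from (a guaranteed 2.00 bits/base for the four-letter alphabet); DNABIT Compress's additional unique bit codes for exact and reverse repeats, which push the ratio toward 1.58 bits/base, are not reproduced here.

    ```python
    CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        """Pack a DNA string into bytes at 2 bits/base (4 bases per byte)."""
        buf = bytearray()
        for i in range(0, len(seq), 4):
            chunk = seq[i:i + 4]
            byte = 0
            for b in chunk:
                byte = (byte << 2) | CODE[b]
            byte <<= 2 * (4 - len(chunk))  # left-align a short final chunk
            buf.append(byte)
        return bytes(buf), len(seq)       # keep the length to undo padding

    def unpack(buf, n):
        """Recover the DNA string from the packed bytes."""
        seq = []
        for byte in buf:
            for shift in (6, 4, 2, 0):
                seq.append(BASE[(byte >> shift) & 0b11])
        return ''.join(seq[:n])

    data, n = pack('ACGTACGTGG')
    assert unpack(data, n) == 'ACGTACGTGG'  # 10 bases stored in 3 bytes
    ```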

  17. Wavelet data compression for archiving high-resolution icosahedral model data

    NASA Astrophysics Data System (ADS)

    Wang, N.; Bao, J.; Lee, J.

    2011-12-01

    With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve limited reduction of data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for the summer and winter seasons. An icosahedral-grid weather model and highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and input to the model, which was then run out to 10 days. The forecast results were then compared to the forecast results from the model run with the original, uncompressed initial conditions. Several visual comparisons, as well as statistics of the numerical comparisons, are presented. These results indicate that, with specified bounds on accuracy loss, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed to increase the archive efficiency while retaining a complete set of metadata for each archived file.
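
    A minimal sketch of the coefficient-thresholding step at the heart of wavelet archiving, using the PyWavelets package; the wavelet family, decomposition level, and tolerance are assumed values rather than those of the software used in the study. Coefficients below the tolerance are zeroed, and the resulting sparse array is what an archive format would then entropy-code.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_compress(field, wavelet='bior4.4', level=3, rel_tol=1e-3):
        """Zero wavelet coefficients below a relative tolerance; return the
        restored field and the fraction of coefficients discarded."""
        coeffs = pywt.wavedec2(field, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = pywt.threshold(arr, rel_tol * np.abs(arr).max(), mode='hard')
        sparsity = float(np.mean(arr == 0))  # proxy for achievable size reduction
        restored = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), wavelet)
        return restored[:field.shape[0], :field.shape[1]], sparsity

    # a smooth synthetic "model field" compresses well under this scheme
    field = np.fromfunction(lambda i, j: np.sin(i / 8) * np.cos(j / 8), (128, 128))
    restored, sparsity = wavelet_compress(field)
    print(f"zeroed {sparsity:.1%} of coefficients, "
          f"max abs error {np.abs(restored - field).max():.2e}")
    ```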

  18. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described, showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  19. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bit codes to short segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratios for DNA sequences of larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bit codes (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  20. A pulse-compression-ring circuit for high-efficiency electric propulsion.

    PubMed

    Owens, Thomas L

    2008-03-01

    A highly efficient, highly reliable pulsed-power system has been developed for use in high power, repetitively pulsed inductive plasma thrusters. The pulsed inductive thruster ejects plasma propellant at a high velocity using a Lorentz force developed through inductive coupling to the plasma. Having greatly increased propellant-utilization efficiency compared to chemical rockets, this type of electric propulsion system may one day propel spacecraft on long-duration deep-space missions. High system reliability and electrical efficiency are extremely important for these extended missions. In the prototype pulsed-power system described here, exceptional reliability is achieved using a pulse-compression circuit driven by both active solid-state switching and passive magnetic switching. High efficiency is achieved using a novel ring architecture that recovers unused energy in a pulse-compression system with minimal circuit loss after each impulse. As an added benefit, voltage reversal is eliminated in the ring topology, resulting in long lifetimes for energy-storage capacitors. System tests were performed using an adjustable inductive load at a voltage level of 3.3 kV, a peak current of 20 kA, and a current switching rate of 15 kA/µs.

  1. Engineering tough, highly compressible, biodegradable hydrogels by tuning the network architecture.

    PubMed

    Gu, Dunyin; Tan, Shereen; Xu, Chenglong; O'Connor, Andrea J; Qiao, Greg G

    2017-06-20

    By precisely tuning the network architecture, tough, highly compressible hydrogels were engineered. The hydrogels were made by interconnecting high-functionality hydrophobic domains through linear triblock chains consisting of soft hydrophilic middle blocks flanked by flexible hydrophobic blocks. Demonstrating their applicability, efficient encapsulation and prolonged release of hydrophobic drugs were achieved.

  2. High-speed real-time image compression based on all-optical discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2017-02-01

    In this paper, we present a high-speed single-pixel imaging (SPI) system based on an all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) that is three orders of magnitude higher than the switching rate of the digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on the DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which suggests broad applications in industrial quality control and biomedical imaging.
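
    The compression step itself is ordinary DCT coefficient truncation, which can be sketched with SciPy; keeping roughly one coefficient in ten mirrors the ~10:1 ratio quoted above. The all-optical pattern generation has no software analogue here, so this sketch only illustrates the transform-domain truncation.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_truncate(img, keep=0.1):
        """Retain only the lowest-frequency block of 2-D DCT coefficients,
        roughly a (1/keep):1 reduction in coefficient count."""
        c = dctn(img, norm='ortho')
        h = int(img.shape[0] * keep ** 0.5)
        w = int(img.shape[1] * keep ** 0.5)
        truncated = np.zeros_like(c)
        truncated[:h, :w] = c[:h, :w]          # retained low-frequency block
        return idctn(truncated, norm='ortho')  # approximate reconstruction

    img = np.random.rand(64, 64)
    approx = dct_truncate(img, keep=0.1)       # ~10:1 coefficient reduction
    print(f"rms error: {np.sqrt(np.mean((img - approx) ** 2)):.3f}")
    ```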

  3. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA®. The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.

  4. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities.

    PubMed

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten

    2012-11-19

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.

  5. Compression of helium to high pressures and temperatures using a ballistic piston apparatus

    NASA Technical Reports Server (NTRS)

    Roman, B. P.; Rovel, G. P.; Lewis, M. J.

    1971-01-01

    Some preliminary experiments are described which were carried out in a high enthalpy laboratory to investigate the compression of helium, a typical shock-tube driver gas, to very high pressures and temperatures by means of a ballistic piston. The purpose of these measurements was to identify any problem areas in the compression process, to determine the importance of real gas effects during this process, and to establish the feasibility of using a ballistic piston apparatus to achieve temperatures in helium in excess of 10,000 K.

  6. Reconstructing high-dimensional two-photon entangled states via compressive sensing

    PubMed Central

    Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan

    2014-01-01

    Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
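
    The singular-value-thresholding principle can be illustrated on the simpler toy problem of recovering a low-rank matrix from a subset of its entries; this is not the authors' modified algorithm, and the threshold, step size, and iteration count below are arbitrary assumptions.

    ```python
    import numpy as np

    def shrink_singular_values(M, tau):
        """Soft-threshold the singular values of M (the core SVT operation)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def svt_recover(measured, mask, tau=2.0, step=1.0, iters=300):
        """Gradient step on the observed entries (mask == True), then
        singular-value shrinkage to keep the estimate low rank."""
        X = np.zeros_like(measured)
        for _ in range(iters):
            X = shrink_singular_values(X + step * mask * (measured - X), tau)
        return X

    rng = np.random.default_rng(0)
    truth = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))  # rank-2 matrix
    mask = rng.random(truth.shape) < 0.4                         # 40% observed
    estimate = svt_recover(truth * mask, mask)
    print(f"relative error: "
          f"{np.linalg.norm(estimate - truth) / np.linalg.norm(truth):.3f}")
    ```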

  7. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  8. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
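
    A minimal sketch of the projected Landweber step for a ghost-imaging-style linear inverse problem y = Ax, using non-negativity of image intensities as the projection; the guided-filter denoising stage described above is omitted, and the problem sizes are illustrative assumptions.

    ```python
    import numpy as np

    def projected_landweber(A, y, iters=500):
        """Iterate x <- P(x + step * A^T (y - A x)); P clips to x >= 0,
        since image intensities cannot be negative."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative convergent step
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = np.clip(x + step * A.T @ (y - A @ x), 0.0, None)
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 400))              # random illumination patterns
    x_true = np.where(rng.random(400) < 0.1,     # sparse non-negative scene
                      rng.random(400), 0.0)
    x_hat = projected_landweber(A, A @ x_true)   # recovery from 200 bucket values
    ```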

  9. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    NASA Astrophysics Data System (ADS)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: a pre-training classification stage based on a stacked autoencoder with a softmax regression layer (the deep net stage), and a re-training classification stage based on the backpropagation (BP) algorithm (the fine-tuning stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.

  10. Rapid-Rate Compression Testing of Sheet Materials at High Temperatures

    NASA Technical Reports Server (NTRS)

    Bernett, E. C.; Gerberich, W. W.

    1961-01-01

    This report describes the test equipment that was developed and the procedures that were used to evaluate structural sheet-material compression properties at preselected constant strain rates and/or loads. Electrical self-resistance heating was used to achieve a rapid heating rate of 200 °F/sec. Four materials were tested at maximum temperatures which ranged from 600 °F for the aluminum alloy to 2000 °F for the Ni-Cr-Co iron-base alloy. Tests at 0.1, 0.001, and 0.00001 in./in./sec showed that strain rate has a major effect on the measured strength, especially at the high temperatures. The tests, under conditions of constant temperature and constant compression stress, showed that creep deformation can be a critical factor even when the time involved is on the order of a few seconds or less. The theoretical and practical aspects of rapid-rate compression testing are presented, and suggestions are made regarding possible modifications of the equipment which would improve the over-all capabilities.

  11. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  12. Advances in high throughput DNA sequence data compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz

    2016-06-01

    Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.

  13. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  14. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  15. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1992-01-01

    In the future, NASA expects to gather over a terabyte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.

  16. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1993-01-01

    In the future, NASA expects to gather over a terabyte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.

  17. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    PubMed

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
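
    The lossless/near-lossless duality described here is conventionally obtained by quantizing prediction residuals with a step of 2δ + 1, which bounds every reconstructed sample's error by δ (δ = 0 recovers the lossless mode). The sketch below uses a trivial previous-sample predictor rather than the paper's two algorithms.

    ```python
    def near_lossless_encode(samples, delta):
        """Previous-sample predictor with residual quantization; guarantees
        |x - x_hat| <= delta for every reconstructed sample."""
        q = 2 * delta + 1
        residuals, prev = [], 0
        for x in samples:
            r = x - prev
            # symmetric rounding to the nearest multiple of q
            qr = (r + delta) // q if r >= 0 else -((-r + delta) // q)
            residuals.append(qr)       # entropy-code these indices in practice
            prev = prev + qr * q       # track the decoder-side reconstruction
        return residuals

    def near_lossless_decode(residuals, delta):
        q = 2 * delta + 1
        out, prev = [], 0
        for qr in residuals:
            prev = prev + qr * q
            out.append(prev)
        return out

    data = [10, 12, 15, 40, 38, 37]
    rec = near_lossless_decode(near_lossless_encode(data, delta=1), delta=1)
    assert all(abs(a - b) <= 1 for a, b in zip(data, rec))  # bounded distortion
    ```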

  18. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
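
    To make the prediction-plus-entropy-coding pipeline concrete, here is a toy estimate of the gain from linear prediction on a single synthetic channel: the empirical entropy of the residuals bounds the bits per sample an ideal arithmetic coder would need. The signal and predictor order are illustrative; none of the paper's models are reproduced.

    ```python
    import numpy as np

    def entropy_bits(x):
        """Empirical entropy (bits/sample) of an integer sequence."""
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(2)
    t = np.arange(5000)
    eeg = np.round(50 * np.sin(2 * np.pi * 10 * t / 500)    # 10 Hz rhythm @ 500 Hz
                   + rng.normal(0, 2, t.size)).astype(int)  # sensor noise

    residual = np.diff(eeg)  # order-1 predictor: predict the previous sample
    print(f"raw: {entropy_bits(eeg):.2f} bits/sample, "
          f"residual: {entropy_bits(residual):.2f} bits/sample")
    ```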

  19. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic-range (HDR) imaging is expected, together with ultra-high-definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as by the large legacy base of low-dynamic-range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, this paper proposes to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  20. Ultra-porous titanium oxide scaffold with high compressive strength

    PubMed Central

    Tiainen, Hanna; Lyngstadaas, S. Petter; Ellingsen, Jan Eirik

    2010-01-01

    Highly porous and well interconnected titanium dioxide (TiO2) scaffolds with compressive strength above 2.5 MPa were fabricated without compromising the desired pore architectural characteristics, such as high porosity, appropriate pore size, surface-to-volume ratio, and interconnectivity. Processing parameters and pore architectural characteristics were investigated in order to identify the key processing steps and morphological properties that contributed to the enhanced strength of the scaffolds. Cleaning of the TiO2 raw powder removed phosphates but introduced sodium into the powder, which was suggested to decrease the slurry stability. Strong correlation was found between compressive strength and both replication times and solid content in the ceramic slurry. Increase in the solid content resulted in more favourable sponge loading, which was achieved due to the more suitable rheological properties of the ceramic slurry. Repeated replication process induced only negligible changes in the pore architectural parameters indicating a reduced flaw size in the scaffold struts. The fabricated TiO2 scaffolds show great promise as load-bearing bone scaffolds for applications where moderate mechanical support is required. PMID:20711636

  1. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  2. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
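
    A toy sketch of the frame composition described above: a one-bit edge map carries the scene background while full grey levels are kept only inside high-priority target windows. The gradient edge detector and window format are stand-ins for the auto-cueing and feature-extraction algorithms, which are not reproduced.

    ```python
    import numpy as np

    def edge_map(img, thresh=0.25):
        """Binary background edge map from gradient magnitude: 1 bit/pixel
        instead of 8, before any run-length or entropy coding."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return mag > thresh * mag.max()

    def compose_frame(img, windows):
        """Cheap edge map everywhere; full grey-level detail only inside
        the high-priority target windows."""
        out = edge_map(img).astype(np.uint8) * 255
        for r0, r1, c0, c1 in windows:  # (row/col bounds) for each target
            out[r0:r1, c0:c1] = img[r0:r1, c0:c1].astype(np.uint8)
        return out

    frame = np.random.rand(240, 320) * 255
    sent = compose_frame(frame, windows=[(100, 140, 150, 200)])
    ```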

  3. Fuels for high-compression engines

    NASA Technical Reports Server (NTRS)

    Sparrow, Stanwood W

    1926-01-01

    From theoretical considerations one would expect an increase in power and thermal efficiency to result from increasing the compression ratio of an internal combustion engine. In reality it is upon the expansion ratio that the power and thermal efficiency depend, but since in conventional engines this is equal to the compression ratio, it is generally understood that a change in one ratio is accompanied by an equal change in the other. Tests over a wide range of compression ratios (extending to ratios as high as 14.1) have shown that ordinarily an increase in power and thermal efficiency is obtained as expected provided serious detonation or preignition does not result from the increase in ratio.

  4. Highly efficient frequency conversion with bandwidth compression of quantum light

    PubMed Central

    Allgaier, Markus; Ansari, Vahid; Sansoni, Linda; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Harder, Georg; Brecht, Benjamin; Silberhorn, Christine

    2017-01-01

    Hybrid quantum networks rely on efficient interfacing of dissimilar quantum nodes, as elements based on parametric downconversion sources, quantum dots, colour centres or atoms are fundamentally different in their frequencies and bandwidths. Although pulse manipulation has been demonstrated in very different systems, to date no interface exists that provides both an efficient bandwidth compression and a substantial frequency translation at the same time. Here we demonstrate an engineered sum-frequency-conversion process in lithium niobate that achieves both goals. We convert pure photons at telecom wavelengths to the visible range while compressing the bandwidth by a factor of 7.47 under preservation of non-classical photon-number statistics. We achieve internal conversion efficiencies of 61.5%, significantly outperforming spectral filtering for bandwidth compression. Our system thus makes the connection between previously incompatible quantum systems as a step towards usable quantum networks. PMID:28134242

  5. Compression of a mixed antiproton and electron non-neutral plasma to high densities

    NASA Astrophysics Data System (ADS)

    Aghion, Stefano; Amsler, Claude; Bonomi, Germano; Brusa, Roberto S.; Caccia, Massimo; Caravita, Ruggero; Castelli, Fabrizio; Cerchiari, Giovanni; Comparat, Daniel; Consolati, Giovanni; Demetrio, Andrea; Di Noto, Lea; Doser, Michael; Evans, Craig; Fanì, Mattia; Ferragut, Rafael; Fesel, Julian; Fontana, Andrea; Gerber, Sebastian; Giammarchi, Marco; Gligorova, Angela; Guatieri, Francesco; Haider, Stefan; Hinterberger, Alexander; Holmestad, Helga; Kellerbauer, Alban; Khalidova, Olga; Krasnický, Daniel; Lagomarsino, Vittorio; Lansonneur, Pierre; Lebrun, Patrice; Malbrunot, Chloé; Mariazzi, Sebastiano; Marton, Johann; Matveev, Victor; Mazzotta, Zeudi; Müller, Simon R.; Nebbia, Giancarlo; Nedelec, Patrick; Oberthaler, Markus; Pacifico, Nicola; Pagano, Davide; Penasa, Luca; Petracek, Vojtech; Prelz, Francesco; Prevedelli, Marco; Rienaecker, Benjamin; Robert, Jacques; Røhne, Ole M.; Rotondi, Alberto; Sandaker, Heidi; Santoro, Romualdo; Smestad, Lillian; Sorrentino, Fiodor; Testera, Gemma; Tietje, Ingmari C.; Widmann, Eberhard; Yzombard, Pauline; Zimmer, Christian; Zmeskal, Johann; Zurlo, Nicola; Antonello, Massimiliano

    2018-04-01

    We describe a multi-step "rotating wall" compression of a mixed cold antiproton-electron non-neutral plasma in a 4.46 T Penning-Malmberg trap developed in the context of the AEḡIS experiment at CERN. Such traps are routinely used for the preparation of cold antiprotons suitable for antihydrogen production. A tenfold antiproton radius compression has been achieved, with a minimum antiproton radius of only 0.17 mm. We describe the experimental conditions necessary to perform such a compression: minimizing the tails of the electron density distribution is paramount to ensure that the antiproton density distribution follows that of the electrons. Such electron density tails are remnants of rotating wall compression and in many cases can remain unnoticed. We observe that the compression dynamics for a pure electron plasma behave in the same way as those of a mixed antiproton and electron plasma. Thanks to this optimized compression method and the high single shot antiproton catching efficiency, we observe for the first time cold and dense non-neutral antiproton plasmas with particle densities n ≥ 10¹³ m⁻³, which pave the way for efficient pulsed antihydrogen production in AEḡIS.

  6. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  7. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10⁻² Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps (typically used in fine-grained water diffusion experiments), we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
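
    A minimal sketch of the interframe idea described above, assuming a uniform quantizer and a linear (two-frame) extrapolation predictor; the quantization step and array layout are illustrative, not the paper's exact scheme. The encoder mirrors the decoder's reconstruction, so quantization error never accumulates across frames.

        import numpy as np

        QUANTUM = 1e-3  # quantization step in Angstrom (hypothetical choice)

        def encode(frames):
            """frames: (T, N, 3) positions. Returns integer residuals for entropy coding."""
            residuals = np.empty(frames.shape, dtype=np.int64)
            recon = np.empty_like(frames)
            for t in range(len(frames)):
                if t == 0:
                    pred = np.zeros_like(frames[0])
                elif t == 1:
                    pred = recon[0]                          # constant predictor
                else:
                    pred = 2 * recon[t - 1] - recon[t - 2]   # linear extrapolation
                residuals[t] = np.round((frames[t] - pred) / QUANTUM)
                recon[t] = pred + residuals[t] * QUANTUM     # what the decoder will see
            return residuals

        def decode(residuals):
            recon = np.empty(residuals.shape, dtype=float)
            for t in range(len(residuals)):
                if t == 0:
                    pred = 0.0
                elif t == 1:
                    pred = recon[0]
                else:
                    pred = 2 * recon[t - 1] - recon[t - 2]
                recon[t] = pred + residuals[t] * QUANTUM
            return recon  # per-coordinate error bounded by QUANTUM / 2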

  8. Highly compressible reduced graphene oxide/polypyrrole/MnO2 aerogel electrodes meeting the requirement of limiting space

    NASA Astrophysics Data System (ADS)

    Lv, Peng; Tang, Xun; Yuan, Jiajiao; Ji, Chenglong

    2017-11-01

    Highly compressible electrodes are in high demand in volume-restricted energy storage devices. Superelastic reduced graphene oxide (rGO) aerogels with attractive characteristics are a promising skeleton for compressible electrodes. Herein, a ternary aerogel was prepared by successively electrodepositing polypyrrole (PPy) and MnO2 into a superelastic rGO aerogel. In the rGO/PPy/MnO2 aerogel, the rGO aerogel provides a continuous conductive network; MnO2 is mainly responsible for pseudocapacitive reactions; the middle PPy layer not only reduces the interface resistance between rGO and MnO2 but also further enhances the mechanical strength of the rGO backbone. The synergistic effect of the three components leads to excellent performance, including high specific capacitance, reversible compressibility, and extreme durability. The gravimetric capacitance of the compressible rGO/PPy/MnO2 aerogel electrodes reaches 366 F g⁻¹ and retains 95.3% of this value even under 95% compressive strain. A volumetric capacitance of 138 F cm⁻³ is achieved, much higher than that of other rGO-based compressible electrodes, and 85% of this value is preserved after 3500 charge/discharge cycles under various compression conditions. This work will pave the way for advanced applications in compressible energy-storage devices meeting the requirements of limited space.

  9. Compression of contour data through exploiting curve-to-curve dependence

    NASA Technical Reports Server (NTRS)

    Yalabik, N.; Cooper, D. B.

    1975-01-01

    An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression, based on cubic spline approximation, is extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.

  10. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
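
    The sixth-order central first-derivative stencil that such solvers evaluate at every grid point can be written compactly. The sketch below uses the standard coefficients on a periodic 1D grid; the grid size and test function are illustrative, and none of the GPU-specific optimizations discussed above are shown.

        import numpy as np

        # Standard sixth-order central-difference coefficients for offsets -3..+3
        C = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / 60.0

        def ddx_6th(f, dx):
            """Sixth-order first derivative of f on a periodic 1D grid."""
            out = np.zeros_like(f)
            for k, c in zip(range(-3, 4), C):
                out += c * np.roll(f, -k)     # np.roll(f, -k)[i] == f[i + k]
            return out / dx

        x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
        dx = x[1] - x[0]
        err = np.max(np.abs(ddx_6th(np.sin(x), dx) - np.cos(x)))
        print(f"max error: {err:.2e}")   # tiny (order 1e-12), consistent with sixth order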

  11. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift (SSFS) and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage SSFS and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  12. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
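
    As a concrete illustration of the sparsity-exploiting reconstruction discussed above, the sketch below recovers a sparse vector from undersampled random measurements with the iterative soft-thresholding algorithm (ISTA). The problem sizes and regularization weight are illustrative assumptions; clinical CT/MRI reconstructions use far more elaborate forward models and sparsifying transforms.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 256, 96, 8                      # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
        y = A @ x_true                            # undersampled measurements

        lam = 0.01                                # illustrative l1 weight
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from spectral norm
        x = np.zeros(n)
        for _ in range(500):
            r = x - step * A.T @ (A @ x - y)                          # gradient step
            x = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # soft threshold

        print("relative error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small on success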

  13. High-performance software-only H.261 video compression on PC

    NASA Astrophysics Data System (ADS)

    Kasperovich, Leonid

    1996-03-01

    This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and input video resolutions. As network bandwidths increase, video of higher frame rate and resolution can be transmitted, which in turn requires a software codec able to compress pictures of CIF (352 X 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the codec presented is capable of compressing video in CIF format at 21 - 23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it does not require any specific hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.
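
    At the core of an H.261-style codec is an 8x8 block DCT followed by quantization. A minimal sketch of that stage follows, using a plain uniform quantizer as a stand-in for the standard's quantizer; motion compensation and entropy coding are omitted, and the step size is a hypothetical choice.

        import numpy as np
        from scipy.fft import dctn, idctn

        Q = 16  # hypothetical uniform quantizer step

        def encode_block(block):                   # block: 8x8 uint8 pixels
            coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
            return np.round(coeffs / Q).astype(np.int16)   # quantized coefficients

        def decode_block(qcoeffs):
            block = idctn(qcoeffs.astype(float) * Q, norm="ortho") + 128.0
            return np.clip(np.round(block), 0, 255).astype(np.uint8)

        blk = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        rt = decode_block(encode_block(blk))
        print("max pixel error:",
              int(np.max(np.abs(rt.astype(int) - blk.astype(int)))))  # grows with Q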

  14. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  15. Achieving high aspect ratio wrinkles by modifying material network stress.

    PubMed

    Chen, Yu-Cheng; Wang, Yan; McCarthy, Thomas J; Crosby, Alfred J

    2017-06-07

    Wrinkle aspect ratio, or the amplitude divided by the wavelength, is hindered by strain localization transitions when an increasing global compressive stress is applied to synthetic material systems. However, many examples from living organisms show extremely high aspect ratios, such as gut villi and flower petals. We use three experimental approaches to demonstrate that these high aspect ratio structures can be achieved by modifying the network stress in the wrinkle substrate. We modify the wrinkle stress and effectively delay the strain localization transition, such as folding, to larger aspect ratios by using a zero-stress initial wavy substrate, creating a secondary network with post-curing, or using chemical stress relaxation materials. A wrinkle aspect ratio as high as 0.85, almost three times higher than common values for synthetic wrinkles, is achieved, and a quantitative framework is presented to provide an understanding of the different strategies and predictions for future investigations.

  16. Stainless steel component with compressed fiber Bragg grating for high temperature sensing applications

    NASA Astrophysics Data System (ADS)

    Jinesh, Mathew; MacPherson, William N.; Hand, Duncan P.; Maier, Robert R. J.

    2016-05-01

    A smart metal component with the potential for high-temperature strain sensing is reported. The stainless steel (SS316) structure is made by selective laser melting (SLM). A fiber Bragg grating (FBG) is embedded into a 3D-printed U-groove by high-temperature brazing using a silver-based alloy, achieving an axial FBG compression of 13 millistrain at room temperature. Initial results show that the test component can be used for sensing applications at temperatures up to 700°C.

  17. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  18. Study of radar pulse compression for high resolution satellite altimetry

    NASA Technical Reports Server (NTRS)

    Dooley, R. P.; Nathanson, F. E.; Brooks, L. W.

    1974-01-01

    Pulse compression techniques are studied which are applicable to a satellite altimeter having a topographic resolution of ±10 cm. A systematic design procedure is used to determine the system parameters. The performance of an optimum, maximum likelihood processor is analysed, which provides the basis for modifying the standard split-gate tracker to achieve improved performance. Bandwidth considerations lead to the recommendation of a full deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns. The implementation of the recommended technique is examined.

  19. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  20. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time

  1. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
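
    The encode/decode-by-querying idea can be shown with a toy single-level Bloom filter. Real BARCODE cascades several filters to weed out false positives and works at genome scale, so the filter size, read length, and sequences below are purely illustrative.

        import hashlib

        class Bloom:
            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits, self.n_hashes = n_bits, n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, item):
                for i in range(self.n_hashes):
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.n_bits

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, item):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(item))

        reference = "ACGTACGTTAGCCATGACGTAAGT"
        reads = [reference[i:i + 8] for i in (0, 5, 11)]   # toy 8-bp reads

        bf = Bloom()
        for r in reads:
            bf.add(r)                                      # "encode": store the reads

        recovered = {reference[i:i + 8] for i in range(len(reference) - 7)
                     if reference[i:i + 8] in bf}          # "decode": query windows
        print(recovered >= set(reads))                     # True (plus possible FPs,
                                                           # which the cascade removes)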

  2. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is increasingly demanding in terms of image compression performance. The instrument resolution, agility, and swath of Earth observation satellites are continuously increasing, multiplying the volume of imagery acquired in one orbit by a factor of 10. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very high speed image compression ASIC potentially relevant for the compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data compression. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  3. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method in detecting high-quality JPEG compression, even at the highest quality factors (such as 98, 99, or 100 of standard JPEG compression), from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
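
    The recompression scan described above can be sketched as follows. Here a plain per-pixel difference replaces the paper's tetrolet-covering statistic, so this is only an illustration of the scanning idea; the input filename is hypothetical.

        import io
        import numpy as np
        from PIL import Image

        def change_curve(img, qualities=range(50, 101, 5)):
            """Re-save img at each JPEG quality; record how much it changes."""
            ref = np.asarray(img.convert("L"), dtype=np.int16)
            curve = []
            for q in qualities:
                buf = io.BytesIO()
                img.save(buf, format="JPEG", quality=q)
                rec = np.asarray(Image.open(buf).convert("L"), dtype=np.int16)
                curve.append((q, float(np.mean(np.abs(rec - ref)))))
            return curve   # a local minimum hints at the original quality factor

        img = Image.open("suspect.png").convert("RGB")   # hypothetical input image
        for q, d in change_curve(img):
            print(q, d)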

  4. Modeling Compressibility Effects in High-Speed Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Sarkar, S.

    2004-01-01

    Man has strived to make objects fly faster, first from subsonic to supersonic and then to hypersonic speeds. Spacecraft and high-speed missiles routinely fly at hypersonic Mach numbers, M greater than 5. In defense applications, aircraft reach hypersonic speeds at high altitude and so may civilian aircraft in the future. Hypersonic flight, while presenting opportunities, has formidable challenges that have spurred vigorous research and development, mainly by NASA and the Air Force in the USA. Although NASP, the premier hypersonic concept of the eighties and early nineties, did not lead to flight demonstration, much basic research and technology development was possible. There is renewed interest in supersonic and hypersonic flight with the HyTech program of the Air Force and the Hyper-X program at NASA being examples of current thrusts in the field. At high-subsonic to supersonic speeds, fluid compressibility becomes increasingly important in the turbulent boundary layers and shear layers associated with the flow around aerospace vehicles. Changes in thermodynamic variables: density, temperature and pressure, interact strongly with the underlying vortical, turbulent flow. The ensuing changes to the flow may be qualitative such as shocks which have no incompressible counterpart, or quantitative such as the reduction of skin friction with Mach number, large heat transfer rates due to viscous heating, and the dramatic reduction of fuel/oxidant mixing at high convective Mach number. The peculiarities of compressible turbulence, so-called compressibility effects, have been reviewed by Fernholz and Finley. Predictions of aerodynamic performance in high-speed applications require accurate computational modeling of these "compressibility effects" on turbulence. During the course of the project we have made fundamental advances in modeling the pressure-strain correlation and developed a code to evaluate alternate turbulence models in the compressible shear layer.

  5. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
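
    The byte-split itself is a two-line operation; a minimal sketch with the codec stage elided:

        import numpy as np

        img16 = np.random.randint(0, 2 ** 16, (480, 640), dtype=np.uint16)

        msb = (img16 >> 8).astype(np.uint8)      # most significant bytes (MSB image)
        lsb = (img16 & 0xFF).astype(np.uint8)    # least significant bytes (LSB image)

        # ... compress msb and lsb with any 8-bit image/video codec ...

        restored = (msb.astype(np.uint16) << 8) | lsb
        assert np.array_equal(restored, img16)   # exact if both layers are lossless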

  6. RF pulse compression for future linear colliders

    NASA Astrophysics Data System (ADS)

    Wilson, Perry B.

    1995-07-01

    Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0-1.5 TeV, 5 TeV, and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0-1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150-200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30-40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternatively, to cut the number of klystrons in half for a 1.0-1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.

  7. A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    PubMed Central

    Sriraam, N.

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error model is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with a compression efficiency of 67%) is achieved with the proposed scheme with low encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238

  8. A high-performance lossless compression scheme for EEG signals using wavelet transform and neural network predictors.

    PubMed

    Sriraam, N

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error model is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with a compression efficiency of 67%) is achieved with the proposed scheme with low encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications.
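
    The lossless front end of such a scheme can be illustrated with an integer (lifting) Haar transform, which is perfectly reversible; the neural-network predictor and entropy-coder stages described above are omitted, and the toy samples are illustrative.

        import numpy as np

        def haar_forward(x):                     # x: even-length integer array
            a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
            d = b - a                            # detail coefficients
            s = a + (d >> 1)                     # approximation (integer average)
            return s, d

        def haar_inverse(s, d):
            a = s - (d >> 1)
            b = d + a
            out = np.empty(2 * len(s), dtype=np.int64)
            out[0::2], out[1::2] = a, b
            return out

        eeg = np.random.randint(-500, 500, 1024)          # toy integer EEG samples
        s, d = haar_forward(eeg)
        assert np.array_equal(haar_inverse(s, d), eeg)    # perfectly reversible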

  9. High spatial resolution compressed sensing (HSPARSE) functional MRI.

    PubMed

    Fang, Zhongnan; Van Le, Nguyen; Choy, ManKin; Lee, Jin Hyung

    2016-08-01

    To propose a novel compressed sensing (CS) high spatial resolution functional MRI (fMRI) method and demonstrate the advantages and limitations of using CS for high spatial resolution fMRI. A randomly undersampled variable density spiral trajectory enabling an acceleration factor of 5.3 was designed with a balanced steady state free precession sequence to achieve high spatial resolution data acquisition. A modified k-t SPARSE method was then implemented and applied with a strategy to optimize regularization parameters for consistent, high quality CS reconstruction. The proposed method improves spatial resolution six-fold, with 12 to 47% contrast-to-noise ratio (CNR) and 33 to 117% F-value improvements, while maintaining the same temporal resolution. It also achieves high sensitivity of 69 to 99% compared to the original ground truth, a small false positive rate of less than 0.05, and low hemodynamic response function distortion across a wide range of CNRs. The proposed method is robust to physiological noise and enables detection of layer-specific activities in vivo, which cannot be resolved using the highest spatial resolution Nyquist acquisition. The proposed method enables high spatial resolution fMRI that can resolve layer-specific brain activity and demonstrates the significant improvement that CS can bring to high spatial resolution fMRI. Magn Reson Med 76:440-455, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

  10. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or in similar situations where cost, size, and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
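
    A minimal sketch of the idea for a 1D serial stream: approximate each segment by a truncated Chebyshev series and store only the leading coefficients. The segment length and coefficient count below are illustrative choices, not those of the patent.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress(segment, n_coeffs=12):
            t = np.linspace(-1.0, 1.0, len(segment))      # map segment to [-1, 1]
            return C.chebfit(t, segment, deg=n_coeffs - 1)

        def decompress(coeffs, length):
            t = np.linspace(-1.0, 1.0, length)
            return C.chebval(t, coeffs)

        x = np.linspace(0.0, 2.0 * np.pi, 256)
        signal = np.sin(x)                                 # smooth time series
        coeffs = compress(signal)                          # 256 samples -> 12 numbers
        recon = decompress(coeffs, len(signal))
        print("max abs error:", np.max(np.abs(recon - signal)))   # tiny vs amplitude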

  11. 50% duty cycle may be inappropriate to achieve a sufficient chest compression depth when cardiopulmonary resuscitation is performed by female or light rescuers.

    PubMed

    Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun

    2015-03-01

    Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC) in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and depth of CC, which has been the subject of recent study. Our aim was to determine if 50% DC is inappropriate to achieve sufficient chest compression depth for female and light rescuers. Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and physical characteristics of the performers. Expected ACD was calculated for various settings. DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, with 50% DC, only men with ACR of 140/min or faster or body weight over 74 kg with ACR of 120/min can achieve sufficient ACD. A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers.

  12. 50% duty cycle may be inappropriate to achieve a sufficient chest compression depth when cardiopulmonary resuscitation is performed by female or light rescuers

    PubMed Central

    Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun

    2015-01-01

    Objective Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC) in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and depth of CC, which has been the subject of recent study. Our aim was to determine if 50% DC is inappropriate to achieve sufficient chest compression depth for female and light rescuers. Methods Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and physical characteristics of the performers. Expected ACD was calculated for various settings. Results DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, with 50% DC, only men with ACR of 140/min or faster or body weight over 74 kg with ACR of 120/min can achieve sufficient ACD. Conclusion A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers. PMID:27752567

  13. Compressed sensing for high-resolution nonlipid suppressed ¹H FID MRSI of the human brain at 9.4T.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE ¹H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed ¹H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.

  14. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
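
    Step (4) of the pipeline above can be shown in isolation: the DC components of neighbouring blocks are strongly correlated, so a delta (differential) operator leaves small residuals that are cheap to entropy-code in step (5). A minimal sketch with toy values:

        import numpy as np

        dc = np.array([1024, 1031, 1029, 1040, 1038, 1042])  # toy DC components

        deltas = np.diff(dc, prepend=0)   # first value kept, the rest are differences
        # -> [1024, 7, -2, 11, -2, 4]: small magnitudes suit arithmetic coding

        restored = np.cumsum(deltas)      # the decoder's inverse operation
        assert np.array_equal(restored, dc)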

  15. Treating osteoporotic vertebral compression fractures with intraosseous vacuum phenomena using high-viscosity bone cement via bilateral percutaneous vertebroplasty

    PubMed Central

    Guo, Dan; Cai, Jun; Zhang, Shengfei; Zhang, Liang; Feng, Xinmin

    2017-01-01

    Osteoporotic vertebral compression fractures with intraosseous vacuum phenomena can cause persistent back pain in patients, even after conservative treatment. The aim of this study was to evaluate the efficacy of using high-viscosity bone cement via bilateral percutaneous vertebroplasty in treating patients who have osteoporotic vertebral compression fractures with intraosseous vacuum phenomena. Twenty such patients, who had received at least 2 months of conservative treatment without success, were further treated by injecting high-viscosity bone cement via bilateral percutaneous vertebroplasty. Treatment efficacy was evaluated by determining the anterior vertebral compression rates, visual analog scale (VAS) scores, and Oswestry disability index (ODI) scores at 1 day before the operation, on the first day after the operation, at 1 month after the operation, and at 1 year after the operation. Three of the 20 patients had asymptomatic bone cement leakage during percutaneous vertebroplasty; however, no serious complications related to these treatments were observed during the 1-year follow-up period. Statistically significant improvements in the anterior vertebral compression rates, VAS scores, and ODI scores were achieved after percutaneous vertebroplasty. However, differences in the anterior vertebral compression rate, VAS score, and ODI score between the different time points during the 1-year follow-up period were not statistically significant (P > 0.05). Within the limitations of this study, the injection of high-viscosity bone cement via bilateral percutaneous vertebroplasty for patients who have osteoporotic vertebral compression fractures with intraosseous vacuum phenomena significantly relieved their back pain and improved their daily life activities shortly after the operation, thereby improving their quality of life. In this study, the use of high

  16. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.

  17. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of development.
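
    Bitplane encoding is what makes the bit string "embedded": coefficients are emitted one bit plane at a time, most significant first, so truncating the stream at any point yields a valid coarser reconstruction and hence an exact user-specified rate. A toy sketch with the transform stage omitted:

        import numpy as np

        coeffs = np.array([37, 5, 0, 12, 1, 22], dtype=np.uint8)   # toy magnitudes

        planes = [(coeffs >> b) & 1 for b in range(7, -1, -1)]     # MSB plane first

        # Truncating the embedded stream = keeping only the first few planes:
        kept = 4
        approx = np.zeros_like(coeffs)
        for i, plane in enumerate(planes[:kept]):
            approx |= plane << (7 - i)
        print(approx)   # coarse coefficients; each extra plane halves the error bound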

  18. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform colorspace enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
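
    A minimal sketch of the two-layer structure, with a simple gamma curve standing in for the paper's non-uniform tone mapping operators; all values and function names are illustrative.

        import numpy as np

        def tone_map(hdr):                       # HDR luminance -> 8-bit base layer
            norm = hdr / hdr.max()
            return np.clip(255.0 * norm ** (1.0 / 2.2), 0, 255).astype(np.uint8)

        def inverse_tone_map(base, peak):        # predict HDR from the base layer
            return ((base.astype(float) / 255.0) ** 2.2) * peak

        hdr = np.random.rand(480, 640).astype(np.float32) * 4000.0   # toy HDR frame

        base = tone_map(hdr)                     # decodable by legacy 8-bit equipment
        prediction = inverse_tone_map(base, hdr.max())
        residual = hdr - prediction              # enhancement-layer payload

        reconstructed = inverse_tone_map(base, hdr.max()) + residual
        print(np.allclose(reconstructed, hdr))   # True before lossy coding of layers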

  19. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different-size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
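
    The ratio-versus-run-time trade-off is easy to reproduce with stock codecs. The sketch below uses zlib (LZ77-based) and lzma as modern stand-ins for the algorithms evaluated in the paper; the input file name is hypothetical.

        import lzma
        import time
        import zlib

        data = open("corpus.bin", "rb").read()    # hypothetical test file

        for name, fn in [("zlib", lambda d: zlib.compress(d, 9)),
                         ("lzma", lzma.compress)]:
            t0 = time.perf_counter()
            out = fn(data)
            dt = time.perf_counter() - t0
            print(f"{name}: ratio {len(data) / len(out):.2f}, time {dt:.3f} s")
        # Typically lzma compresses tighter but runs slower: the universal-coding
        # trade-off described above.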

  20. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
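
    The NMSE figure of merit can be computed directly. The normalization below (by the energy of the original image) is one common convention and may differ in detail from the dissertation's exact definition.

        import numpy as np

        def nmse(original, reconstructed):
            """Normalized mean-square error between an image and its reconstruction."""
            original = original.astype(float)
            diff = original - reconstructed.astype(float)
            return np.sum(diff ** 2) / np.sum(original ** 2)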

  1. Nonlinear consolidation in randomly heterogeneous highly compressible aquitards

    NASA Astrophysics Data System (ADS)

    Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.

    2018-05-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), void ratio (e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.
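
    The Monte Carlo input generation can be sketched as follows: realizations of a vertically correlated, lognormally distributed hydraulic conductivity drawn via a Cholesky factor of an assumed exponential covariance. The statistical parameters below are placeholders, not the values compiled for the Mexico City sediments, and the consolidation model itself is not shown.

        import numpy as np

        def k_realizations(n_real=2000, n_z=100, dz=0.1, corr_len=1.0,
                           mean_lnK=-18.0, var_lnK=1.0, seed=0):
            """Columns are independent realizations of a correlated lognormal K(z)."""
            z = np.arange(n_z) * dz
            cov = var_lnK * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_z))   # jitter for stability
            rng = np.random.default_rng(seed)
            ln_k = mean_lnK + L @ rng.normal(size=(n_z, n_real))
            return np.exp(ln_k)

        K = k_realizations()
        print(K.shape)   # (100, 2000): feed each column to the consolidation model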

  2. Rationale and design of a randomised clinical trial comparing vascular closure device and manual compression to achieve haemostasis after diagnostic coronary angiography: the Instrumental Sealing of ARterial puncture site - CLOSURE device versus manual compression (ISAR-CLOSURE) trial.

    PubMed

    Xhepa, Erion; Byrne, Robert A; Schulz, Stefanie; Helde, Sandra; Gewalt, Senta; Cassese, Salvatore; Linhardt, Maryam; Ibrahim, Tareq; Mehilli, Julinda; Hoppe, Katharina; Grupp, Katharina; Kufner, Sebastian; Böttiger, Corinna; Hoppmann, Petra; Burgdorf, Christof; Fusaro, Massimiliano; Ott, Ilka; Schneider, Simon; Hengstenberg, Christian; Schunkert, Heribert; Laugwitz, Karl-Ludwig; Kastrati, Adnan

    2014-06-01

    Vascular closure devices (VCD) have been introduced into clinical practice with the aim of increasing the procedural efficiency and clinical safety of coronary angiography. However, clinical studies comparing VCD and manual compression have yielded mixed results, and large randomised clinical trials comparing the two strategies are missing. Moreover, comparative efficacy studies between different VCD in routine clinical use are lacking. The Instrumental Sealing of ARterial puncture site - CLOSURE device versus manual compression (ISAR-CLOSURE) trial is a prospective, randomised clinical trial designed to compare the outcomes associated with the use of VCD or manual compression to achieve femoral haemostasis. The test hypothesis is that femoral haemostasis after coronary angiography achieved using VCD is not inferior to manual compression in terms of access-site-related vascular complications. Patients undergoing coronary angiography via the common femoral artery will be randomised in a 1:1:1 fashion to receive FemoSeal VCD, EXOSEAL VCD or manual compression. The primary endpoint is the incidence of the composite of arterial access-related complications (haematoma ≥5 cm, pseudoaneurysm, arteriovenous fistula, access-site-related bleeding, acute ipsilateral leg ischaemia, the need for vascular surgical/interventional treatment or documented local infection) at 30 days after randomisation. According to power calculations based on non-inferiority hypothesis testing, enrolment of 4,500 patients is planned. The trial is registered at www.clinicaltrials.gov (study identifier: NCT01389375). The safety of VCD as compared to manual compression in patients undergoing transfemoral coronary angiography remains an issue of clinical equipoise. The aim of the ISAR-CLOSURE trial is to assess whether femoral haemostasis achieved through the use of VCD is non-inferior to manual compression in terms of access-site-related vascular complications.

  3. A zero-error operational video data compression system

    NASA Technical Reports Server (NTRS)

    Kutz, R. L.

    1973-01-01

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.

  4. A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.

    2015-09-01

    Virtually all existing high-energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations--particularly RF curvature, 2) rising-side acceleration exacerbates space-charge-induced distortion of the longitudinal phase space, and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56 > 0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation, 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp, and 3) compressors with M56 > 0 can be configured to avoid spurious over-compression. We discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.
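
    For reference, in linear transport notation (a textbook relation, not quoted from the abstract, and subject to sign-convention differences between communities) the compressed rms bunch length for a beam with linear energy chirp h entering a compressor of momentum compaction M56 is:

    ```latex
    \sigma_{z,f} \simeq \left| 1 + h\,M_{56} \right| \sigma_{z,i},
    \qquad h \equiv \frac{d\delta}{dz}
    ```

    Full compression thus requires h M56 ≈ -1; falling-side acceleration flips the sign of the chirp, which pairs with M56 > 0 rather than with a conventional chicane's M56 < 0.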

  5. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as also to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
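
    A minimal sketch of measuring a Lempel-Ziv-style compression ratio on map-image bytes, using Python's zlib (a DEFLATE implementation, standing in for the exact LZ coder of the report):

    ```python
    import zlib
    import numpy as np

    def lz_ratio(image: np.ndarray) -> float:
        """Raw size divided by DEFLATE-compressed size."""
        raw = image.tobytes()
        return len(raw) / len(zlib.compress(raw, 9))

    # Placeholder 8-bit "classified map": large uniform regions compress well.
    classified = np.repeat(np.arange(4, dtype=np.uint8), 65536).reshape(512, 512)
    print(f"compression ratio {lz_ratio(classified):.1f}:1")
    ```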

  6. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data and whose traditional acquisition ignores the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as reliable reconstruction of a high spectral and spatial resolution image can be achieved from as little as 50% of the datacube.

  7. Dynamic High-temperature Testing of an Iridium Alloy in Compression at High-strain Rates

    DOE PAGES

    Song, B.; Nelson, K.; Lipinski, R.; ...

    2014-08-21

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-strain-rate performance are needed for understanding high-speed impacts in severe environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. In our study, we analyzed the difficulties encountered in high-temperature Kolsky bar testing of thin iridium alloy specimens in compression. We made appropriate modifications using the current high-temperature Kolsky bar technique in order to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300-10,000 s-1) and temperatures (750 and 1030°C). The compressive stress-strain response of the iridium alloy showed significant sensitivity to both strain rate and temperature.

  8. High precision Hugoniot measurements on statically pre-compressed fluid helium

    NASA Astrophysics Data System (ADS)

    Seagle, Christopher T.; Reinhart, William D.; Lopez, Andrew J.; Hickman, Randy J.; Thornhill, Tom F.

    2016-09-01

    The capability for statically pre-compressing fluid targets for Hugoniot measurements utilizing gas gun driven flyer plates has been developed. Pre-compression expands the capability for initial condition control, allowing access to thermodynamic states off the principal Hugoniot. Absolute Hugoniot measurements with an uncertainty of less than 3% on density and pressure were obtained on statically pre-compressed fluid helium utilizing a two-stage light gas gun. Helium is highly compressible; the locus of shock states resulting from dynamic loading of an initially compressed sample at room temperature is significantly denser than the cryogenic fluid Hugoniot even for relatively modest (0.27-0.38 GPa) initial pressures. The dynamic response of pre-compressed helium in the initial density range of 0.21-0.25 g/cm3 at ambient temperature may be described by a linear shock velocity (us) and particle velocity (up) relationship: us = C0 + s·up, with C0 = 1.44 ± 0.14 km/s and s = 1.344 ± 0.025.
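
    Given the fitted us-up relation, the shock pressure follows from the standard Rankine-Hugoniot momentum jump condition P = ρ0·us·up (neglecting the initial pressure relative to the shock state). A minimal sketch using the coefficients quoted above:

    ```python
    C0, S = 1.44, 1.344      # km/s and dimensionless, from the fit quoted above
    rho0 = 0.23              # g/cm^3, within the quoted 0.21-0.25 range (assumed)

    def hugoniot_state(up_kms: float):
        """Shock velocity (km/s) and pressure (GPa) from the linear us-up fit.
        Units: g/cm^3 * (km/s)^2 = GPa."""
        us = C0 + S * up_kms
        return us, rho0 * us * up_kms

    us, P = hugoniot_state(5.0)
    print(f"us = {us:.2f} km/s, P = {P:.1f} GPa")
    ```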

  9. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  10. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.

  11. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    NASA Astrophysics Data System (ADS)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite-volume method suitable for highly compressible, turbulent, scale-resolving simulations around complex geometries is constructed using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to guarantee numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks.

  12. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  13. Calculations of the Performance of a Compression-Ignition Engine-Compressor Turbine Combination I : Performance of a Highly Supercharged Compression-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Sanders, J. C.; Mendelson, Alexander

    1945-01-01

    Small high-speed single-cylinder compression-ignition engines were tested to determine their performance characteristics under high supercharging. Calculations were made on the energy available in the exhaust gas of the compression-ignition engines. The maximum power at any given maximum cylinder pressure was obtained when the compression pressure was equal to the maximum cylinder pressure. Constant-pressure combustion was found possible at an engine speed of 2200 rpm. Exhaust pressures and temperatures were determined from an analysis of indicator cards. The analysis showed that, at rich mixtures with the exhaust back pressure equal to the inlet-air pressure, there is excess energy available for driving a turbine over that required for supercharging. The presence of this excess energy indicates that a highly supercharged compression-ignition engine might be desirable as a compressor and combustion chamber for a turbine.

  14. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies12 wavelets and compressed by thresholding after filtering. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
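
    A minimal PyWavelets sketch of the transform-and-threshold step described above. Note one ambiguity: "Daubechies12" may denote either pywt's 'db12' (12 vanishing moments) or the 12-tap filter 'db6'; 'db6' is assumed here, and the threshold value is a placeholder:

    ```python
    import numpy as np
    import pywt

    def wavelet_threshold(img: np.ndarray, thresh: float, levels: int = 3) -> np.ndarray:
        """Wavelet transform plus hard thresholding of detail coefficients;
        the zeroed coefficients are what a subsequent quantization/coding
        stage would exploit for compression."""
        coeffs = pywt.wavedec2(img.astype(np.float64), 'db6', level=levels)
        kept = [coeffs[0]] + [
            tuple(pywt.threshold(band, thresh, mode='hard') for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(kept, 'db6')
    ```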

  15. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032

  16. High precision Hugoniot measurements on statically pre-compressed fluid helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seagle, Christopher T.; Reinhart, William D.; Lopez, Andrew J.

    Here we describe how the capability for statically pre-compressing fluid targets for Hugoniot measurements utilizing gas gun driven flyer plates has been developed. Pre-compression expands the capability for initial condition control, allowing access to thermodynamic states off the principal Hugoniot. Absolute Hugoniot measurements with an uncertainty of less than 3% on density and pressure were obtained on statically pre-compressed fluid helium utilizing a two-stage light gas gun. Helium is highly compressible; the locus of shock states resulting from dynamic loading of an initially compressed sample at room temperature is significantly denser than the cryogenic fluid Hugoniot even for relatively modest (0.27–0.38 GPa) initial pressures. Lastly, the dynamic response of pre-compressed helium in the initial density range of 0.21–0.25 g/cm3 at ambient temperature may be described by a linear shock velocity (us) and particle velocity (up) relationship: us = C0 + s·up, with C0 = 1.44 ± 0.14 km/s and s = 1.344 ± 0.025.

  17. High precision Hugoniot measurements on statically pre-compressed fluid helium

    DOE PAGES

    Seagle, Christopher T.; Reinhart, William D.; Lopez, Andrew J.; ...

    2016-09-27

    Here we describe how the capability for statically pre-compressing fluid targets for Hugoniot measurements utilizing gas gun driven flyer plates has been developed. Pre-compression expands the capability for initial condition control, allowing access to thermodynamic states off the principal Hugoniot. Absolute Hugoniot measurements with an uncertainty of less than 3% on density and pressure were obtained on statically pre-compressed fluid helium utilizing a two-stage light gas gun. Helium is highly compressible; the locus of shock states resulting from dynamic loading of an initially compressed sample at room temperature is significantly denser than the cryogenic fluid Hugoniot even for relatively modest (0.27–0.38 GPa) initial pressures. Lastly, the dynamic response of pre-compressed helium in the initial density range of 0.21–0.25 g/cm3 at ambient temperature may be described by a linear shock velocity (us) and particle velocity (up) relationship: us = C0 + s·up, with C0 = 1.44 ± 0.14 km/s and s = 1.344 ± 0.025.

  18. Investigation of a High Voltage, High Frequency Power Conditioning System for Use with Flux Compression Generators

    DTIC Science & Technology

    2007-06-01

    The University of Missouri-Columbia is developing a compact pulsed power system to condition the high current signal from a ... flux compression generator (FCG) to the high voltage, high frequency signal required for many pulsed power applications. The system consists of a ... non-magnetic core, spiral-wound transformer, series exploding wire fuse, and an oscillating mesoband source. The flux compression generator is being

  19. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The sub-band LL is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates at equivalent perceived quality and reconstructing the 3D models more accurately.
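
    A minimal sketch of the first two stages described above (one-level 2-D DWT, then a DCT on the LL sub-band), using PyWavelets and SciPy; the paper's wavelet choice is not stated, so 'db2' is assumed, and the Minimize-Matrix-Size step and the second DWT level are omitted:

    ```python
    import numpy as np
    import pywt
    from scipy.fft import dctn

    def first_stages(img: np.ndarray):
        """Level-1 2-D DWT, then a 2-D DCT of the LL sub-band.
        Returns the DCT of LL plus the detail sub-bands (HL, LH, HH)."""
        LL, (HL, LH, HH) = pywt.dwt2(img.astype(np.float64), 'db2')  # wavelet assumed
        return dctn(LL, norm='ortho'), (HL, LH, HH)
    ```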

  20. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

    New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
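
    A minimal sketch of the two lossless-stage ideas reported here: first-difference the signals in time, then measure the file-size reduction (FSR). zlib stands in for the image codec actually used, and the random placeholder array is essentially incompressible; real EMG, which is temporally correlated, is where differencing pays off:

    ```python
    import zlib
    import numpy as np

    def fsr(raw_size: int, compressed_size: int) -> float:
        """File-size reduction (FSR) in percent."""
        return 100.0 * (1.0 - compressed_size / raw_size)

    rng = np.random.default_rng(0)
    emg = rng.integers(-2048, 2048, size=(64, 4096), dtype=np.int16)  # placeholder
    raw = emg.tobytes()
    plain = zlib.compress(raw, 9)
    diffed = zlib.compress(np.diff(emg, axis=1).tobytes(), 9)  # differences in time
    print(f"FSR, signals directly: {fsr(len(raw), len(plain)):.1f}%")
    print(f"FSR, time differences: {fsr(len(raw), len(diffed)):.1f}%")
    ```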

  1. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

    Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. Methods: HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  2. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.

  3. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content, and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and, through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that 4- to 7.5-fold compression can be obtained with the above methods without any perceptible change in the appearance of the video sequences.

  4. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high-performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.

  5. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

    The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.

  6. Development of high-temperature Kolsky compression bar techniques for recrystallization investigation

    NASA Astrophysics Data System (ADS)

    Song, B.; Antoun, B. R.; Boston, M.

    2012-05-01

    We modified the design originally developed by Kuokkala's group to develop an automated high-temperature Kolsky compression bar for characterizing high-rate properties of 304L stainless steel at elevated temperatures. Additional features have been implemented in this high-temperature Kolsky compression bar for recrystallization investigation. The new features ensure a single loading of the specimen and precise time and temperature control for quenching of the specimen after dynamic loading. Dynamic compressive stress-strain curves of 304L stainless steel were obtained at 21, 204, 427, 649, and 871 °C (or 70, 400, 800, 1200, and 1600 °F) at the same constant strain rate of 332 s-1. The specimen subjected to specific time and temperature control for quenching after a single dynamic loading was preserved for investigating microstructure recrystallization.

  7. N-Cadherin Maintains the Healthy Biology of Nucleus Pulposus Cells under High-Magnitude Compression.

    PubMed

    Wang, Zhenyu; Leng, Jiali; Zhao, Yuguang; Yu, Dehai; Xu, Feng; Song, Qingxu; Qu, Zhigang; Zhuang, Xinming; Liu, Yi

    2017-01-01

    Mechanical load can regulate disc nucleus pulposus (NP) biology in terms of cell viability, matrix homeostasis and cell phenotype. N-cadherin (N-CDH) is a molecular marker of NP cells. This study investigated the role of N-CDH in maintaining NP cell phenotype, NP matrix synthesis and NP cell viability under high-magnitude compression. Rat NP cells seeded on scaffolds were perfusion-cultured using a self-developed perfusion bioreactor for 5 days. NP cell biology in terms of cell apoptosis, matrix biosynthesis and cell phenotype was studied after the cells were subjected to different compressive magnitudes (low- and high-magnitudes: 2% and 20% compressive deformation, respectively). Non-loaded NP cells were used as controls. Lentivirus-mediated N-CDH overexpression was used to further investigate the role of N-CDH under high-magnitude compression. The 20% deformation compression condition significantly decreased N-CDH expression compared with the 2% deformation compression and control conditions. Meanwhile, 20% deformation compression increased the number of apoptotic NP cells, up-regulated the expression of Bax and cleaved-caspase-3 and down-regulated the expression of Bcl-2, matrix macromolecules (aggrecan and collagen II) and NP cell markers (glypican-3, CAXII and keratin-19) compared with 2% deformation compression. Additionally, N-CDH overexpression attenuated the effects of 20% deformation compression on NP cell biology in relation to the designated parameters. N-CDH helps to restore the cell viability, matrix biosynthesis and cellular phenotype of NP cells under high-magnitude compression. © 2017 The Author(s). Published by S. Karger AG, Basel.

  8. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because of the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression methods on the volume data were conducted. Favorable results were obtained, showing that a substantial compression ratio is achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g., CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
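
    The 30 dB bound quoted here refers to the standard peak signal-to-noise ratio. A minimal sketch, assuming 8-bit data:

    ```python
    import numpy as np

    def psnr(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit data."""
        mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```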

  9. Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.

    PubMed

    Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen

    2015-10-21

    Based on polyaniline/single-walled-carbon-nanotube sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensing with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and processing of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often relegating evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.
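
    A minimal sketch of the CS idea described above: randomly subsample a signal that is sparse in the DCT basis, then recover it with orthogonal matching pursuit (scikit-learn's implementation; the Narada node's actual sampling and recovery pipeline is not reproduced here, and the signal is a synthetic 2-sparse placeholder):

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    n, m = 512, 128                              # signal length, random samples
    rng = np.random.default_rng(1)
    Psi = idct(np.eye(n), axis=0, norm='ortho')  # x = Psi @ s, with s sparse

    s = np.zeros(n)
    s[[13, 40]] = [1.0, 0.5]                     # illustrative sparse spectrum
    x = Psi @ s

    rows = rng.choice(n, size=m, replace=False)  # random sub-Nyquist time samples
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
    omp.fit(Psi[rows, :], x[rows])
    x_hat = Psi @ omp.coef_
    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"relative reconstruction error: {err:.2e}")
    ```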

  11. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean-squared error of 0.0147 were achieved.

  12. Active high-power RF switch and pulse compression system

    DOEpatents

    Tantawi, Sami G.; Ruth, Ronald D.; Zolotorev, Max

    1998-01-01

    A high-power RF switching device employs a semiconductor wafer positioned in the third port of a three-port RF device. A controllable source of directed energy, such as a suitable laser or electron beam, is aimed at the semiconductor material. When the source is turned on, the energy incident on the wafer induces an electron-hole plasma layer on the wafer, changing the wafer's dielectric constant, turning the third port into a termination for incident RF signals, and causing all incident RF signals to be reflected from the surface of the wafer. The propagation constant of RF signals through port 3, therefore, can be changed by controlling the beam. By making the RF coupling to the third port as small as necessary, one can reduce the peak electric field on the unexcited silicon surface for any level of input power from port 1, thereby reducing the risk of damaging the wafer by RF with high peak power. The switch is useful in the construction of an improved pulse compression system to boost the peak power of microwave tubes driving linear accelerators. In this application, the high-power RF switch is placed at the coupling iris between the charging waveguide and the resonant storage line of a pulse compression system. This optically controlled high-power RF pulse compression system can handle hundreds of megawatts of power at X-band.

  13. Wave phenomena in a high Reynolds number compressible boundary layer

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Maestrello, L.; Parikh, P.; Turkel, E.

    1985-01-01

    Growth of unstable disturbances in a high Reynolds number compressible boundary layer is numerically simulated. Localized periodic surface heating and cooling as a means of active control of these disturbances is studied. It is shown that compressibility in itself stabilizes the flow, but at a lower Mach number significant nonlinear distortions are produced. Phase cancellation is shown to be an effective mechanism for active boundary-layer control.

  14. Surface-initiated phase transition in solid hydrogen under the high-pressure compression

    NASA Astrophysics Data System (ADS)

    Lei, Haile; Lin, Wei; Wang, Kai; Li, Xibo

    2018-03-01

    Large-scale molecular dynamics simulations have been performed to understand the microscopic mechanism governing the phase transition of solid hydrogen under high-pressure compression. These results demonstrate that the face-centered-cubic-to-hexagonal-close-packed phase transition is initiated first at the surfaces, at a much lower pressure than in the volume, and then extends gradually from the surface to the volume of the solid hydrogen. The infrared spectra from the surface are revealed to exhibit a different pressure-dependent behavior from those of the volume during high-pressure compression. It is thus deduced that the weakening intramolecular H-H bonds are always accompanied by hardening surface phonons, through strengthening of the intermolecular H2-H2 coupling at the surfaces, with respect to the counterparts in the volume at high pressures. This is just the opposite of conventional atomic crystals, in which the surface phonons soften. High-pressure compression is further predicted to force atoms or molecules to spray out of the surface to relieve the pressure. These results provide a glimpse of the structural properties of solid hydrogen at the early stage of high-pressure compression.

  15. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

    Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
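
    A minimal sketch of Rice coding (the power-of-two special case of Golomb coding used by FL-style compressors); the adaptive parameter selection and the predictive filter of the actual algorithm are omitted, and the residual values are illustrative:

    ```python
    def zigzag(e: int) -> int:
        """Map signed prediction residuals to non-negative ints:
        0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
        return 2 * e if e >= 0 else -2 * e - 1

    def rice_encode(values, k: int) -> str:
        """Rice code (Golomb with m = 2**k): unary quotient, '0' stop bit,
        then k remainder bits per value."""
        out = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            out.append('1' * q + '0' + format(r, f'0{k}b'))
        return ''.join(out)

    residuals = [0, -1, 2, 3, -4]   # illustrative predictor output
    print(rice_encode([zigzag(e) for e in residuals], k=2))
    ```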

  16. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
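
    A minimal sketch of the exact-reversibility property that makes IWT suitable for lossless coding, using the integer Haar (S) transform implemented by lifting; the wavelet actually used in the paper is not specified here:

    ```python
    import numpy as np

    def ihaar_fwd(x):
        """One level of the integer Haar (S) transform; exactly invertible."""
        a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        d = a - b            # detail coefficients
        s = b + d // 2       # approximation = floor of the pair average
        return s, d

    def ihaar_inv(s, d):
        b = s - d // 2
        a = d + b
        out = np.empty(2 * len(s), dtype=np.int64)
        out[0::2], out[1::2] = a, b
        return out

    x = np.array([5, 3, 8, 8, 1, 200, 7, 0])
    s, d = ihaar_fwd(x)
    assert np.array_equal(ihaar_inv(s, d), x)   # perfect reconstruction
    ```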

  17. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars, such as no need for high-power RF circuitry, no need for high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression
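
    A minimal sketch of classical (non-adaptive) pulse compression: correlate the received signal with the transmitted linear FM chirp. The adaptive sidelobe-suppression stage developed in this work is not reproduced, and all waveform parameters below are illustrative:

    ```python
    import numpy as np

    fs, T, B = 20e6, 10e-6, 5e6                  # sample rate, pulse width, bandwidth
    t = np.arange(int(T * fs)) / fs
    chirp = np.exp(1j * np.pi * (B / T) * t**2)  # linear FM (chirped) pulse

    rng = np.random.default_rng(0)
    rx = 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
    rx[1000:1000 + chirp.size] += 0.8 * chirp    # weak echo buried in noise

    # Matched filter: np.correlate conjugates its second argument.
    compressed = np.correlate(rx, chirp, mode='same')
    print("echo detected near sample", int(np.argmax(np.abs(compressed))))
    ```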

  18. Squeezing the muscle: compression clothing and muscle metabolism during recovery from high intensity exercise.

    PubMed

    Sperlich, Billy; Born, Dennis-Peter; Kaskinoro, Kimmo; Kalliokoski, Kari K; Laaksonen, Marko S

    2013-01-01

    The purpose of this experiment was to investigate skeletal muscle blood flow and glucose uptake in m. biceps femoris (BF) and m. quadriceps femoris (QF) 1) during recovery from high intensity cycle exercise, and 2) while wearing a compression short applying ~37 mmHg to the thigh muscles. Blood flow and glucose uptake were measured in the compressed and non-compressed leg of 6 healthy men by using positron emission tomography. At baseline, blood flow in QF (P = 0.79) and BF (P = 0.90) did not differ between the compressed and the non-compressed leg. During recovery, muscle blood flow was higher compared to baseline in both compressed (P<0.01) and non-compressed QF (P<0.001) but not in compressed (P = 0.41) and non-compressed BF (P = 0.05; effect size = 2.74). During recovery, blood flow was lower in compressed QF (P<0.01) but not in BF (P = 0.26) compared to the non-compressed muscles. During baseline and recovery, no differences in blood flow were detected between the superficial and deep parts of QF in both the compressed (baseline P = 0.79; recovery P = 0.68) and non-compressed leg (baseline P = 0.64; recovery P = 0.06). During recovery, glucose uptake was higher in QF compared to BF in both conditions (P<0.01), with no difference between the compressed and non-compressed thigh. Glucose uptake was higher in the deep compared to the superficial parts of QF (compression leg P = 0.02). These results demonstrate that wearing compression shorts with ~37 mmHg of external pressure reduces blood flow in both the deep and superficial regions of muscle tissue during recovery from high intensity exercise but does not affect glucose uptake in BF and QF.

  19. Compressed air production with waste heat utilization in industry

    NASA Astrophysics Data System (ADS)

    Nolting, E.

    1984-06-01

    The centralized power-heat coupling (PHC) technique using block heating power stations is presented. Compressed air production with the PHC technique and internal combustion engine drive achieves a high degree of primary energy utilization. Cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially attractive. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (~80%) leads to a payoff after two years for operation times of 3000 hr.

  20. High speed and high resolution interrogation of a fiber Bragg grating sensor based on microwave photonic filtering and chirped microwave pulse compression.

    PubMed

    Xu, Ou; Zhang, Jiejun; Yao, Jianping

    2016-11-01

    High speed and high resolution interrogation of a fiber Bragg grating (FBG) sensor based on microwave photonic filtering and chirped microwave pulse compression is proposed and experimentally demonstrated. In the proposed sensor, a broadband linearly chirped microwave waveform (LCMW) is applied to a single-passband microwave photonic filter (MPF) which is implemented based on phase modulation and phase modulation to intensity modulation conversion using a phase modulator (PM) and a phase-shifted FBG (PS-FBG). Since the center frequency of the MPF is a function of the central wavelength of the PS-FBG, when the PS-FBG experiences a strain or temperature change, the wavelength is shifted, which leads to the change in the center frequency of the MPF. At the output of the MPF, a filtered chirped waveform with the center frequency corresponding to the applied strain or temperature is obtained. By compressing the filtered LCMW in a digital signal processor, the resolution is improved. The proposed interrogation technique is experimentally demonstrated. The experimental results show that interrogation sensitivity and resolution as high as 1.25 ns/με and 0.8 με are achieved.

  1. Electrophysiological examination and high frequency ultrasonography for diagnosis of radial nerve torsion and compression

    PubMed Central

    Shi, Miao; Qi, Hengtao; Ding, Hongyu; Chen, Feng; Xin, Zhaoqin; Zhao, Qinghua; Guan, Shibing; Shi, Hao

    2018-01-01

    This study aims to evaluate the value of electrophysiological examination and high-frequency ultrasonography in the differential diagnosis of radial nerve torsion and radial nerve compression. Patients with radial nerve torsion (n = 14) and radial nerve compression (n = 14) were enrolled. The results of neurophysiological examination and high-frequency ultrasonography were compared. Electrophysiological examination and high-frequency ultrasonography had a high diagnostic rate for both diseases, with consistent results. Of the 28 patients, 23 were positive on electrophysiological examination, showing decreased amplitude and decreased conduction velocity of the radial nerve; however, electrophysiological examination cannot distinguish torsion from compression. Ultrasound examination was positive in 27 of the 28 cases. On ultrasound images, the nerve was thinned at the torsion site and thickened at the distal ends of the torsion. The diameter and cross-sectional area at the torsion or compression site indicated the degree of nerve damage, and ultrasound could locate the nerve injury site and measure the length of the affected nerve. Electrophysiological examination and high-frequency ultrasonography can both diagnose radial neuropathy, with electrophysiological examination reflecting neurological function and high-frequency ultrasound differentiating nerve torsion from compression. PMID:29480857

  2. Pulse compression using a tapered microstructure optical fiber.

    PubMed

    Hu, Jonathan; Marks, Brian S; Menyuk, Curtis R; Kim, Jinchae; Carruthers, Thomas F; Wright, Barbara M; Taunay, Thierry F; Friebele, E J

    2006-05-01

    We calculate the pulse compression in a tapered microstructure optical fiber with four layers of holes. We show that the primary limitation on pulse compression is the loss due to mode leakage. As a fiber's diameter decreases due to the tapering, so does the air-hole diameter, and at a sufficiently small diameter the guided mode loss becomes unacceptably high. For the four-layer geometry we considered, a compression factor of 10 can be achieved by a pulse with an initial FWHM duration of 3 ps in a tapered fiber that is 28 m long. We find that there is little difference in the pulse compression between a linear taper profile and a Gaussian taper profile. More layers of air holes allow the pitch to decrease considerably before losses become unacceptable, but only a moderate increase in the degree of pulse compression is obtained.

  3. Dynamic compressive behavior of Pr-Nd alloy at high strain rates and temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Huanran; Cai Canyuan; Chen Danian

    2012-07-01

    Based on compressive tests of Pr-Nd alloy cylinder specimens at high strain rates and temperatures, static tests on an 810 material test system and dynamic tests under the first compressive loading in split Hopkinson pressure bar (SHPB) experiments, this study determined a J-C type [G. R. Johnson and W. H. Cook, in Proceedings of Seventh International Symposium on Ballistics (The Hague, The Netherlands, 1983), pp. 541-547] compressive constitutive equation of Pr-Nd alloy. High-speed camera recordings showed that the Pr-Nd alloy cylinder specimens fractured during the first compressive loading in SHPB tests at high strain rates and temperatures. From the high-speed camera images, the critical strains of dynamic shearing instability for Pr-Nd alloy in SHPB tests were determined; these were consistent with values estimated using Batra and Wei's dynamic shearing instability criterion [R. C. Batra and Z. G. Wei, Int. J. Impact Eng. 34, 448 (2007)] together with the determined compressive constitutive equation of Pr-Nd alloy. The transmitted and reflected pulses of the SHPB tests computed with the determined constitutive equation and Batra and Wei's dynamic shearing instability criterion were consistent with the experimental data. The fractured Pr-Nd alloy cylinder specimens were examined using a 3D super-depth digital microscope and a scanning electron microscope.

  4. Time-resolved compression of a capsule with a cone to high density for fast-ignition laser fusion.

    PubMed

    Theobald, W; Solodov, A A; Stoeckl, C; Anderson, K S; Beg, F N; Epstein, R; Fiksel, G; Giraldez, E M; Glebov, V Yu; Habara, H; Ivancic, S; Jarrott, L C; Marshall, F J; McKiernan, G; McLean, H S; Mileham, C; Nilson, P M; Patel, P K; Pérez, F; Sangster, T C; Santos, J J; Sawada, H; Shvydky, A; Stephens, R B; Wei, M S

    2014-12-12

    The advent of high-intensity lasers enables us to recreate and study the behaviour of matter under the extreme densities and pressures that exist in many astrophysical objects. It may also enable us to develop a power source based on laser-driven nuclear fusion. Achieving such conditions usually requires a target that is highly uniform and spherically symmetric. Here we show that it is possible to generate high densities in a so-called fast-ignition target that consists of a thin shell whose spherical symmetry is interrupted by the inclusion of a metal cone. Using picosecond-time-resolved X-ray radiography, we show that we can achieve areal densities in excess of 300 mg cm⁻² with a nanosecond-duration compression pulse, the highest areal density ever reported for a cone-in-shell target. Such densities are high enough to stop MeV electrons, which is necessary for igniting the fuel with a subsequent picosecond pulse focused into the resulting plasma.

  5. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
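
    As a toy illustration of why inter-color decorrelation helps (a sketch only, not the IEP or CALIC algorithms themselves): when two channels share structure, the residual of one channel minus the other has lower zeroth-order entropy than the channel alone, so a lossless coder spends fewer bits on it.

    ```python
    import numpy as np

    def entropy(x):
        """Zeroth-order entropy in bits/pixel of an integer array."""
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    # Synthetic image: two channels sharing the same underlying structure.
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, (256, 256))
    r = np.clip(base + rng.integers(-8, 9, base.shape), 0, 255)
    g = np.clip(base + rng.integers(-8, 9, base.shape), 0, 255)

    print("H(G) alone       :", round(entropy(g), 2), "bpp")
    print("H(G - R) residual:", round(entropy(g - r), 2), "bpp")  # much lower
    ```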

  6. Optical properties of highly compressed polystyrene: An ab initio study

    NASA Astrophysics Data System (ADS)

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; Goncharov, V. N.; Kilcrease, D. P.

    2017-10-01

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). We found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as "single mixture in a box" (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum molecular dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1-100 g/cm³ and T = 2000-1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos atomic model for moderately compressed CH (ρ_CH ≤ 10 g/cm³), but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  7. Optical properties of highly compressed polystyrene: An ab initio study

    DOE PAGES

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; ...

    2017-10-16

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). Here, we found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as “single-mixture-in-a-box” (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum-molecular-dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1 to 100 g/cm³ and T = 2000 to 1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity–patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos ATOMIC model for moderately compressed CH (ρ_CH ≤ 10 g/cm³) but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  8. Optical properties of highly compressed polystyrene: An ab initio study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, S. X.; Collins, L. A.; Colgan, J. P.

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). Here, we found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as “single-mixture-in-a-box” (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum-molecular-dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1 to 100 g/cm³ and T = 2000 to 1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity–patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos ATOMIC model for moderately compressed CH (ρ_CH ≤ 10 g/cm³) but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  9. Alaska SAR Facility (ASF5) SAR Communications (SARCOM) Data Compression System

    NASA Technical Reports Server (NTRS)

    Mango, Stephen A.

    1989-01-01

    Described are the real-time operational requirements for SARCOM, their translation into a high-speed image data handler and processor that achieves the desired compression ratios, and the selection of a suitable image data compression technique with the lowest possible fidelity (information) losses that can be implemented in an algorithm placing a relatively low arithmetic load on the system.

  10. Fast implementation for compressive recovery of highly accelerated cardiac cine MRI using the balanced sparse model.

    PubMed

    Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P

    2017-04-01

    Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
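
    The ℓ1-regularized least-squares problem mentioned above is commonly solved with iterative soft-thresholding. Below is a minimal sketch (plain ISTA on a synthetic problem with a dense matrix A, an assumption for illustration; the paper's balanced-sparse-model operators and parallelized solver are not reproduced here).

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t*||x||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iter=300):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)   # random sensing matrix
    support = np.sort(rng.choice(200, 8, replace=False))
    x_true = np.zeros(200)
    x_true[support] = 1.0
    x_hat = ista(A, A @ x_true, lam=0.05)
    print("true support:     ", support)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.2)[0])
    ```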

  11. Highly Compressible Carbon Sponge Supercapacitor Electrode with Enhanced Performance by Growing Nickel-Cobalt Sulfide Nanosheets.

    PubMed

    Liang, Xu; Nie, Kaiwen; Ding, Xian; Dang, Liqin; Sun, Jie; Shi, Feng; Xu, Hua; Jiang, Ruibin; He, Xuexia; Liu, Zonghuai; Lei, Zhibin

    2018-03-28

    The development of compressible supercapacitors relies heavily on the innovative design of electrode materials with both superior compression properties and high capacitive performance. This work reports a highly compressible supercapacitor electrode prepared by growing electroactive NiCo₂S₄ (NCS) nanosheets on a compressible carbon sponge (CS). The strong adhesion of the metallic, conductive NCS nanosheets to the highly porous carbon scaffold enables the CS-NCS composite electrode to exhibit enhanced conductivity and ideal structural integrity during repeated compression-release cycles. Accordingly, the CS-NCS composite electrode delivers a specific capacitance of 1093 F g⁻¹ at 0.5 A g⁻¹ and remarkable rate performance, with 91% capacitance retention over the range of 0.5-20 A g⁻¹. Capacitance measurements under a strain of 60% show that the incorporation of NCS nanosheets in the CS scaffold leads to more than a fivefold enhancement in gravimetric capacitance and a 17-fold enhancement in volumetric capacitance. These performances make the CS-NCS composite one of the promising candidates for potential applications in compressible electrochemical energy storage devices.

  12. High throughput dual-wavelength temperature distribution imaging via compressive imaging

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie

    2018-03-01

    Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which limits the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this architecture.

  13. C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.

    PubMed

    Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram

    2018-01-01

    This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (correlation coefficient, ), while achieving compression ratio, CR, values as high as ~ 5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity ( ), leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.

  14. High Compressive Stresses Near the Surface of the Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Martel, S. J.; Logan, J. M.; Stock, G. M.

    2012-12-01

    Observations and stress measurements in granitic rocks of the Sierra Nevada, California reveal strong compressive stresses parallel to the surface of the range at shallow depths. New overcoring measurements show high compressive stresses at three locations along an east-west transect through Yosemite National Park. At the westernmost site (west end of Tenaya Lake), the mean compressive stress is 1.9 MPa. At the middle site (north shore of Tenaya Lake) the mean compressive stress is 6.8 MPa. At the easternmost site (south side of Lembert Dome) the mean compressive stress is 3.0 MPa. The trend of the most compressive stress at these sites is within ~30° of the strike of the local topographic surface. Previously published hydraulic fracturing measurements by others elsewhere in the Sierra Nevada indicate surface-parallel compressive stresses of several MPa within several tens of meters of the surface, with the stress magnitudes generally diminishing to the west. Both the new and the previously published compressive stress magnitudes are consistent with the presence of sheeting joints (i.e., "exfoliation joints") in the Sierra Nevada, which require lateral compressive stresses of several MPa to form. These fractures are widespread: they are distributed in granitic rocks from the north end of the range to its southern tip and across the width of the range. Uplift along the normal faults of the eastern escarpment, recently measured by others at ~1-2 mm/yr, probably contributes substantially to these stresses. Geodetic surveys reveal that normal faulting flexes a range concave upwards in response to fault slip, and this flexure is predicted by elastic dislocation models. The topographic relief of the eastern escarpment of the Sierra Nevada is 2-4 km, and since alluvial fill generally buries the bedrock east of the faults, the offset of granitic rocks is at least that much. Compressive stresses of several MPa are predicted by elastic dislocation models of the range front

  15. A perspective on the range of gasoline compression ignition combustion strategies for high engine efficiency and low NOx and soot emissions: Effects of in-cylinder fuel stratification

    DOE PAGES

    Dempsey, Adam B.; Curran, Scott J.; Wagner, Robert M.

    2016-01-14

    Many research studies have shown that low temperature combustion in compression ignition engines has the ability to yield ultra-low NOx and soot emissions while maintaining high thermal efficiency. To achieve low temperature combustion, sufficient mixing time between the fuel and air in a globally dilute environment is required, thereby avoiding fuel-rich regions and reducing peak combustion temperatures, which significantly reduces soot and NOx formation, respectively. It has been demonstrated that achieving low temperature combustion with diesel fuel over a wide range of conditions is difficult because of its properties, namely, low volatility and high chemical reactivity. On the contrary, gasoline has a high volatility and low chemical reactivity, meaning it is easier to achieve the amount of premixing time required prior to autoignition to achieve low temperature combustion. In order to achieve low temperature combustion while meeting other constraints, such as low pressure rise rates and maintaining control over the timing of combustion, in-cylinder fuel stratification has been widely investigated for gasoline low temperature combustion engines. The level of fuel stratification is, in reality, a continuum ranging from fully premixed (i.e. homogeneous charge of fuel and air) to heavily stratified, heterogeneous operation, such as diesel combustion. However, to illustrate the impact of fuel stratification on gasoline compression ignition, the authors have identified three representative operating strategies: partial, moderate, and heavy fuel stratification. Thus, this article provides an overview and perspective of the current research efforts to develop engine operating strategies for achieving gasoline low temperature combustion in a compression ignition engine via fuel stratification. In this paper, computational fluid dynamics modeling of the in-cylinder processes during the closed valve portion of the cycle was used to illustrate the opportunities

  16. A perspective on the range of gasoline compression ignition combustion strategies for high engine efficiency and low NOx and soot emissions: Effects of in-cylinder fuel stratification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dempsey, Adam B.; Curran, Scott J.; Wagner, Robert M.

    Many research studies have shown that low temperature combustion in compression ignition engines has the ability to yield ultra-low NOx and soot emissions while maintaining high thermal efficiency. To achieve low temperature combustion, sufficient mixing time between the fuel and air in a globally dilute environment is required, thereby avoiding fuel-rich regions and reducing peak combustion temperatures, which significantly reduces soot and NOx formation, respectively. It has been demonstrated that achieving low temperature combustion with diesel fuel over a wide range of conditions is difficult because of its properties, namely, low volatility and high chemical reactivity. On the contrary, gasoline has a high volatility and low chemical reactivity, meaning it is easier to achieve the amount of premixing time required prior to autoignition to achieve low temperature combustion. In order to achieve low temperature combustion while meeting other constraints, such as low pressure rise rates and maintaining control over the timing of combustion, in-cylinder fuel stratification has been widely investigated for gasoline low temperature combustion engines. The level of fuel stratification is, in reality, a continuum ranging from fully premixed (i.e. homogeneous charge of fuel and air) to heavily stratified, heterogeneous operation, such as diesel combustion. However, to illustrate the impact of fuel stratification on gasoline compression ignition, the authors have identified three representative operating strategies: partial, moderate, and heavy fuel stratification. Thus, this article provides an overview and perspective of the current research efforts to develop engine operating strategies for achieving gasoline low temperature combustion in a compression ignition engine via fuel stratification. In this paper, computational fluid dynamics modeling of the in-cylinder processes during the closed valve portion of the cycle was used to illustrate the opportunities

  17. A compression scheme for radio data in high performance computing

    NASA Astrophysics Data System (ADS)

    Masui, K.; Amiri, M.; Connor, L.; Deng, M.; Fandino, M.; Höfer, C.; Halpern, M.; Hanna, D.; Hincks, A. D.; Hinshaw, G.; Parra, J. M.; Newburgh, L. B.; Shaw, J. R.; Vanderlinde, K.

    2015-09-01

    We present a procedure for efficiently compressing astronomical radio data for high performance applications. Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation. This allows the precision of the data to be reduced in a way that has an insignificant impact on the data. The newly developed Bitshuffle lossless compression algorithm is subsequently applied. When the algorithm is used in conjunction with the HDF5 library and data format, data produced by the CHIME Pathfinder telescope is compressed to 28% of its original size and decompression throughputs in excess of 1 GB/s are obtained on a single core.
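
    A minimal sketch of this pairing using the third-party h5py and hdf5plugin Python packages (an assumption for illustration; the paper's pipeline and its precision-rounding step are not shown). Bitshuffle rearranges the data bit plane by bit plane within each block, so values whose high bits are mostly zero, as with reduced-precision integers, compress very well with the bundled LZ4 stage.

    ```python
    import os
    import numpy as np
    import h5py
    import hdf5plugin  # registers the Bitshuffle HDF5 filter on import

    # 12-bit values stored in 32-bit words: the upper bit planes are all zero.
    rng = np.random.default_rng(2)
    data = rng.integers(0, 1 << 12, size=(1024, 1024)).astype(np.uint32)

    with h5py.File("vis.h5", "w") as f:
        f.create_dataset("visibilities", data=data, chunks=(128, 1024),
                         **hdf5plugin.Bitshuffle())  # bitshuffle + LZ4 by default

    print(f"ratio: {data.nbytes / os.path.getsize('vis.h5'):.2f}x")
    ```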

  18. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data. We present a novel technique to boost the compression of sequencing reads that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/∼ckingsf/software/mince. carlk@cs.cmu.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
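
    A toy sketch of the bucketing idea (illustration only, with a hypothetical bucket key; Mince's actual encoding and bucket selection are more sophisticated): grouping reads that share a representative k-mer places similar strings next to each other, which helps any downstream general-purpose compressor.

    ```python
    import gzip
    from collections import defaultdict

    def bucket_key(read, k=8):
        """Representative k-mer: the lexicographically smallest substring."""
        return min(read[i:i + k] for i in range(len(read) - k + 1))

    reads = ["ACGTACGTGGTT", "ACGTACGTGGAA", "TTGGCCAACGTA",
             "TTGGCCAACGTT", "ACGTACGTGGTC", "TTGGCCAACGGA"]

    buckets = defaultdict(list)
    for r in reads:
        buckets[bucket_key(r)].append(r)

    # Same multiset of reads, reordered so similar reads are adjacent.
    reordered = "".join(r for key in sorted(buckets) for r in sorted(buckets[key]))
    original = "".join(reads)
    print(len(gzip.compress(original.encode())),
          len(gzip.compress(reordered.encode())))  # gap grows with realistic data
    ```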

  19. High-resolution three-dimensional imaging with compress sensing

    NASA Astrophysics Data System (ADS)

    Wang, Jingyi; Ke, Jun

    2016-10-01

    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition burden and relax the requirements on the detection device. To use the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source. Then a photodetector is used to receive the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM half a pixel each time, so that the position illuminated by the modulated light changes accordingly. We repeat moving the SLM in four different directions, obtaining four low-resolution depth maps with different details of the same planar scene. If we use all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square-error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the burden of data analysis and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.

  20. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
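
    The two figures of merit quoted above are standard and easy to state in code; the sketch below computes them for a synthetic signal (the AFD decomposition itself is not reproduced, and the bit counts are illustrative assumptions).

    ```python
    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """CR: how many times smaller the compressed representation is."""
        return original_bits / compressed_bits

    def prd(x, x_rec):
        """Percentage root-mean-square difference, in percent."""
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    t = np.linspace(0.0, 1.0, 1000)
    x = np.sin(2 * np.pi * 5 * t)                  # stand-in for an ECG trace
    x_rec = x + 0.01 * np.random.default_rng(3).standard_normal(t.size)
    print(f"PRD = {prd(x, x_rec):.2f}%")
    print("CR  =", compression_ratio(11 * x.size, 310))  # hypothetical sizes
    ```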

  1. Viscosity and compressibility of diacylglycerol under high pressure

    NASA Astrophysics Data System (ADS)

    Malanowski, Aleksander; Rostocki, A. J.; Kiełczyński, P.; Szalewski, M.; Balcerzak, A.; Kościesza, R.; Tarakowski, R.; Ptasznik, S.; Siegoczyński, R. M.

    2013-03-01

    The influence of high pressure on the viscosity and compressibility of diacylglycerol (DAG) oil is presented in this paper. The investigated DAG oil was composed of 82% DAGs and 18% TAGs (triacylglycerols). The dynamic viscosity of DAG was investigated as a function of pressure up to 400 MPa. The viscosity was measured by means of the surface acoustic wave method, with acoustic waveguides used as the sensing elements. As the pressure rose, larger ultrasonic wave attenuation was observed, the amplitude decreasing as the liquid's viscosity increased. The measured changes in physical properties were most significant in the pressure range near the phase transition. A deeper understanding of DAG viscosity and compressibility changes versus pressure could shed more light on the thermodynamic properties of edible oils.

  2. Widefield compressive multiphoton microscopy.

    PubMed

    Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A

    2018-06-15

    A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.

  3. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
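
    As an illustration of the kind of rate/throughput measurement such a study involves, here is a minimal harness using the python-blosc bindings (codec names, levels, and the synthetic particle data are assumptions; the study's actual benchmark suite is far more extensive).

    ```python
    import time
    import numpy as np
    import blosc

    # Synthetic stand-in for one block of particle coordinates.
    particles = np.random.default_rng(4).standard_normal(2_000_000).astype(np.float32)
    raw = particles.tobytes()

    for cname in ("lz4", "zstd"):
        t0 = time.perf_counter()
        packed = blosc.compress(raw, typesize=4, clevel=5,
                                shuffle=blosc.SHUFFLE, cname=cname)
        dt = time.perf_counter() - t0
        print(f"{cname}: ratio {len(raw) / len(packed):.2f}x, "
              f"{len(raw) / dt / 1e9:.2f} GB/s")
    ```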

  4. Anomalous anisotropic compression behavior of superconducting CrAs under high pressure

    PubMed Central

    Yu, Zhenhai; Wu, Wei; Hu, Qingyang; Zhao, Jinggeng; Li, Chunyu; Yang, Ke; Cheng, Jinguang; Luo, Jianlin; Wang, Lin; Mao, Ho-kwang

    2015-01-01

    CrAs was observed to possess the bulk superconductivity under high-pressure conditions. To understand the superconducting mechanism and explore the correlation between the structure and superconductivity, the high-pressure structural evolution of CrAs was investigated using the angle-dispersive X-ray diffraction (XRD) method. The structure of CrAs remains stable up to 1.8 GPa, whereas the lattice parameters exhibit anomalous compression behaviors. With increasing pressure, the lattice parameters a and c both demonstrate a nonmonotonic change, and the lattice parameter b undergoes a rapid contraction at ∼0.18−0.35 GPa, which suggests that a pressure-induced isostructural phase transition occurs in CrAs. Above the phase transition pressure, the axial compressibilities of CrAs present remarkable anisotropy. A schematic band model was used to address the anomalous compression behavior of CrAs. The present results shed light on the structural and related electronic responses to high pressure, which play a key role toward understanding the superconductivity of CrAs. PMID:26627230

  5. High Achievers: 23rd Annual Survey. Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    This report presents data from an annual survey of high school student leaders and high achievers. It is noted that of the nearly 700,000 high achievers featured in this edition, 5,000 students were sent the survey and 2,092 questionnaires were completed. Subjects were high school juniors and seniors selected for recognition by their principals or…

  6. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Institution... Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression

  7. Experimental Compressibility of Molten Hedenbergite at High Pressure

    NASA Astrophysics Data System (ADS)

    Agee, C. B.; Barnett, R. G.; Guo, X.; Lange, R. A.; Waller, C.; Asimow, P. D.

    2010-12-01

    Experiments using the sink/float method have bracketed the density of molten hedenbergite (CaFeSi2O6) at high pressures and temperatures. The experiments are the first of their kind to determine the compressibility of molten hedenbergite at high pressure and are part of a collaborative effort to establish a new database for an array of silicate melt compositions, which will contribute to the development of an empirically based predictive model that will allow calculation of silicate liquid density and compressibility over a wide range of P-T-X conditions where melting could occur in the Earth. Each melt composition will be measured using: (i) double-bob Archimedean method for melt density and thermal expansion at ambient pressure, (ii) sound speed measurements on liquids to constrain melt compressibility at ambient pressure, (iii) sink/float technique to measure melt density to 15 GPa, and (iv) shock wave measurements of P-V-E equation of state and temperature between 10 and 150 GPa. Companion abstracts on molten fayalite (Waller et al., 2010) and liquid mixes of hedenbergite-diopside and anorthite-hedenbergite-diopside (Guo and Lange, 2010) are also presented at this meeting. In the present study, the hedenbergite starting material was synthesized at the Experimental Petrology Lab, University of Michigan, where melt density, thermal expansion, and sound speed measurements were also carried out. The starting material has also been loaded into targets at the Caltech Shockwave Lab, and experiments there are currently underway. We report here preliminary results from static compression measurement performed at the Department of Petrology, Vrije Universiteit, Amsterdam, and the High Pressure Lab, Institute of Meteoritics, University of New Mexico. Experiments were carried out in Quick Press piston-cylinder devices and a Walker-style multi-anvil device. Sink/float marker spheres implemented were gem quality synthetic forsterite (Fo100), San Carlos olivine (Fo90), and

  8. FEM Modeling of the Relationship between the High-Temperature Hardness and High-Temperature, Quasi-Static Compression Experiment.

    PubMed

    Zhang, Tao; Jiang, Feng; Yan, Lan; Xu, Xipeng

    2017-12-26

    The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness test and the high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the two sets of results was built through the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation. The simulated results show that the high-temperature hardness is essentially independent of the applied load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness is consistent with thermal softening. The strain and stress of indentation were analyzed from the simulated contours. It was found that the strain increases with increasing test temperature, while the stress decreases with increasing test temperature.

  9. FEM Modeling of the Relationship between the High-Temperature Hardness and High-Temperature, Quasi-Static Compression Experiment

    PubMed Central

    Zhang, Tao; Jiang, Feng; Yan, Lan; Xu, Xipeng

    2017-01-01

    The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness test and the high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the two sets of results was built through the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation. The simulated results show that the high-temperature hardness is essentially independent of the applied load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness is consistent with thermal softening. The strain and stress of indentation were analyzed from the simulated contours. It was found that the strain increases with increasing test temperature, while the stress decreases with increasing test temperature. PMID:29278398

  10. Compression in wearable sensor nodes: impacts of node topology.

    PubMed

    Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther

    2014-04-01

    Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can benefit greatly from the use of compressive sensing and can now achieve power consumptions comparable to, or better than, the use of local memory.
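
    One reason compressive sensing suits such nodes is that the sensor-side computation can be reduced to additions. The sketch below builds a sparse binary sensing matrix and forms compressed measurements; the sizes and column density are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, M, d = 512, 128, 8          # window length, measurements, ones per column

    # Each column holds d ones, so y = Phi @ x costs only d*N additions,
    # which is friendly to a low-power MCU with no hardware multiplier.
    Phi = np.zeros((M, N), dtype=np.int8)
    for col in range(N):
        Phi[rng.choice(M, d, replace=False), col] = 1

    x = rng.standard_normal(N)      # one window of sampled physiological signal
    y = Phi @ x                     # compressed measurements to store/transmit
    print("compression ratio:", N / M)
    ```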

  11. High-energy synchrotron X-ray radiography of shock-compressed materials

    NASA Astrophysics Data System (ADS)

    Rutherford, Michael E.; Chapman, David J.; Collinson, Mark A.; Jones, David R.; Music, Jasmina; Stafford, Samuel J. P.; Tear, Gareth R.; White, Thomas G.; Winters, John B. R.; Drakopoulos, Michael; Eakins, Daniel E.

    2015-06-01

    This presentation will discuss the development and application of a high-energy (50 to 250 keV) synchrotron X-ray imaging method to study shock-compressed, high-Z samples at Beamline I12 at the Diamond Light Source synchrotron (Rutherford-Appleton Laboratory, UK). Shock waves are driven into materials using a portable, single-stage gas gun designed by the Institute of Shock Physics. Following plate impact, material deformation is probed in-situ by white-beam X-ray radiography and complementary velocimetry diagnostics. The high energies, large beam size (13 x 13 mm), and appreciable sample volumes (~1 cm³) viable for study at Beamline I12 complement existing in-house pulsed X-ray capabilities and studies at the Dynamic Compression Sector. The authors gratefully acknowledge the ongoing support of Imperial College London, EPSRC, STFC and the Diamond Light Source, and AWE Plc.

  12. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  13. Quenchable compressed graphite synthesized from neutron-irradiated highly oriented pyrolytic graphite in high pressure treatment at 1500 °C

    NASA Astrophysics Data System (ADS)

    Niwase, Keisuke; Terasawa, Mititaka; Honda, Shin-ichi; Niibe, Masahito; Hisakuni, Tomohiko; Iwata, Tadao; Higo, Yuji; Hirai, Takeshi; Shinmei, Toru; Ohfuji, Hiroaki; Irifune, Tetsuo

    2018-04-01

    The superhard material "compressed graphite" (CG) has been reported to form under compression of graphite at room temperature; however, it reverts to graphite upon decompression. Neutron-irradiated graphite, on the other hand, is a unique material for the synthesis of new carbon phases, as shown by the formation of an amorphous diamond by shock compression. Here, we investigate the structural changes of highly oriented pyrolytic graphite (HOPG) irradiated with neutrons to a fluence of 1.4 × 10²⁴ n/m² under static pressure. The neutron-irradiated HOPG sample was compressed to 15 GPa at room temperature, and the temperature was then increased up to 1500 °C. X-ray diffraction and high-resolution transmission electron microscopy on the recovered sample clearly showed the formation of a significant amount of quenchable CG together with ordinary graphite. The formation of hexagonal and cubic diamonds was also confirmed. The effect of irradiation-induced defects on the synthesis of quenchable CG under high-pressure, high-temperature treatment is discussed.

  14. Laser-driven magnetic-flux compression in high-energy-density plasmas.

    PubMed

    Gotchev, O V; Chang, P Y; Knauer, J P; Meyerhofer, D D; Polomarov, O; Frenje, J; Li, C K; Manuel, M J-E; Petrasso, R D; Rygg, J R; Séguin, F H; Betti, R

    2009-11-20

    The demonstration of magnetic field compression to many tens of megagauss in cylindrical implosions of inertial confinement fusion targets is reported for the first time. The OMEGA laser [T. R. Boehly, Opt. Commun. 133, 495 (1997); doi:10.1016/S0030-4018(96)00325-2] was used to implode cylindrical CH targets filled with deuterium gas and seeded with a strong external field (>50 kG) from a specially developed magnetic pulse generator. This seed field was trapped (frozen) in the shock-heated gas fill and compressed by the imploding shell at a high implosion velocity, minimizing the effect of resistive flux diffusion. The magnetic fields in the compressed core were probed via proton deflectometry using the fusion products from an imploding D³He target. Line-averaged magnetic fields between 30 and 40 MG were observed.

  15. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. All speech has a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. We implemented the methods using MATLAB 7.0. The methods gave good results and performance in compressing the speech, and listening tests showed that efficient, high-quality coding was achieved.
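
    A minimal sketch of LBG codebook training by codeword splitting and Lloyd iterations follows (toy data and parameters are assumed; the paper's MATLAB implementation and the ADPCM coder are not reproduced).

    ```python
    import numpy as np

    def lbg(training, n_codewords=8, eps=1e-3, n_iter=20):
        """Train a vector-quantization codebook by repeated splitting."""
        codebook = training.mean(axis=0, keepdims=True)
        while codebook.shape[0] < n_codewords:
            # Split every codeword into a +/- perturbed pair.
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
            for _ in range(n_iter):  # Lloyd iterations refine the split
                d = ((training[:, None, :] - codebook[None]) ** 2).sum(-1)
                nearest = d.argmin(axis=1)
                for j in range(codebook.shape[0]):
                    members = training[nearest == j]
                    if len(members):
                        codebook[j] = members.mean(axis=0)
        return codebook

    frames = np.random.default_rng(6).standard_normal((500, 4))  # toy "speech"
    cb = lbg(frames)
    print("codebook shape:", cb.shape)  # each frame is then coded by an index
    ```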

  16. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

    Lempel-Ziv-Welch (LZW) Compression ... Lossless Compression Tests Results ... Exact ... since IBM holds the patent for this technique. The LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a
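
    For reference, the core LZW routine the report evaluates fits in a few lines; this is the textbook algorithm (greedily extend the longest known phrase, emit its code, and add the extended phrase to the dictionary), not the report's own implementation.

    ```python
    def lzw_compress(data: bytes) -> list[int]:
        """Return the list of LZW codes for the input byte string."""
        table = {bytes([i]): i for i in range(256)}  # single-byte phrases
        phrase, out = b"", []
        for byte in data:
            candidate = phrase + bytes([byte])
            if candidate in table:
                phrase = candidate                   # keep growing the phrase
            else:
                out.append(table[phrase])            # emit code for known phrase
                table[candidate] = len(table)        # learn the new phrase
                phrase = bytes([byte])
        if phrase:
            out.append(table[phrase])
        return out

    codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
    print(len(codes), "codes for 24 input bytes")
    ```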

  17. A Streaming PCA VLSI Chip for Neural Data Compression.

    PubMed

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25, with 8% reconstruction error and 3.05-µW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
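
    Streaming PCA can be illustrated with Oja's rule, which updates the leading principal direction one sample at a time without buffering a data matrix. This is a sketch of the algorithm family only; the chip's exact update, fixed-point arithmetic, and channel sharing are not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_dir = np.array([0.8, 0.6])
    # Synthetic 2-channel stream dominated by one direction plus noise.
    samples = rng.standard_normal((5000, 2)) * 0.1 + \
              np.outer(rng.standard_normal(5000), true_dir)

    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for n, x in enumerate(samples, start=1):
        eta = 1.0 / (100 + n)             # decaying learning rate
        y = w @ x                         # 1-D projection = compressed sample
        w += eta * y * (x - y * w)        # Oja's update keeps ||w|| near 1

    # Learned direction matches the true one up to sign.
    print(w / np.linalg.norm(w), true_dir / np.linalg.norm(true_dir))
    ```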

  18. 100J Pulsed Laser Shock Driver for Dynamic Compression Research

    NASA Astrophysics Data System (ADS)

    Wang, X.; Sethian, J.; Bromage, J.; Fochs, S.; Broege, D.; Zuegel, J.; Roides, R.; Cuffney, R.; Brent, G.; Zweiback, J.; Currier, Z.; D'Amico, K.; Hawreliak, J.; Zhang, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Logos Technologies and the Laboratory for Laser Energetics (LLE, University of Rochester), in partnership with Washington State University, have designed, built and deployed a one-of-a-kind 100 J pulsed UV (351 nm) laser system to perform real-time x-ray diffraction and imaging experiments in laser-driven compression experiments at the Dynamic Compression Sector (DCS) at the Advanced Photon Source, Argonne National Laboratory. The laser complements the other dynamic compression drivers at DCS. The laser system features beam smoothing for 2-d spatially uniform loading of samples and four highly reproducible temporal profiles (total pulse duration: 5-15 ns) to accommodate a wide variety of scientific needs. Other pulse shapes can be achieved as the experimental needs evolve. Timing of the laser pulse is highly precise (<200 ps) to allow accurate synchronization of the x-rays with the dynamic compression event. Details of the laser system, its operating parameters, and representative results will be presented. Work supported by DOE/NNSA.

  19. Compressive light field imaging

    NASA Astrophysics Data System (ADS)

    Ashok, Amit; Neifeld, Mark A.

    2010-04-01

    Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus make photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager to achieve an equivalent light field reconstruction quality.

  20. Wave energy devices with compressible volumes.

    PubMed

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  1. Wave energy devices with compressible volumes

    PubMed Central

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-01-01

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s. PMID:25484609

  2. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
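
    The block pipeline described above fits in a few lines; a minimal sketch for one 8x8 block, where the uniform quantization matrix Q is a placeholder rather than the JPEG standard table:

        import numpy as np
        from scipy.fft import dctn, idctn

        block = np.arange(64, dtype=float).reshape(8, 8)   # toy image block
        Q = np.full((8, 8), 16.0)             # placeholder quantization matrix; coarser Q => lower entropy
        coeffs = dctn(block, norm='ortho')    # block transform
        symbols = np.round(coeffs / Q)        # quantization is where the compression happens
        restored = idctn(symbols * Q, norm='ortho')   # decoder-side dequantize + inverse transform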

  3. Loaded delay lines for future RF pulse compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.M.; Wilson, P.B.; Kroll, N.M.

    1995-05-01

    The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE0 modes, one with a high group velocity and one with a group velocity on the order of 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time-domain pulse degradation and to Ohmic losses.

  4. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to improve the subjective and objective quality of degraded images at low sampling rates, save storage space, and reduce computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing with two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, within the compressed sensing framework. A small amount of sparse high-frequency information is first obtained in the frequency domain. The TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. Experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
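
    For intuition, the iteration underlying this family of methods can be sketched with its simpler one-step relative, ISTA (TwIST adds a two-step update on top of this); the matrix A, the data y, and all parameters below are illustrative assumptions:

        import numpy as np

        def ista(A, y, lam=0.01, iters=200):
            """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by gradient steps
            followed by soft thresholding (the one-step cousin of TwIST)."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - step * A.T @ (A @ x - y)      # gradient step on the data term
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((64, 256)) / 8.0      # undersampled measurement matrix
        x_true = np.zeros(256)
        x_true[rng.choice(256, 8, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true)                   # sparse recovery from 64 of 256 samples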

  5. New Algorithms and Lower Bounds for Sequential-Access Data Compression

    NASA Astrophysics Data System (ADS)

    Gagie, Travis

    2009-02-01

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.

  6. Lossy compression for Animated Web Visualisation

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

    This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high-definition videos. This enabled us to achieve high rates of compression while remaining compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem; it is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.

  7. Configuring and Characterizing X-Rays for Laser-Driven Compression Experiments at the Dynamic Compression Sector

    NASA Astrophysics Data System (ADS)

    Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100 J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters is used to ensure that the sample is exposed to only a single x-ray pulse (~80 ps) during the dynamic compression event; this requires highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.

  8. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    PubMed

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD executes efficient lossy compression with high fidelity; in the second stage, SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
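
    The two figures of merit quoted above are simple to compute; the sketch below uses one common PRD definition (without mean removal), which may differ in detail from the paper's:

        import numpy as np

        def compression_ratio(original_bits, compressed_bits):
            # CR: original size over compressed size
            return original_bits / compressed_bits

        def prd(x, x_rec):
            # percentage root-mean-square difference between signal and reconstruction
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

        x = np.sin(np.linspace(0, 10, 1000))          # stand-in ECG trace
        print(prd(x, x + 0.01))                       # small distortion -> small PRD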

  9. Compression Therapy: Clinical and Experimental Evidence

    PubMed Central

    2012-01-01

    Aim: A review is given on the different tools of compression therapy and their mode of action. Methods: Interface pressure and stiffness of compression devices, alone or in combination can be measured in vivo. Hemodynamic effects have been demonstrated by measuring venous volume and flow velocity using MRI, Duplex and radioisotopes, venous reflux and venous pumping function using plethysmography and phlebodynamometry. Oedema reduction can be measured by limb volumetry. Results: Compression stockings exerting a pressure of ~20 mmHg on the distal leg are able to increase venous blood flow velocity in the supine position and to prevent leg swelling after prolonged sitting and standing. In the upright position, an interface pressure of more than 50 mmHg is needed for intermittent occlusion of incompetent veins and for a reduction of ambulatory venous hypertension during walking. Such high intermittent interface pressure peaks exerting a “massaging effect” may rather be achieved by short stretch multilayer bandages than by elastic stockings. Conclusion: Compression is a cornerstone in the management of venous and lymphatic insufficiency. However, this treatment modality is still underestimated and deserves better understanding and improved educational programs, both for patients and medical staff. PMID:23641263

  10. Poor Results for High Achievers

    ERIC Educational Resources Information Center

    Bui, Sa; Imberman, Scott; Craig, Steven

    2012-01-01

    Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…

  11. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by the method of spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
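
    Our reading of the "spectrum cutting" step can be sketched as keeping a zigzag-ordered prefix of DCT coefficients; the chaotic DFrRT encryption stage is deliberately omitted, and `keep` is an illustrative parameter:

        import numpy as np
        from scipy.fft import dctn, idctn

        def zigzag_indices(n):
            # traverse anti-diagonals, alternating direction as in the JPEG zigzag scan
            return sorted(((i, j) for i in range(n) for j in range(n)),
                          key=lambda p: (p[0] + p[1],
                                         p[1] if (p[0] + p[1]) % 2 else p[0]))

        def cut_spectrum(img, keep):
            spec = dctn(img, norm='ortho')
            mask = np.zeros_like(spec)
            for i, j in zigzag_indices(img.shape[0])[:keep]:
                mask[i, j] = 1.0            # retain only the first `keep` coefficients
            return idctn(spec * mask, norm='ortho')

        approx = cut_spectrum(np.eye(8), keep=16)   # 4:1 cut of a toy 8x8 image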

  12. Aerodynamics inside a rapid compression machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Sung, Chih-Jen

    2006-04-15

    The aerodynamics inside a rapid compression machine after the end of compression is investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternate hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in a drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match the experimental observations. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine.

  13. Measuring Ionization in Highly Compressed, Near-Degenerate Plasmas

    NASA Astrophysics Data System (ADS)

    Doeppner, Tilo; Kraus, D.; Neumayer, P.; Bachmann, B.; Collins, G. W.; Divol, L.; Kritcher, A.; Landen, O. L.; Pak, A.; Weber, C.; Fletcher, L.; Glenzer, S. H.; Falcone, R. W.; Saunders, A.; Chapman, D.; Baggott, R.; Gericke, D. O.; Yi, A.

    2016-10-01

    A precise knowledge of ionization at given temperature and density is required to accurately model compressibility and heat capacity of materials at extreme conditions. We use x-ray Thomson scattering to characterize the plasma conditions in plastic and beryllium capsules near stagnation in implosion experiments at the National Ignition Facility. We expect the capsules to be compressed to more than 20x and electron densities approaching 10^25 cm^-3, corresponding to a Fermi energy of 170 eV. Zinc Heα x-rays (9 keV) scattering at 120° off the plasma yields high sensitivity to K-shell ionization, while at the same time constraining density and temperature. We will discuss recent results in the context of ionization potential depression at these extreme conditions. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
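
    The quoted numbers are mutually consistent: for a degenerate electron gas the Fermi energy is E_F = hbar^2 (3 pi^2 n_e)^(2/3) / (2 m_e), and plugging in the stated density reproduces the stated E_F:

        import numpy as np

        hbar = 1.054571817e-34     # J s
        m_e = 9.1093837015e-31     # kg
        eV = 1.602176634e-19       # J
        n_e = 1e25 * 1e6           # 10^25 cm^-3 expressed in m^-3
        E_F = hbar**2 * (3 * np.pi**2 * n_e) ** (2 / 3) / (2 * m_e)
        print(E_F / eV)            # ~169 eV, matching the ~170 eV quoted above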

  14. 16th Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    The report presents data from 2,043 questionnaires completed by secondary student leaders and high achievers. Ss were selected for recognition in "Who's Who Among American High School Students" by their principals or guidance counselors, national youth organizations, or the publishing company because of high achievement in academics, activities,…

  15. Multicontrast reconstruction using compressed sensing with low rank and spatially varying edge-preserving constraints for high-resolution MR characterization of myocardial infarction.

    PubMed

    Zhang, Li; Athavale, Prashant; Pop, Mihaela; Wright, Graham A

    2017-08-01

    To enable robust reconstruction for highly accelerated three-dimensional multicontrast late enhancement imaging to provide improved MR characterization of myocardial infarction with isotropic high spatial resolution. A new method using compressed sensing with low rank and spatially varying edge-preserving constraints (CS-LASER) is proposed to improve the reconstruction of fine image details from highly undersampled data. CS-LASER leverages the low-rank structure of the multicontrast volume series in MR relaxation and integrates spatially varying edge preservation into the explicit low-rank-constrained compressed sensing framework using weighted total variation. With an orthogonal temporal basis pre-estimated, a multiscale iterative reconstruction framework is proposed to enable the practice of CS-LASER with spatially varying weights of appropriate accuracy. In in vivo pig studies with both retrospective and prospective undersampling, CS-LASER preserved fine image details better and presented tissue characteristics with a higher degree of consistency with histopathology, particularly in the peri-infarct region, than an alternative technique at different acceleration rates. An isotropic resolution of 1.5 mm was achieved in vivo within a single breath-hold using the proposed techniques. Accelerated three-dimensional multicontrast late enhancement with CS-LASER can achieve improved MR characterization of myocardial infarction with high spatial resolution. Magn Reson Med 78:598-610, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Three dimensional range geometry and texture data compression with space-filling curves.

    PubMed

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
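
    The core mapping, as we read it, quantizes the normalized unwrapped phase to 16 bits and follows a Hilbert curve of order 8 to split that value across two 8-bit channels; the d-to-(x, y) routine below is the standard iterative conversion, and the texture and video-coding stages are omitted:

        def d2xy(order, d):
            """Map distance d along a Hilbert curve over a 2^order x 2^order grid to (x, y)."""
            x = y = 0
            s, t = 1, d
            while s < (1 << order):
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                 # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        phase = 0.73                        # normalized unwrapped phase in [0, 1)
        d = int(phase * (2 ** 16 - 1))      # 16-bit depth along the curve
        ch1, ch2 = d2xy(8, d)               # two 8-bit channels; the third stays free for texture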

  17. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and their results corroborate the benefits of the proposed methodology.

  18. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to the well-known approaches to increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.

  19. Squish: Near-Optimal Compression for Archival of Relational Datasets

    PubMed Central

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028

  20. H.264/AVC Video Compression on Smartphones

    NASA Astrophysics Data System (ADS)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  1. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: one is called palindromes, or reverse complements, and the other is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences are available to it. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, then our algorithm represents it with a length and a distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
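
    The reverse-complement ("palindrome") matching added to CTW can be illustrated naively; the authors' hash-plus-dynamic-programming search is replaced here by a plain substring scan, and all sequences are made up:

        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def reverse_complement(seq):
            return seq.translate(COMPLEMENT)[::-1]

        def find_palindrome(history, window, min_len=4):
            """Return (position, length) if the reverse complement of a prefix of
            `window` occurs in the already-encoded `history`, longest match first."""
            for length in range(len(window), min_len - 1, -1):
                pos = history.find(reverse_complement(window[:length]))
                if pos != -1:
                    return pos, length      # encode as (distance, length) instead of raw symbols
            return None

        print(find_palindrome("ACGTTTGCAACGGT", "ACCGT"))   # -> (9, 5)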

  2. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    NASA Astrophysics Data System (ADS)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it, while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged, and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potential of accessing complementary information from scientific studies of laser-driven shock compression.

  3. Homogenous charge compression ignition engine having a cylinder including a high compression space

    DOEpatents

    Agama, Jorge R.; Fiveland, Scott B.; Maloney, Ronald P.; Faletti, James J.; Clarke, John M.

    2003-12-30

    The present invention relates generally to the field of homogeneous charge compression engines. In these engines, fuel is injected upstream or directly into the cylinder when the power piston is relatively close to its bottom dead center position. The fuel mixes with air in the cylinder as the power piston advances to create a relatively lean homogeneous mixture that preferably ignites when the power piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. Thus, the present invention divides the homogeneous charge between a controlled volume higher compression space and a lower compression space to better control the start of ignition.

  4. Real-time compression of raw computed tomography data: technology, architecture, and benefits

    NASA Astrophysics Data System (ADS)

    Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert

    2009-02-01

    Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/s using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits beyond lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
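
    Prism CT itself is proprietary, but one of the baselines named above, Golomb-Rice coding, is easy to sketch: a non-negative residual is split into a unary quotient and k binary remainder bits (in practice k would be tuned per block of projection samples):

        def rice_encode(value, k):
            # unary-coded quotient, then k-bit binary remainder
            q, r = value >> k, value & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        residuals = [5, 0, 12, 3]           # made-up prediction residuals
        bits = "".join(rice_encode(v, 3) for v in residuals)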

  5. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  6. Mathematics Achievement in High- and Low-Achieving Secondary Schools

    ERIC Educational Resources Information Center

    Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza

    2015-01-01

    This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…

  7. Lossless compression techniques for maskless lithography data

    NASA Astrophysics Data System (ADS)

    Dai, Vito; Zakhor, Avideh

    2002-07-01

    Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
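
    Since the candidate decoder is LZ77-based, its inner loop is worth seeing; the (offset, length, literal) token format below is a common textbook variant, not necessarily the exact layout the authors optimized:

        def lz77_decode(tokens):
            """Decode (offset, length, literal) triples; the byte-by-byte copy
            handles overlapping matches, which is what makes runs cheap."""
            out = bytearray()
            for offset, length, literal in tokens:
                start = len(out) - offset
                for i in range(length):
                    out.append(out[start + i])
                out.append(literal)
            return bytes(out)

        # "abcabcabcd" from four tokens: the last copy overlaps itself
        data = lz77_decode([(0, 0, ord("a")), (0, 0, ord("b")),
                            (0, 0, ord("c")), (3, 6, ord("d"))])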

  8. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate savings compared to using PSNR or SSIM. Moreover, we performed a subjective test on text recognition from compressed images and observed that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment and demonstrates a promising direction for achieving higher compression ratios for specific semantic analysis tasks.

  9. Effect of multilayer high-compression bandaging on ankle range of motion and oxygen cost of walking

    PubMed Central

    Roaldsen, K S; Elfving, B; Stanghelle, J K; Mattsson, E

    2012-01-01

    Objective To evaluate the effects of multilayer high-compression bandaging on ankle range of motion, oxygen consumption and subjective walking ability in healthy subjects. Method A volunteer sample of 22 healthy subjects (10 women and 12 men; aged 67 [63–83] years) were studied. The intervention included treadmill-walking at self-selected speed with and without multilayer high-compression bandaging (Profore®), randomly selected. The primary outcome variables were ankle range of motion, oxygen consumption and subjective walking ability. Results Total ankle range of motion decreased 4% with compression. No change in oxygen cost of walking was observed. Less than half the subjects reported that walking-shoe comfort or walking distance was negatively affected. Conclusion Ankle range of motion decreased with compression but could probably be counteracted with a regular exercise programme. There were no indications that walking with compression was more exhausting than walking without. Appropriate walking shoes could seem important to secure gait efficiency when using compression garments. PMID:21810941

  10. Evaluation on Compressive Characteristics of Medical Stents Applied by Mesh Structures

    NASA Astrophysics Data System (ADS)

    Hirayama, Kazuki; He, Jianmei

    2017-11-01

    There are concerns about strength reduction and fatigue fracture due to stress concentration in currently used medical stents. To address these problems, stents incorporating mesh structures (meshed stents) are of interest for achieving long life and high strength in medical stents. The purpose of this study is to design basic mesh shapes and obtain three-dimensional (3D) meshed stent models for mechanical property evaluation. The influence of the introduced design variables on the compressive characteristics of the meshed stent models is evaluated through finite element analysis using the ANSYS Workbench code. The analytical results show that compressive stiffness changes periodically with compression direction, so averaged results are introduced as the mean compressive stiffness of a meshed stent. Secondly, the compressive flexibility of meshed stents can be improved by increasing the angle in proportion to the arm length of the basic mesh shape. Increasing the number of basic mesh shapes arranged in the stent's circumferential direction tends to increase the compressive rigidity of a meshed stent. Finally, reducing the mesh line width is found to be effective in improving the compressive flexibility of meshed stents.

  11. A Study on Homogeneous Charge Compression Ignition Gasoline Engines

    NASA Astrophysics Data System (ADS)

    Kaneko, Makoto; Morikawa, Koji; Itoh, Jin; Saishu, Youhei

    A new engine concept consisting of HCCI combustion for low and midrange loads and spark ignition combustion for high loads was introduced. The timing of the intake valve closing was adjusted to alter the negative valve overlap and effective compression ratio to provide suitable HCCI conditions. The effect of mixture formation on auto-ignition was also investigated using a direct injection engine. As a result, HCCI combustion was achieved with a relatively low compression ratio when the intake air was heated by internal EGR. The resulting combustion was at a high thermal efficiency, comparable to that of modern diesel engines, and produced almost no NOx emissions or smoke. The mixture stratification increased the local A/F concentration, resulting in higher reactivity. A wide range of combustible A/F ratios was used to control the compression ignition timing. Photographs showed that the flame filled the entire chamber during combustion, reducing both emissions and fuel consumption.

  12. SCALCE: boosting sequence compression algorithms using locally consistent encoding

    PubMed Central

    Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk

    2012-01-01

    Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a ‘boosting’ scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for

  13. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    PubMed

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip

  14. THE TURBULENT DYNAMO IN HIGHLY COMPRESSIBLE SUPERSONIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Federrath, Christoph; Schober, Jennifer; Bovino, Stefano

    The turbulent dynamo may explain the origin of cosmic magnetism. While the exponential amplification of magnetic fields has been studied for incompressible gases, little is known about dynamo action in highly compressible, supersonic plasmas, such as the interstellar medium of galaxies and the early universe. Here we perform the first quantitative comparison of theoretical models of the dynamo growth rate and saturation level with three-dimensional magnetohydrodynamical simulations of supersonic turbulence with grid resolutions of up to 1024^3 cells. We obtain numerical convergence and find that dynamo action occurs for both low and high magnetic Prandtl numbers Pm = ν/η = 0.1-10 (the ratio of viscous to magnetic dissipation), which had so far only been seen for Pm ≥ 1 in supersonic turbulence. We measure the critical magnetic Reynolds number, Rm_crit = 129 (+43/−31), showing that the compressible dynamo is almost as efficient as in incompressible gas. Considering the physical conditions of the present and early universe, we conclude that magnetic fields need to be taken into account during structure formation from the early to the present cosmic ages, because they suppress gas fragmentation and drive powerful jets and outflows, both greatly affecting the initial mass function of stars.

  15. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine (DICOM) format to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
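
    Before any movie encoding, the CT data must be collapsed through a display window; a sketch of the quoted Universal Trauma Window (-200 HU level, 1,500 HU width), with the MPEG-4 encoding step itself left out:

        import numpy as np

        def apply_window(hu, level=-200.0, width=1500.0):
            # map Hounsfield units to 8-bit frame values through a level/width window
            lo, hi = level - width / 2, level + width / 2
            return np.uint8(np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255)

        frame = apply_window(np.array([[-950.0, 40.0], [300.0, 1200.0]]))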

  16. GMZ: A GML Compression Model for WebGIS

    NASA Astrophysics Data System (ADS)

    Khandelwal, A.; Rajan, K. S.

    2017-09-01

    Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience of defining custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, the simple features profile, coverages, etc. The simple features profile (SFP) is a simpler subset of the GML profile with support for point, line and polygon geometries. SFP has been constructed to make sure it covers the most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default. But SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which leaves immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate that GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.

  17. Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.

    2001-01-01

    GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next-generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large-area detector arrays and readouts. Timely high-spatial-resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in providing many scientific returns that traditionally could only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly, high-vertical-density winds over the full Earth disk is revolutionary. However, these new technologies bring many challenges for data transmission, archiving, and geophysical data processing. In this paper, we focus on data volume and downlink issues by conducting a GIFTS data compression experiment. We discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and for compression of uncalibrated and un-normalized interferograms. The effects of compression on signal degradation and noise reduction in the interferogram and spectral domains are highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.

  18. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
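
    A minimal stand-in for the three-stage chain described above, using a one-level Haar filter bank for decorrelation and a uniform quantizer; the adaptive arithmetic coder is omitted, and the seismic trace is synthetic:

        import numpy as np

        def haar_analysis(x):
            # one level of a two-band filter bank: low band (averages), high band (details)
            pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
            low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
            high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
            return low, high

        def quantize(band, step):
            return np.round(band / step).astype(int)   # controlled distortion enters here

        trace = np.sin(np.linspace(0, 20, 1024))       # synthetic seismic-like waveform
        low, high = haar_analysis(trace)
        symbols = quantize(low, 0.05), quantize(high, 0.05)  # next: entropy-code per block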

  19. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    PubMed

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms are a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as novel proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications; thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency that accompanies an increased compression rate. The proposed schemes offer a considerable advantage, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive level of compression with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the

  20. Light-weight reference-based compression of FASTQ data.

    PubMed

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next-generation sequencing (NGS) data has posed big challenges for data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to ones not relying on any reference. This paper presents a lossless, light-weight, reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams, in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
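
    The "run-length-limited" idea for quality strings can be pictured as run-length encoding with a capped run; this sketch is our illustration, not LW-FQZip's actual on-disk format:

        def rle_limited(qualities, max_run=255):
            # collapse repeats into (symbol, count) pairs, never letting count exceed max_run
            out, i = [], 0
            while i < len(qualities):
                j = i
                while j < len(qualities) and qualities[j] == qualities[i] and j - i < max_run:
                    j += 1
                out.append((qualities[i], j - i))
                i = j
            return out

        print(rle_limited("IIIIIHHH###"))   # -> [('I', 5), ('H', 3), ('#', 3)]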

  1. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rates and reconstruction quality can be obtained at bitrates above 15,000 kbps.

  2. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    PubMed

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, there are still many difficulties because of the lack of sufficient protein structural and functional information. It is therefore highly desirable to develop methods based only on amino acid sequences for predicting PPIs. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, taking the sparsity property of the original signal into account. What makes compressed sensing even more attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than what is usually considered necessary in traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional protein discrete models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Thermodynamical effects and high resolution methods for compressible fluid flows

    NASA Astrophysics Data System (ADS)

    Li, Jiequan; Wang, Yue

    2017-08-01

    One of the fundamental differences of compressible fluid flows from incompressible fluid flows is the involvement of thermodynamics. This difference should be manifested in the design of numerical schemes. Unfortunately, the role of entropy, expressing irreversibility, is often neglected even though the entropy inequality, as a conceptual derivative, is verified for some first order schemes. In this paper, we refine the GRP solver to illustrate how the thermodynamical variation is integrated into the design of high resolution methods for compressible fluid flows and demonstrate numerically the importance of thermodynamic effects in the resolution of strong waves. As a by-product, we show that the GRP solver works for generic equations of state, and is independent of technical arguments.

  4. Dynamic compression of copper to over 450 GPa: A high-pressure standard

    DOE PAGES

    Kraus, R. G.; Davis, J. -P.; Seagle, C. T.; ...

    2016-04-12

    We obtained an absolute stress-density path for shocklessly compressed copper to over 450 GPa. A magnetic pressure drive is temporally tailored to generate shockless compression waves through over 2.5-mm-thick copper samples. Furthermore, the free-surface velocity data is analyzed for Lagrangian sound velocity using the iterative Lagrangian analysis (ILA) technique, which relies upon the method of characteristics. We correct for the effects of strength and plastic work heating to determine an isentropic compression path. By assuming a Debye model for the heat capacity, we can further correct the isentrope to an isotherm. Finally, our determination of the isentrope and isotherm of copper represents a highly accurate pressure standard for copper to over 450 GPa.

  5. Delivery of compression therapy for venous leg ulcers.

    PubMed

    Zarchi, Kian; Jemec, Gregor B E

    2014-07-01

    Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates are high. Data on the compression therapy actually provided are limited. The aims were to assess whether home care nurses achieve adequate subbandage pressure when treating patients with venous leg ulcers, and which factors predict the ability to achieve optimal pressure. We performed a cross-sectional study from March 1, 2011, through March 31, 2012, in home care centers in 2 Danish municipalities. Sixty-eight home care nurses who managed wounds in their everyday practice were included. We obtained participant-masked measurements of the subbandage pressure achieved with an elastic, long-stretch, single-component bandage; an inelastic, short-stretch, single-component bandage; and a multilayer, 2-component bandage, and assessed the association between achievement of optimal pressure and years in the profession, attendance at wound care educational programs, previous work experience, and confidence in bandaging ability. A substantial variation in the exerted pressure was found: subbandage pressures ranged from 11 mm Hg exerted by an inelastic bandage to 80 mm Hg exerted by a 2-component bandage. The optimal subbandage pressure range, defined as 30 to 50 mm Hg, was achieved by 39 of 62 nurses (63%) applying the 2-component bandage, 28 of 68 nurses (41%) applying the elastic bandage, and 27 of 68 nurses (40%) applying the inelastic bandage. More than half the nurses applying the inelastic (38 [56%]) and elastic (36 [53%]) bandages obtained pressures less than 30 mm Hg. At best, only 17 of 62 nurses (27%) using the 2-component bandage achieved subbandage pressure within the range they aimed for. In this study, none of the investigated factors was associated with the ability to apply a bandage with optimal pressure. This study demonstrates the difficulty of achieving the desired subbandage pressure and indicates that a substantial proportion of…

  6. LFQC: a lossless compression algorithm for FASTQ files

    PubMed Central

    Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole-genome sequencing. One of the biggest challenges posed by modern sequencing technology is the economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor (LFQC). We have compared our algorithm with other state-of-the-art big-data compression algorithms, namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012) and DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148

  7. Compression of regions in the global advanced very high resolution radiometer 1-km data set

    NASA Technical Reports Server (NTRS)

    Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.

    1994-01-01

    The global Advanced Very High Resolution Radiometer (AVHRR) 1-km data set is a 10-band image produced at the USGS EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas, which are identical in each band but vary between data sets; they comprise over 75 percent of this 9.7-gigabyte image. The mask is compressed once and stored separately from the land data, which are compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land data for each band are compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and prevents fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.

  8. Quality ratings of frequency-compressed speech by participants with extensive high-frequency dead regions in the cochlea

    PubMed Central

    Salorio-Corbetto, Marina; Baer, Thomas; Moore, Brian C. J.

    2017-01-01

    Abstract Objective: The objective was to assess the degradation of speech sound quality produced by frequency compression for listeners with extensive high-frequency dead regions (DRs). Design: Quality ratings were obtained using values of the starting frequency (Sf) of the frequency compression both below and above the estimated edge frequency, fe, of each DR. Thus, the value of Sf often fell below the lowest value currently used in clinical practice. Several compression ratios (CRs) were used for each value of Sf. Stimuli were sentences processed via a prototype hearing aid based on the Phonak Exélia Art P. Study sample: Five participants (eight ears) with extensive high-frequency DRs were tested. Results: Reductions of sound quality produced by frequency compression were small to moderate. Ratings decreased significantly with decreasing Sf and increasing CR. The mean ratings were lowest for the lowest Sf and highest CR. Ratings varied across participants, with one participant rating frequency compression lower than no frequency compression even when Sf was above fe. Conclusions: Frequency compression degraded sound quality somewhat for this small group of participants with extensive high-frequency DRs. The degradation was greater for lower values of Sf relative to fe and for greater values of CR. Results varied across participants. PMID:27724057

  9. Investigations of Compression Shocks and Boundary Layers in Gases Moving at High Speed

    NASA Technical Reports Server (NTRS)

    Ackeret, J.; Feldmann, F.; Rott, N.

    1947-01-01

    The mutual influence of compression shocks and friction boundary layers was investigated by means of high-speed wind tunnels. Schlieren optics provided a clear picture of the flow phenomena and were used for determining the location of the compression shocks and for measuring shock angles and Mach angles. Pressure measurements and humidity measurements were also taken into consideration. Results, along with a mathematical model, are described.

  10. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
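
    Since the analysis rests on a DPCM front end feeding a noiseless (entropy) coder, a minimal previous-sample DPCM sketch may help. It is not the paper's exact predictor; the one-dimensional previous-pixel predictor is an assumption made for brevity.

    ```python
    # Hedged DPCM sketch: residuals of a previous-sample predictor typically
    # have lower entropy than raw values, so an entropy coder compresses
    # them better. Decoding is the exact inverse, hence lossless.
    import numpy as np

    def dpcm_encode(row):
        """Residuals of a previous-sample predictor; first sample kept as-is."""
        row = np.asarray(row, dtype=np.int32)
        res = np.empty_like(row)
        res[0] = row[0]
        res[1:] = row[1:] - row[:-1]
        return res

    def dpcm_decode(res):
        return np.cumsum(res)

    row = np.array([100, 102, 101, 105, 110])
    assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)  # lossless
    print(dpcm_encode(row))   # [100   2  -1   4   5]
    ```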

  11. Superelastic Graphene Aerogel/Poly(3,4-Ethylenedioxythiophene)/MnO2 Composite as Compression-Tolerant Electrode for Electrochemical Capacitors

    PubMed Central

    Lv, Peng; Wang, Yaru; Ji, Chenglong; Yuan, Jiajiao

    2017-01-01

    Ultra-compressible electrodes with high electrochemical performance, reversible compressibility and extreme durability are in high demand for compression-tolerant energy storage devices. Herein, an ultra-compressible ternary composite was synthesized by successively electrodepositing poly(3,4-ethylenedioxythiophene) (PEDOT) and MnO2 into a superelastic graphene aerogel (SEGA). In the SEGA/PEDOT/MnO2 ternary composite, SEGA provides the compressible backbone and conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; and the intermediate PEDOT not only reduces the interface resistance between MnO2 and graphene but also further reinforces the strength of the graphene cell walls. The synergistic effect of the three components in the ternary composite electrode leads to high electrochemical performance and good compression tolerance. The gravimetric capacitance of the compressible ternary composite electrodes reaches 343 F g−1 and retains 97% of this value even at 95% compressive strain. A volumetric capacitance of 147.4 F cm−3 is achieved, which is much higher than that of other graphene-based compressible electrodes. 80% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression strains, indicating extreme durability.

  12. A source-specific model for lossless compression of global Earth data

    NASA Astrophysics Data System (ADS)

    Kess, Barbara Lynne

    A Source-Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) data set, which contains 662 megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters of the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows enables SSM-GED to improve the compression rate by increasing the number of parameters while maintaining a small model cost for each subwindow of data. This lossless compression method is applicable to other large volumes of image data, such as video.

  13. High Efficiency, Low Emissions Homogeneous Charge Compression Ignition (HCCI) Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gravel, Roland; Maronde, Carl; Gehrke, Chris

    2010-10-30

    This is the final report of the High Efficiency Clean Combustion (HECC) Research Program for the U.S. Department of Energy. Work under this co-funded program began in August 2005 and finished in July 2010. The objective of this program was to develop and demonstrate a low-emission, high-thermal-efficiency engine system that met 2010 EPA heavy-duty on-highway truck emissions requirements (0.2 g/bhp-hr NOx, 0.14 g/bhp-hr HC and 0.01 g/bhp-hr PM) with a thermal efficiency of 46%. To achieve this goal, development of diesel homogeneous charge compression ignition (HCCI) combustion was the chosen approach. This report summarizes the development of diesel HCCI combustion and associated enabling technologies that occurred during the HECC program between August 2005 and July 2010. This program showed that although diesel HCCI with conventional US diesel fuel was not a feasible means to achieve the program objectives, the HCCI load range could be increased with a higher-volatility, lower-cetane-number fuel, such as gasoline, if the combustion rate could be moderated to avoid excessive cylinder pressure rise rates. Given the potential efficiency and emissions benefits, continued research of combustion with low-cetane-number fuels and the effects of fuel distillation is recommended. The operation of diesel HCCI was only feasible at part load due to a limited fuel injection window. A 4% fuel consumption benefit versus conventional, low-temperature combustion was realized over the achievable operating range. Several enabling technologies were developed under this program that also benefited non-HCCI combustion. The development of a 300 MPa fuel injector enabled the development of extended lifted-flame combustion. A design methodology for minimizing the heat transfer to jacket water, known as precision cooling, will benefit conventional combustion engines as well as HCCI engines. An advanced combustion control system based on cylinder pressure measurements was developed…

  14. A CAM-based LZ data compression IC

    NASA Technical Reports Server (NTRS)

    Winters, K.; Bode, R.; Schneider, E.

    1993-01-01

    A custom CMOS processor is introduced that implements the Data Compression Lempel-Ziv (DCLZ) standard, a variation of the LZ2 Algorithm. This component presently achieves a sustained compression and decompression rate of 10 megabytes/second by employing an on-chip content-addressable memory for string table storage.
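
    The string-table mechanism behind such LZ-family hardware can be sketched with a plain LZW coder, where a Python dict stands in for the on-chip content-addressable memory that performs the string-table lookups. This is ordinary LZW for illustration, not the DCLZ standard itself.

    ```python
    # Hedged LZW sketch: the dict plays the role the chip's content-addressable
    # memory plays in hardware -- fast string-table lookup during parsing.
    def lzw_compress(data: bytes):
        table = {bytes([i]): i for i in range(256)}   # initial string table
        w, out = b"", []
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc                    # extend the current match
            else:
                out.append(table[w])      # emit code for longest match
                table[wc] = len(table)    # add new string to the table
                w = bytes([byte])
        if w:
            out.append(table[w])
        return out

    print(lzw_compress(b"ABABABA"))       # [65, 66, 256, 258]
    ```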

  15. Unfulfilled Potential: High-Achieving Minority Students and the High School Achievement Gap in Math

    ERIC Educational Resources Information Center

    Kotok, Stephen

    2017-01-01

    This study uses multilevel modeling to examine a subset of the highest performing 9th graders and explores the extent that achievement gaps in math widen for high performing African American and Latino students and their high performing White and Asian peers during high school. Using nationally representative data from the High School Longitudinal…

  16. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  17. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

    This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing arts in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either irregular ECG signals or QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
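
    The 2-D conversion step described above (beat segmentation plus length equalization) can be sketched as follows. QRS detection and period sorting are assumed done elsewhere; the beat boundaries, target width, and toy signal are illustrative, not the paper's settings.

    ```python
    # Hedged sketch of the 1-D -> 2-D preprocessing: cut the signal at given
    # QRS locations, resample each beat to a common length, stack as rows.
    import numpy as np

    def beats_to_image(sig, qrs_idx, width=200):
        """Stack beats (between consecutive QRS indices) as rows of a 2-D array."""
        rows = []
        for a, b in zip(qrs_idx[:-1], qrs_idx[1:]):
            beat = sig[a:b]
            x_old = np.linspace(0.0, 1.0, len(beat))
            x_new = np.linspace(0.0, 1.0, width)
            rows.append(np.interp(x_new, x_old, beat))   # length equalization
        return np.vstack(rows)    # ready for a 2-D codec such as JPEG2000

    sig = np.sin(np.linspace(0, 20 * np.pi, 2000))       # toy "ECG"
    img = beats_to_image(sig, qrs_idx=[0, 190, 410, 600, 820])
    print(img.shape)                                     # (4, 200)
    ```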

  18. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. We then describe some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  19. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle-shift operation controlled by a hyper-chaotic system. The cycle-shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simultaneously simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
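
    The two stages of the scheme, measurement in two directions followed by a cycle shift, can be sketched compactly. In this hedged toy version a fixed integer stands in for the hyper-chaotic key stream, and all sizes and matrix choices are invented for illustration.

    ```python
    # Hedged sketch: 2-D compressive sensing (measure rows and columns)
    # followed by a cycle-shift re-encryption stage.
    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 64, 32                        # image size, measurements per side

    X = rng.random((N, N))               # plain image (toy)
    A = rng.normal(size=(M, N)) / np.sqrt(M)   # row-direction measurements
    B = rng.normal(size=(M, N)) / np.sqrt(M)   # column-direction measurements

    Y = A @ X @ B.T                      # compression + first encryption stage
    key = 7                              # stand-in for a hyper-chaotic sequence
    C = np.roll(Y, key, axis=1)          # cycle-shift re-encryption

    print(X.shape, "->", C.shape)        # (64, 64) -> (32, 32)
    ```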

  20. Counterstereotypic Identity among High-Achieving Black Students

    ERIC Educational Resources Information Center

    Harpalani, Vinay

    2017-01-01

    This article examines how racial stereotypes affect achievement and identity formation among low income, urban Black adolescents. Specifically, the major question addressed is: how do high-achieving Black students succeed academically despite negative stereotypes of their intellectual abilities? Results indicate that high-achieving Black youth,…

  1. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high-definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  2. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic, allowing a given speckle pattern to be reliably reproduced under identical illumination conditions. Here, by exploiting wavelength-dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength-dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
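
    The measurement model described here, known speckle patterns as structured illumination plus a single detector recording one inner product per pattern, reduces to standard compressed sensing. Below is a hedged toy sketch with a plain ISTA solver standing in for the unspecified reconstruction algorithm; the sizes and the Gaussian stand-in for speckle patterns are assumptions.

    ```python
    # Hedged single-pixel-imaging sketch: each "speckle pattern" is one row
    # of a sensing matrix; a sparse solver (plain ISTA) recovers the scene
    # from fewer measurements than pixels.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 256, 100                        # pixels, measurements (m < n)

    x = np.zeros(n); x[[10, 50, 200]] = [1.0, -0.5, 0.8]   # sparse toy scene
    P = rng.normal(size=(m, n)) / np.sqrt(m)               # "speckle" patterns
    y = P @ x                                              # detector samples

    def ista(P, y, lam=0.01, iters=500):
        L = np.linalg.norm(P, 2) ** 2      # Lipschitz constant of the gradient
        x_hat = np.zeros(P.shape[1])
        for _ in range(iters):
            g = x_hat - (P.T @ (P @ x_hat - y)) / L   # gradient step
            x_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        return x_hat

    x_hat = ista(P, y)
    print(np.argsort(np.abs(x_hat))[-3:])  # indices of the recovered spikes
    ```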

  3. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  4. Multilayer compressive seal for sealing in high temperature devices

    DOEpatents

    Chou, Yeong-Shyung [Richland, WA; Stevenson, Jeffry W [Richland, WA

    2007-08-21

    A mica based compressive seal has been developed exhibiting superior thermal cycle stability when compared to other compressive seals known in the art. The seal is composed of compliant glass or metal interlayers and a sealing (gasket) member layer composed of mica that is infiltrated with a glass forming material, which effectively reduces leaks within the seal. The compressive seal shows approximately a 100-fold reduction in leak rates compared with previously developed hybrid seals after from 10 to about 40 thermal cycles under a compressive stress of from 50 psi to 100 psi at temperatures in the range from 600.degree. C. to about 850.degree. C.

  5. Self Regulated Learning of High Achievers

    ERIC Educational Resources Information Center

    Rathod, Ami

    2010-01-01

    The study was conducted on high achievers of Senior Secondary school. Main objectives were to identify the self regulated learners among the high achievers, to find out dominant components and characteristics operative in self regulated learners and to compare self regulated learning of learners with respect to their subject (science and non…

  6. High-harmonic generation in ZnO driven by self-compressed mid-infrared pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholam-Mirzaei, Shima; Beetar, John E.; Chacon, Alexis

    Progress in attosecond science has relied on advancements in few-cycle pulse generation technology and its application to high-order harmonic generation. Traditionally, self-phase modulation in bulk solids has been used for the compression of moderate-energy pulses, additionally exhibiting favorable dispersion properties for mid-infrared (mid-IR) pulses. For this study, we use the anomalous dispersion of Y3Al5O12 (YAG) to self-compress many-cycle pulses from a 50 kHz mid-IR OPA down to produce sub-three-cycle 10 μJ pulses and further use them to generate high-order harmonics in a ZnO crystal. In agreement with theoretical predictions, we observe a boost in the harmonic yield by a factor of two, and spectral broadening of above-gap harmonics, compared to longer driving pulses. The enhanced yield results from an increase in the intensity for the self-compressed pulses.

  7. High-harmonic generation in ZnO driven by self-compressed mid-infrared pulses

    DOE PAGES

    Gholam-Mirzaei, Shima; Beetar, John E.; Chacon, Alexis; ...

    2018-02-20

    Progress in attosecond science has relied on advancements in few-cycle pulse generation technology and its application to high-order harmonic generation. Traditionally, self-phase modulation in bulk solids has been used for the compression of moderate-energy pulses, additionally exhibiting favorable dispersion properties for mid-infrared (mid-IR) pulses. For this study, we use the anomalous dispersion of Y3Al5O12 (YAG) to self-compress many-cycle pulses from a 50 kHz mid-IR OPA down to produce sub-three-cycle 10 μJ pulses and further use them to generate high-order harmonics in a ZnO crystal. In agreement with theoretical predictions, we observe a boost in the harmonic yield by a factor of two, and spectral broadening of above-gap harmonics, compared to longer driving pulses. The enhanced yield results from an increase in the intensity for the self-compressed pulses.

  8. Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit

    NASA Astrophysics Data System (ADS)

    Chan, Chung

    2017-10-01

    The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achieving scheme and some simpler conjectures to prove.

  9. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868

  10. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results show that our data reuse framework achieves tens-of-times acceleration in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.

  11. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  12. n-Gram-Based Text Compression.

    PubMed

    Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly, based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams, achieving dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.

  13. n-Gram-Based Text Compression

    PubMed Central

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly, based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams, achieving dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708
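
    The greedy longest-n-gram encoding step that both versions of this abstract describe can be sketched as follows; the tiny dictionary, the use of plain indices instead of 2-4-byte codes, and whitespace tokenization are illustrative assumptions.

    ```python
    # Hedged sketch of dictionary-based n-gram encoding: at each position,
    # greedily match the longest known n-gram (up to 5-grams) and emit its
    # dictionary index; unknown words fall back to literals.
    def encode(words, dictionary, max_n=5):
        out, i = [], 0
        while i < len(words):
            for n in range(min(max_n, len(words) - i), 0, -1):  # longest first
                gram = " ".join(words[i:i + n])
                if gram in dictionary:
                    out.append(dictionary[gram])
                    i += n
                    break
            else:
                out.append(words[i])        # unknown word: emit literal
                i += 1
        return out

    dictionary = {"xin chao": 0, "xin": 1, "chao ban": 2, "ban": 3}
    print(encode("xin chao ban".split(), dictionary))   # [0, 3]
    ```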

  14. Compressive sensing for single-shot two-dimensional coherent spectroscopy

    NASA Astrophysics Data System (ADS)

    Harel, E.; Spencer, A.; Spokoyny, B.

    2017-02-01

    In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encodes the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection and single-shot and phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE) eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response and signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.

  15. An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ting, Samuel T.

    …cine images. First, algorithmic and implementational approaches are proposed for reducing the computational time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient implementation directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. In this work, it is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques from both a theoretical and practical view is provided, both of which may be of interest to the reader in terms of future work.
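
    FISTA, named above, is the ISTA proximal-gradient step plus a momentum extrapolation. A minimal dense-matrix sketch follows; the actual MRI reconstruction operates on undersampled k-space data with patch-based sparsity models, so the random matrix and sizes here are toy assumptions.

    ```python
    # Hedged FISTA sketch: soft-thresholded gradient step plus a momentum
    # (extrapolation) step, which is what accelerates convergence over ISTA.
    import numpy as np

    def fista(A, y, lam=0.01, iters=200):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(iters):
            g = z - (A.T @ (A @ z - y)) / L
            x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 128)) / np.sqrt(60)
    x_true = np.zeros(128); x_true[[5, 40, 100]] = [1.0, -1.0, 0.5]
    x_hat = fista(A, A @ x_true)
    print(np.argsort(np.abs(x_hat))[-3:])        # recovered support
    ```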

  16. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  17. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
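
    The "lossy plus residual" principle with a guaranteed maximum absolute error can be shown in miniature: quantize the residual of any lossy approximation with a step of 2*eps+1. The toy integer signals and the stand-in lossy layer below are assumptions; the error-bound logic is the point.

    ```python
    # Hedged sketch of near-lossless "lossy plus residual" coding: the
    # uniformly quantized residual bounds reconstruction error by eps.
    import numpy as np

    def near_lossless(signal, approx, eps=2):
        """Return quantized residual q and reconstruction with |error| <= eps."""
        residual = signal - approx
        step = 2 * eps + 1
        q = np.round(residual / step).astype(np.int64)   # entropy-code these
        recon = approx + q * step
        assert np.max(np.abs(signal - recon)) <= eps
        return q, recon

    sig   = np.array([120, 118, 125, 130, 129], dtype=np.int64)
    lossy = np.array([121, 117, 121, 133, 129], dtype=np.int64)  # e.g. a
    q, recon = near_lossless(sig, lossy, eps=2)  # tensor-model approximation
    print(q, recon)
    ```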

  18. Time-resolved compression of a capsule with a cone to high density for fast-ignition laser fusion

    DOE PAGES

    Theobald, W.; Solodov, A. A.; Stoeckl, C.; ...

    2014-12-12

    The advent of high-intensity lasers enables us to recreate and study the behaviour of matter under the extreme densities and pressures that exist in many astrophysical objects. It may also enable us to develop a power source based on laser-driven nuclear fusion. Achieving such conditions usually requires a target that is highly uniform and spherically symmetric. Here we show that it is possible to generate high densities in a so-called fast-ignition target that consists of a thin shell whose spherical symmetry is interrupted by the inclusion of a metal cone. Using picosecond-time-resolved X-ray radiography, we show that we can achieve areal densities in excess of 300 mg cm−2 with a nanosecond-duration compression pulse, the highest areal density ever reported for a cone-in-shell target. Such densities are high enough to stop MeV electrons, which is necessary for igniting the fuel with a subsequent picosecond pulse focused into the resulting plasma.

  19. Compressed air-assisted solvent extraction (CASX) for metal removal.

    PubMed

    Li, Chi-Wang; Chen, Yi-Ming; Hsiao, Shin-Tien

    2008-03-01

    A novel process, compressed air-assisted solvent extraction (CASX), was developed to generate micro-sized solvent-coated air bubbles (MSAB) for metal extraction. By pressurizing solvent with compressed air and then releasing the air-oversaturated solvent into metal-containing wastewater, MSAB are generated instantaneously. The enormous surface area of the MSAB makes the extraction process extremely fast and achieves a very high aqueous/solvent weight ratio (A/S ratio). The CASX process completely removed Cr(VI) from acidic electroplating wastewater at an A/S ratio of 115 and an extraction time of less than 10 s. When synthetic wastewater containing 50 mg/L of Cd(II) was treated, A/S ratios higher than 714 and 1190 could be achieved using solvent with extractant/diluent weight ratios of 1:1 and 5:1, respectively. In addition, MSAB have very different physical properties, such as size and density, compared with emulsified solvent droplets, making separation and recovery of solvent from the treated effluent very easy.

  20. Disk-based compression of data from genome sequencing.

    PubMed

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. The more interesting solutions to this problem are disk-based; the best of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
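
    The minimizer idea the abstract relies on is simple enough to sketch: overlapping reads tend to share their lexicographically smallest k-mer, so bucketing reads by minimizer localizes their redundancy and makes it compressible. This hedged sketch uses whole-read minimizers and toy reads; the published tool's actual binning and parameters differ.

    ```python
    # Hedged minimizer sketch: bucket reads by their smallest k-mer.
    def minimizer(read, k=8):
        """Lexicographically smallest k-mer of a read."""
        return min(read[i:i + k] for i in range(len(read) - k + 1))

    reads = ["ACGTACGTGGTAC", "GGACGTACGTGGT", "TTTTGGGGCCCCA"]
    buckets = {}
    for r in reads:
        buckets.setdefault(minimizer(r), []).append(r)

    for key, group in buckets.items():
        print(key, group)   # the two overlapping reads share a bucket
    ```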

  1. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    NASA Astrophysics Data System (ADS)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess a significant capacity for permanent densification under pressure at different temperatures, forming high-density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high-temperature (1100 °C) and high-pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former compared to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  2. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, the memory requirements of current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is, for human genomes, on par with the best previous algorithms in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
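
    Referential compression as described here stores only differences against a reference. A brute-force toy sketch of the representation, match (position, length) pairs plus literals, follows; real tools use k-mer indexes and careful bit-level encodings, and the min_match value and sequences below are invented.

    ```python
    # Hedged sketch of referential compression: encode a target as matches
    # against a reference plus literal fallbacks. Brute-force search, for
    # clarity only; production tools index the reference instead.
    def ref_compress(target, ref, min_match=4):
        ops, i = [], 0
        while i < len(target):
            best_pos, best_len = -1, 0
            for j in range(len(ref)):                  # brute-force search
                l = 0
                while (i + l < len(target) and j + l < len(ref)
                       and target[i + l] == ref[j + l]):
                    l += 1
                if l > best_len:
                    best_pos, best_len = j, l
            if best_len >= min_match:
                ops.append(("match", best_pos, best_len))
                i += best_len
            else:
                ops.append(("literal", target[i]))
                i += 1
        return ops

    ref    = "ACGTACGTTTGACCA"
    target = "ACGTTTGACCAAGGA"
    print(ref_compress(target, ref))
    # [('match', 4, 11), ('literal', 'A'), ('literal', 'G'), ...]
    ```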

  3. Gas turbine power plant with supersonic shock compression ramps

    DOEpatents

    Lawlor, Shawn P [Bellevue, WA; Novaresi, Mark A [San Diego, CA; Cornelius, Charles C [Kirkland, WA

    2008-10-14

    A gas turbine engine. The engine is based on the use of a gas turbine driven rotor having a compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which compresses inlet gas against a stationary sidewall. The supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the rim of the rotor, the strakes, and a stationary external housing. Part load efficiency is enhanced by use of a lean pre-mix system, a pre-swirl compressor, and a bypass stream to bleed a portion of the gas after passing through the pre-swirl compressor to the combustion gas outlet. Use of a stationary low NOx combustor provides excellent emissions results.

  4. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  5. [Compressive and bend strength of experimental admixed high copper alloys].

    PubMed

    Sourai, P; Paximada, H; Lagouvardos, P; Douvitsas, G

    1988-01-01

    Mixed alloys for dental amalgams have been used mainly in the form of admixed alloys, where eutectic spheres are blended with conventional flakes. In the present study, the compressive strength, bend strength and microstructure of two high-copper alloys (Tytin, Ana-2000) are compared with those of three experimental alloys prepared from the two high-copper alloys by mixing them in proportions of 3:1, 1:1 and 1:3 by weight. The results revealed that the experimental alloys inherited high early and final strength values without any significant change in their microstructure.

  6. A Study of the Efficiency of High-strength, Steel, Cellular-core Sandwich Plates in Compression

    NASA Technical Reports Server (NTRS)

    Johnson, Aldie E , Jr; Semonian, Joseph W

    1956-01-01

    Structural efficiency curves are presented for high-strength, stainless-steel, cellular-core sandwich plates of various proportions subjected to compressive end loads for temperatures of 80 F and 600 F. Optimum proportions of sandwich plates for any value of the compressive loading intensity can be determined from the curves. The efficiency of steel sandwich plates of optimum proportions is compared with the efficiency of solid plates of high-strength steel and aluminum and titanium alloys at the two temperatures.

  7. Stability of retained austenite in high carbon steel under compressive stress: an investigation from macro to nano scale

    PubMed Central

    Hossain, R.; Pahlevani, F.; Quadir, M. Z.; Sahajwalla, V.

    2016-01-01

    Although high-carbon martensitic steels are well known for their industrial utility in high-abrasion and extreme operating environments, owing to their hardness and strength, the compressive stability of their retained austenite, and the implications for the steels' performance and potential uses, is not well understood. This article describes the first investigation at both the macro and nano scale of the compressive stability of retained austenite in high-carbon martensitic steel. Using a combination of standard compression testing, X-ray diffraction, optical microstructure, electron backscattering diffraction imaging, electron probe micro-analysis, nano-indentation and micro-indentation measurements, we determined the mechanical stability of retained austenite and martensite in high-carbon steel under compressive stress and identified the phase transformation mechanism, from the macro to the nano level. We found that at the early stage of plastic deformation hexagonal close-packed (HCP) martensite formation dominates, while higher compression loads trigger body-centred tetragonal (BCT) martensite formation. The combination of this phase transformation and strain hardening led to an increase in the hardness of high-carbon steel of around 30%. This comprehensive characterisation of stress-induced phase transformation could enable the precise control of the microstructures of high-carbon martensitic steels, and hence their properties. PMID:27725722

  8. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping and feature-based image segmentation to determine regions of similar entropy, followed by high-order arithmetic coding, to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
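
    The entropy-guided segmentation step can be sketched directly: compute the zeroth-order entropy of each image block and route the block to a coder accordingly. The block size, threshold, and coder names below are invented for illustration and are not the paper's rule base.

    ```python
    # Hedged sketch: per-block zeroth-order entropy drives coder selection.
    import numpy as np

    def block_entropy(block):
        """Zeroth-order entropy in bits/pixel of a uint8 block."""
        counts = np.bincount(block.ravel(), minlength=256)
        p = counts[counts > 0] / block.size
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    img[:32, :32] = 7                     # a flat, low-entropy region

    for r in range(0, 64, 32):
        for c in range(0, 64, 32):
            h = block_entropy(img[r:r + 32, c:c + 32])
            coder = "run-length" if h < 2.0 else "arithmetic"
            print((r, c), round(h, 2), "->", coder)
    ```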

  9. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.

  10. Continuous manufacturing of extended release tablets via powder mixing and direct compression.

    PubMed

    Ervasti, Tuomas; Simonaho, Simo-Pekka; Ketolainen, Jarkko; Forsberg, Peter; Fransson, Magnus; Wikström, Håkan; Folestad, Staffan; Lakio, Satu; Tajarobi, Pirjo; Abrahmsén-Alami, Susanna

    2015-11-10

    The aim of the current work was to explore continuous dry powder mixing and direct compression for the manufacturing of extended release (ER) matrix tablets. The study was carried out with a challenging formulation design comprising ibuprofen compositions with varying particle size and a relatively low amount of the matrix former hydroxypropyl methylcellulose (HPMC). Standard grade HPMC (CR) was compared to a recently developed direct compressible grade (DC2). The work demonstrates that ER tablets with the desired quality attributes could be manufactured via integrated continuous mixing and direct compression. The most robust tablet quality (weight, assay, tensile strength) was obtained using high mixer speed, large particle size ibuprofen and HPMC DC2, due to good powder flow. At low mixer speed it was more difficult to achieve high quality low dose tablets. Notably, with HPMC DC2 the processing conditions had a significant effect on drug release. A longer processing time and/or faster mixer speed was needed to achieve robust release with compositions containing DC2 compared with those containing CR. This work confirms the importance of balancing process parameters and material properties to achieve consistent product quality. Adaptive control is also shown to be a pivotal means of controlling continuous manufacturing systems. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Lossless compression of AVIRIS data: Comparison of methods and instrument constraints

    NASA Technical Reports Server (NTRS)

    Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.

    1992-01-01

    A family of lossless compression methods, allowing exact image reconstruction, is evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of order 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits caused by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. Firstly, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Secondly, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so that the data in adjacent channels are generally highly correlated. Thirdly, the noise characteristics of the AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), and these NERs show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
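
    Since the abstract attributes the compression limit to the entropy of DPCM residuals, a short sketch may help: the code below differences each spectral band against the previous one and measures the residual entropy, the lower bound on the bit rate of any lossless coder applied afterwards. The array layout is an assumption.

      import numpy as np

      def spectral_dpcm_entropy(cube):
          # cube: (bands, rows, cols) radiance cube, e.g. 12-bit data
          # packed into int16 words as for AVIRIS.
          resid = np.diff(cube.astype(np.int32), axis=0)
          _, counts = np.unique(resid, return_counts=True)
          p = counts / counts.sum()
          return -(p * np.log2(p)).sum()   # bits/pixel after prediction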

  12. Online High School Achievement versus Traditional High School Achievement

    ERIC Educational Resources Information Center

    Blohm, Katherine E.

    2017-01-01

    The following study examined the question of student achievement in online charter schools and how the achievement scores of students at online charter schools compare to achievement scores of students at traditional schools. Arizona has seen explosive growth in charter schools and online charter schools. A study comparing how these two types of…

  13. Compression and information recovery in ptychography

    NASA Astrophysics Data System (ADS)

    Loetgering, L.; Treffer, D.; Wilhein, T.

    2018-04-01

    Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.

  14. [Neurovascular compression of the medulla oblongata: a rare cause of secondary hypertension].

    PubMed

    Nádas, Judit; Czirják, Sándor; Igaz, Péter; Vörös, Erika; Jermendy, György; Rácz, Károly; Tóth, Miklós

    2014-05-25

    Compression of the rostral ventrolateral medulla oblongata is one of the rarely identified causes of refractory hypertension. In patients with severe, intractable hypertension caused by neurovascular compression, neurosurgical decompression should be considered. The authors present the history of a 20-year-old man with severe hypertension. After excluding other possible causes of secondary hypertension, the underlying cause of his high blood pressure was identified by the demonstration of neurovascular compression on magnetic resonance angiography and increased sympathetic activity (sinus tachycardia) during the high blood pressure episodes. Due to frequent episodes of hypertensive crises, surgical decompression was recommended, which was performed with the placement of an isograft between the brainstem and the left vertebral artery. In the first six months after the operation, the patient's blood pressure could be kept in the normal range with significantly reduced doses of antihypertensive medication. Repeat magnetic resonance angiography confirmed the cessation of brainstem compression. After six months, increased blood pressure returned periodically, but to a smaller extent and less frequently. Based on the result of magnetic resonance angiography performed 22 months after surgery, re-operation was considered. According to previous literature data, long-term success can be achieved in only one third of patients after surgical decompression. In the majority of patients, surgery results in a significant decrease of blood pressure, increased efficiency of antihypertensive therapy, and a decrease in the frequency of episodes of highly increased blood pressure. Thus, a significant improvement in the patient's quality of life can be achieved. The case of this patient is an example of the latter scenario.

  15. Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.

    PubMed

    Corcoran, Timothy C

    2018-03-01

    In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
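
    A minimal sketch of the underlying mechanism may help. Rows of a Hadamard matrix are (unordered) Walsh functions, and applying them as filter functions reduces a full spectrum to a handful of scores; here single Walsh rows stand in for the paper's optimized fourfold combinations, and the channel count and row choices are illustrative assumptions.

      import numpy as np
      from scipy.linalg import hadamard

      def walsh_filters(n_channels, rows):
          # Each row of the Hadamard matrix is a +/-1 Walsh function;
          # the rows are mutually orthogonal.
          return hadamard(n_channels)[rows]

      def compressive_scores(filters, spectrum):
          # Applied inside the spectrometer, only these dot products
          # (scores) are recorded -- not the full spectrum.
          return filters @ spectrum

      # Hypothetical use: 4 filters on a 64-channel spectrum.
      spectrum = np.random.rand(64)
      scores = compressive_scores(walsh_filters(64, [1, 2, 4, 8]), spectrum)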

  16. Video compression via log polar mapping

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.

    1990-09-01

    A three-stage process for compressing real-time color imagery by factors in the range of 1600-to-1 is proposed for 'remote driving'. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM, which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
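
    The stage-1 mapping is straightforward to prototype. The sketch below resamples an image on a log-polar grid centred on the gaze point, so sampling density falls off exponentially with eccentricity as in human acuity; the grid sizes and nearest-neighbour sampling are illustrative simplifications.

      import numpy as np

      def log_polar_sample(img, cx, cy, n_rings=64, n_wedges=128):
          # Ring radii grow exponentially from 1 pixel to the corner
          # distance, mimicking the acuity gradient of the retina.
          h, w = img.shape[:2]
          r_max = np.hypot(max(cx, w - cx), max(cy, h - cy))
          rings = np.exp(np.linspace(0.0, np.log(r_max), n_rings))
          wedges = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
          rr, tt = np.meshgrid(rings, wedges, indexing='ij')
          xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
          ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
          return img[ys, xs]   # (n_rings, n_wedges) foveated samples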

  17. Stress management on underlying GaN-based epitaxial films: A new vision for achieving high-performance LEDs on Si substrates

    NASA Astrophysics Data System (ADS)

    Lin, Zhiting; Wang, Haiyan; Lin, Yunhao; Wang, Wenliang; Li, Guoqiang

    2017-11-01

    High-performance blue GaN-based light-emitting diodes (LEDs) on Si substrates have been achieved by applying a suitable tensile stress in the underlying n-GaN. It is demonstrated by simulation that tensile stress in the underlying n-GaN alleviates the negative effect of polarization electric fields on the multiple quantum wells, but an excessively large tensile stress severely bends the band profile of the electron blocking layer, resulting in carrier loss and large electric resistance. A medium level of tensile stress, which ranges from 4 to 5 GPa, can maximally improve the luminous intensity and decrease the forward voltage of LEDs on Si substrates. The LED with the optimal tensile stress shows the largest simulated luminous intensity and the smallest simulated voltage at 35 A/cm2. Compared to the LEDs with a compressive stress of -3 GPa and a large tensile stress of 8 GPa, the improvement of luminous intensity can reach 102% and 28.34%, respectively. Subsequent experimental results provide evidence of the superiority of applying tensile stress in n-GaN. The experimental light output power of the LED with a tensile stress of 1.03 GPa is 528 mW, achieving a significant improvement of 19.4% at 35 A/cm2 in comparison to the reference LED with a compressive stress of -0.63 GPa. The forward voltage of this LED is 3.08 V, which is smaller than the 3.11 V of the reference LED. This methodology of stress management on underlying GaN-based epitaxial films shows a bright future for achieving high-performance LED devices on Si substrates.

  18. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing demand of users for the extraction of remote sensing image information, it is very urgent to significantly enhance the whole system's imaging quality and imaging ability by using an integrated design to achieve a compact structure, light weight and higher attitude maneuver ability. At the present stage, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large, which makes them unable to meet the requirements of a high-mobility remote sensing camera. In this paper, according to the technical requirements of the high-mobility remote sensing camera, a space-borne integrated signal processing and compression circuit is designed by drawing on a variety of technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology and image compression based on special-purpose chips. This circuit lays a solid foundation for the research of the high-mobility remote sensing camera.

  19. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
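
    To make the formulation concrete, here is a minimal dynamic-programming sketch in the spirit of the paper's exact method: it selects m samples (endpoints included) that minimise the total squared error when the signal is reconstructed by linear interpolation between them. It is illustrative only, not the authors' network-model algorithm.

      import numpy as np

      def interp_error(x, i, j):
          # Squared error of linearly interpolating x between i and j.
          line = np.linspace(x[i], x[j], j - i + 1)
          return ((x[i:j+1] - line) ** 2).sum()

      def optimal_samples(x, m):
          # cost[j, k]: best error covering x[0..j] with k+1 kept samples.
          n = len(x)
          cost = np.full((n, m), np.inf)
          prev = np.zeros((n, m), dtype=int)
          cost[0, 0] = 0.0
          for k in range(1, m):
              for j in range(k, n):
                  for i in range(k - 1, j):
                      c = cost[i, k - 1] + interp_error(x, i, j)
                      if c < cost[j, k]:
                          cost[j, k], prev[j, k] = c, i
          idx, j = [n - 1], n - 1        # backtrack from the last sample
          for k in range(m - 1, 0, -1):
              j = prev[j, k]
              idx.append(j)
          return idx[::-1]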

  20. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on the hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that the watermark survives lossy compression. In this paper, we focus on oblivious watermarking, assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
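
    The abstract's scheme is HVS-adaptive and beyond a short sketch, but the core idea of aligning embedding with the expected JPEG quantiser can be shown with simple quantisation index modulation: one bit is forced onto the parity of a mid-frequency DCT coefficient, measured in units of the anticipated quantisation step, so it survives JPEG at that quality or milder. The coefficient position and step value are illustrative assumptions, not the authors' method.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed_bit(block, bit, q_step, coef=(2, 1)):
          # block: 8x8 pixel block; q_step: JPEG quantisation step
          # expected at the target quality factor for this coefficient.
          C = dctn(block.astype(float), norm='ortho')
          k = int(np.round(C[coef] / q_step))
          if k % 2 != bit:          # force even/odd parity = bit value
              k += 1
          C[coef] = k * q_step
          return idctn(C, norm='ortho')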

  1. Equation of State for Shock Compression of High Distension Solids

    NASA Astrophysics Data System (ADS)

    Grady, Dennis

    2013-06-01

    Shock Hugoniot data for full-density and porous compounds of boron carbide, silicon dioxide, tantalum pentoxide, uranium dioxide and playa alluvium are investigated for the purpose of equation-of-state representation of intense shock compression. Complications of multivalued Hugoniot behavior characteristic of highly distended solids are addressed through the application of enthalpy-based equations of state of the form originally proposed by Rice and Walsh in the late 1950's. Additivity of cold and thermal pressure intrinsic to the Mie-Gruneisen EOS framework is replaced by isobaric additive functions of the cold and thermal specific volume components in the enthalpy-based formulation. Additionally, experimental evidence supports acceleration of shock-induced phase transformation on the Hugoniot with increasing levels of initial distention for silicon dioxide, uranium dioxide and possibly boron carbide. Methods for addressing this experimentally observed facet of the shock compression are introduced into the EOS model.

  2. Compressive and flexural strength of high strength phase change mortar

    NASA Astrophysics Data System (ADS)

    Qiao, Qingyao; Fang, Changle

    2018-04-01

    High-strength cement produces a great deal of hydration heat when hydrated, which usually leads to thermal cracks. Phase change materials (PCM) are very promising thermal storage materials. Utilizing PCM can help reduce the hydration heat. Research shows that applying a suitable amount of PCM has a significant effect on improving the compressive strength of cement mortar, and can also improve the flexural strength to some extent.

  3. High load operation in a homogeneous charge compression ignition engine

    DOEpatents

    Duffy, Kevin P [Metamora, IL; Kieser, Andrew J [Morton, IL; Liechty, Michael P [Chillicothe, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Hergart, Carl-Anders [Peoria, IL

    2008-12-23

    A homogeneous charge compression ignition engine is set up by first identifying combinations of compression ratio and exhaust gas percentages for each speed and load across the engine's operating range. These identified ratios and exhaust gas percentages can then be converted into geometric compression ratio controller settings and exhaust gas recirculation rate controller settings that are mapped against speed and load, and made available to the electronic

  4. Attitudes and Opinions from the Nation's High Achieving Teens. 18th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Educational Communications, Inc., Lake Forest, IL.

    This document contains factsheets and news releases which cite findings from a national survey of 1,985 high achieving high school students. Factsheets describe the Who's Who Among American High School Students recognition and service program for high school students and explain the Who's Who survey. A summary report of this eighteenth annual…

  5. Development of High Speed Imaging and Analysis Techniques for Compressible Dynamic Stall

    NASA Technical Reports Server (NTRS)

    Chandrasekhara, M. S.; Carr, L. W.; Wilder, M. C.; Davis, Sanford S. (Technical Monitor)

    1996-01-01

    parameters on the dynamic stall process. When interferograms can be captured in real time, real-time mapping of a developing unsteady flow such as dynamic stall becomes a possibility. This has been achieved in the present case through the use of a high-speed drum camera combined with electronic circuitry, which has resulted in a series of interferograms obtained during a single cycle of dynamic stall; images obtained at the rate of 20 kHz will be presented as a part of the formal presentation. Interferometry has been available for a long time; however, most of its use has been limited to visualization. The present research has focused on the use of interferograms for quantitative mapping of the flow over oscillating airfoils. Instantaneous pressure distributions can now be obtained semi-automatically, making practical the analysis of the thousands of interferograms that are produced in this research. A review of the techniques that have been developed as part of this research effort will be presented in the final paper.

  6. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  7. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  8. Continuous direct compression as manufacturing platform for sustained release tablets.

    PubMed

    Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C

    2017-03-15

    This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. Inline NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e. feeder fluctuation dampening) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing with tablet quality is critical for Methocel CR. However, reformulating with the directly compressible Methocel DC2 as matrix former improved tablet quality vastly. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability driven by a higher compressibility, as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. Hence controlling thickness during continuous manufacturing of

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ-measurement versus compression-parameter data from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) grayscale images, showed very promising results.
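
    A minimal sketch of the three-step scenario, assuming Pillow's JPEG codec and PSNR as the IQ metric (the quality grid, quadratic regression and root selection are illustrative choices, not the paper's models):

      import io
      import numpy as np
      from PIL import Image

      def psnr(a, b):
          mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
          return 10 * np.log10(255.0 ** 2 / mse)

      def fit_quality_curve(img_array, qualities=(20, 40, 60, 80, 95)):
          # Steps 1-2: compress at several settings, record IQ, and
          # regress IQ against the compression parameter.
          pts = []
          for q in qualities:
              buf = io.BytesIO()
              Image.fromarray(img_array).save(buf, format='JPEG', quality=q)
              buf.seek(0)
              dec = np.asarray(Image.open(buf))
              pts.append((q, psnr(img_array, dec)))
          qs, iqs = map(np.array, zip(*pts))
          return np.polyfit(qs, iqs, 2)     # quadratic model IQ(quality)

      def quality_for_target(coeffs, target_iq):
          # Step 3: invert the regression to pick the JPEG quality
          # expected to achieve the requested IQ.
          a, b, c = coeffs
          roots = np.roots([a, b, c - target_iq])
          feasible = roots[np.isreal(roots)].real
          return int(np.clip(feasible.max(), 1, 100))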

  10. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  11. Hyperspectral IASI L1C Data Compression.

    PubMed

    García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan

    2017-06-16

    The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecasting and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 Gigabytes per day). Efficient compression techniques are required for both transmission and storage of such huge data volumes. This study reviews the performance of several state-of-the-art coding standards and techniques for IASI L1C data compression. The discussion embraces lossless, near-lossless and lossy compression. Several spectral transforms, essential to achieving improved coding performance due to the high spectral redundancy inherent in IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each of IASI-A and IASI-B from July 2013 to June 2014). Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.

  12. Parallel hyperspectral compressive sensing method on GPU

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.

    2015-10-01

    Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.

  13. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
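
    The quantization-with-dithering step is compact enough to sketch. Below, floating-point pixels are converted to scaled integers with subtractive dithering: a seeded uniform offset is added before rounding and subtracted on restore, which bounds the error to half a quantum and decorrelates it from the signal. The Rice coding stage is omitted, and the seeding scheme is an illustrative assumption.

      import numpy as np

      def quantize_with_dither(data, q, seed=0):
          dither = np.random.default_rng(seed).random(data.shape)
          return np.floor(data / q + dither).astype(np.int32)

      def dequantize(ints, q, seed=0):
          # Subtract the same dither; |restored - original| <= q/2.
          dither = np.random.default_rng(seed).random(ints.shape)
          return (ints - dither + 0.5) * q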

  14. Compressed Sensing for Resolution Enhancement of Hyperpolarized 13C Flyback 3D-MRSI

    PubMed Central

    Hu, Simon; Lustig, Michael; Chen, Albert P.; Crane, Jason; Kerr, Adam; Kelley, Douglas A.C.; Hurd, Ralph; Kurhanewicz, John; Nelson, Sarah J.; Pauly, John M.; Vigneron, Daniel B.

    2008-01-01

    High polarization of nuclear spins in liquid state through dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at very high signal to noise, allowing for rapid assessment of tissue metabolism. The abundant SNR afforded by this hyperpolarization technique makes high resolution 13C 3D-MRSI feasible. However, the number of phase encodes that can be fit into the short acquisition time for hyperpolarized imaging limits spatial coverage and resolution. To take advantage of the high SNR available from hyperpolarization, we have applied compressed sensing to achieve a factor of 2 enhancement in spatial resolution without increasing acquisition time or decreasing coverage. In this paper, the design and testing of compressed sensing suited for a flyback 13C 3D-MRSI sequence are presented. The key to this design was the undersampling of spectral k-space using a novel blipped scheme, thus taking advantage of the considerable sparsity in typical hyperpolarized 13C spectra. Phantom tests validated the accuracy of the compressed sensing approach and initial mouse experiments demonstrated in vivo feasibility. PMID:18367420

  15. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  16. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study.

    PubMed

    Abdulghani, Hamza M; Al-Drees, Abdulmajeed A; Khalil, Mahmood S; Ahmad, Farah; Ponnamperuma, Gominda G; Amin, Zubair

    2014-04-01

    Medical students' academic achievement is affected by many factors such as motivational beliefs and emotions. Although students with high intellectual capacity are selected to study medicine, their academic performance varies widely. The aim of this study is to explore the high achieving students' perceptions of factors contributing to academic achievement. Focus group discussions (FGD) were carried out with 10 male and 9 female high achieving (scores more than 85% in all tests) students, from the second, third, fourth and fifth academic years. During the FGDs, the students were encouraged to reflect on their learning strategies and activities. The discussion was audio-recorded, transcribed and analysed qualitatively. Factors influencing high academic achievement include: attendance to lectures, early revision, prioritization of learning needs, deep learning, learning in small groups, mind mapping, learning in skills lab, learning with patients, learning from mistakes, time management, and family support. Internal motivation and expected examination results are important drivers of high academic performance. Management of non-academic issues like sleep deprivation, homesickness, language barriers, and stress is also important for academic success. Addressing these factors, which might be unique for a given student community, in a systematic manner would be helpful to improve students' performance.

  17. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  18. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
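
    The coded-aperture measurement model is simple to state in code. The sketch below shows only the forward model, in which T sub-frames are masked by different random binary patterns and summed into the single frame the camera reads out; recovering the sub-frames is the statistical compressive-sensing inversion, which is omitted here. Mask statistics and shapes are illustrative assumptions.

      import numpy as np

      def coded_aperture_frame(sub_frames, seed=0):
          # sub_frames: (T, h, w) scene at T sub-frame times.
          T, h, w = sub_frames.shape
          masks = np.random.default_rng(seed).integers(0, 2, (T, h, w))
          frame = (masks * sub_frames).sum(axis=0)
          return frame, masks   # the inversion needs the masks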

  19. Information content exploitation of imaging spectrometer's images for lossless compression

    NASA Astrophysics Data System (ADS)

    Wang, Jianyu; Zhu, Zhenyu; Lin, Kan

    1996-11-01

    Imaging spectrometers, such as MAIS, produce a tremendous volume of image data, with up to a 5.12 Mbps raw data rate, which urgently needs a real-time, efficient and reversible compression implementation. Between a lossy scheme with a high compression ratio and a lossless scheme with high fidelity, we must make our choice based on the particular information content analysis of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression of the imaging spectrometer MAIS, with an entropy and autocorrelation study of the hyperspectral images. First, the statistical information in an actual MAIS image, captured in Marble Bar, Australia, is measured with its entropy, conditional entropy, mutual information and autocorrelation coefficients in both spatial dimensions and the spectral dimension. With these careful analyses, it is shown that there is high redundancy in the spatial dimensions, but the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is attributed to the instrument's discrepancies in detector response and channel amplification across spectral bands. To restore its natural correlation, we preprocess the signal in advance. There are two methods to accomplish this requirement: onboard radiation calibration and normalization. A better result can be achieved by the former. After preprocessing, the spectral correlation increases so much that it contributes substantial redundancy in addition to the spatial correlation. At last, an on-board hardware implementation for the lossless compression is presented with an ideal result.
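
    The statistics driving these conclusions are easy to reproduce on any hyperspectral cube. The sketch below computes per-band entropy and the correlation between adjacent spectral bands; low adjacent-band correlation is the signature of the detector-response and amplification discrepancies that calibration or normalization must remove. The array layout is an assumption.

      import numpy as np

      def band_entropy(band):
          _, counts = np.unique(band, return_counts=True)
          p = counts / counts.sum()
          return -(p * np.log2(p)).sum()

      def adjacent_band_correlation(cube):
          # cube: (bands, rows, cols); returns one coefficient per
          # neighbouring band pair.
          flat = cube.reshape(cube.shape[0], -1).astype(float)
          return [np.corrcoef(flat[b], flat[b + 1])[0, 1]
                  for b in range(cube.shape[0] - 1)]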

  20. Highly compressible fluorescent particles for pressure sensing in liquids

    NASA Astrophysics Data System (ADS)

    Cellini, F.; Peterson, S. D.; Porfiri, M.

    2017-05-01

    Pressure sensing in liquids is important for engineering applications ranging from industrial processing to naval architecture. Here, we propose a pressure sensor based on highly compressible polydimethylsiloxane foam particles embedding fluorescent Nile Red molecules. The particles display pressure sensitivities as low as 0.0018 kPa⁻¹, which is on the same order of magnitude as the sensitivities reported in commercial pressure-sensitive paints for air flows. We envision the application of the proposed sensor in particle image velocimetry toward an improved understanding of flow kinetics in liquids.

  1. Nonlinear vibration analysis of the high-efficiency compressive-mode piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Yang, Zhengbao; Zu, Jean

    2015-04-01

    A power source is critical to achieving independent and autonomous operation of mobile electronic devices. Vibration-based energy harvesting has been extensively studied recently, and is recognized as a promising technology for realizing an inexhaustible power supply for small-scale electronics. Among various approaches, piezoelectric energy harvesting has gained the most attention due to its high conversion efficiency and simple configurations. However, most piezoelectric energy harvesters (PEHs) to date are based on bending-beam structures and can only generate limited power within a narrow working bandwidth. The insufficient electric output has greatly impeded their practical applications. In this paper, we present an innovative lead zirconate titanate (PZT) energy harvester, named the high-efficiency compressive-mode piezoelectric energy harvester (HC-PEH), to enhance the performance of energy harvesters. A theoretical model was developed analytically and solved numerically to study the nonlinear characteristics of the HC-PEH. The results estimated by the developed model agree well with the experimental data from the fabricated prototype. The HC-PEH shows strong nonlinear responses, a favorable working bandwidth and superior power output. Under a weak excitation of 0.3 g (g = 9.8 m/s²), a maximum power output of 30 mW is generated at 22 Hz, which is about ten times better than current energy harvesters. The HC-PEH demonstrates the capability of generating enough power for most wireless sensors.

  2. Perspectives of High-Achieving Women on Teaching

    ERIC Educational Resources Information Center

    Snodgrass, Helen

    2010-01-01

    High-achieving women are significantly less likely to enter the teaching profession than they were just 40 years ago. Why? While the social and economic reasons for this decline have been well documented in the literature, what is lacking is a discussion with high-achieving women, as they make their first career decisions, about their perceptions…

  3. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
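
    The first scheme is simple enough to sketch. Below, bit truncation stands in for the paper's colour-reduction step, and the coarsened colour plane is deflated together with the depth plane using zlib; the frame sizes are hypothetical, and random test data will of course compress far worse than real, structured frames.

      import zlib
      import numpy as np

      def compress_frame(color, depth, bits=4):
          # Coarsen the colour channels, then deflate colour + depth.
          reduced = (color >> (8 - bits)) << (8 - bits)
          return zlib.compress(reduced.tobytes() + depth.tobytes(), 6)

      # Hypothetical 320x240 RGB frame with 16-bit depth.
      color = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
      depth = np.random.randint(0, 4096, (240, 320), dtype=np.uint16)
      blob = compress_frame(color, depth)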

  4. Fracto-mechanoluminescent light emission of EuD4TEA-PDMS composites subjected to high strain-rate compressive loading

    NASA Astrophysics Data System (ADS)

    Ryu, Donghyeon; Castaño, Nicolas; Bhakta, Raj; Kimberley, Jamie

    2017-08-01

    The objective of this study is to understand the light emission characteristics of fracto-mechanoluminescent (FML) europium tetrakis(dibenzoylmethide)-triethylammonium (EuD4TEA) crystals under high strain-rate compressive loading. As a sensing material that can play a pivotal role in self-powered impact sensor technology, it is important to understand the transformative light emission characteristics of FML EuD4TEA crystals under such loading. First, EuD4TEA crystals were synthesized and embedded into polydimethylsiloxane (PDMS) elastomer to fabricate EuD4TEA-PDMS composite test specimens. Second, the prepared EuD4TEA-PDMS composites were tested using a modified Kolsky bar setup equipped with a high-speed camera. Third, the FML light emission was captured to yield 12-bit grayscale video footage, which was processed to quantify the FML light emission. Finally, quantitative parameters were generated by taking into account the pixel values and the population of pixels of the 12-bit grayscale images to represent the FML light intensity. The FML light intensity was correlated with the compressive strain and strain rate to understand the FML light emission characteristics under the high strain-rate compressive loading that can result from impact occurrences.

  5. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  6. A comparison of interface pressures of three compression bandage systems.

    PubMed

    Hanna, Richard; Bohbot, Serge; Connolly, Nicki

    To measure and compare the interface pressures achieved with two compression bandage systems - a four-layer system (4LB) and a two-layer short-stretch system (SSB) - with a new two-layer system (2LB), which uses an etalonnage (performance indicator) to help achieve the correct therapeutic pressure for healing venous leg ulcers - recommended as 40 mmHg. 32 nurses with experience of using compression bandages applied each of the three systems to a healthy female volunteer in a sitting position. The interface pressures and time taken to apply the systems were measured. A questionnaire regarding the concept of the new system and its application in comparison to the existing two systems was then completed by the nurses. The interface pressures achieved show that many nurses applied very high pressures with the 4LB (25% achieving pressures > 50 mmHg) whereas the majority of the nurses (75%) achieved a pressure of < 30 mmHg when using the SSB. A pressure of 30-50 mmHg was achieved with the new 2LB. The SSB took the least time to be applied (mean: 1 minute 50 seconds) with the 4LB the slowest (mean: 3 minutes 46 seconds). A mean time of 2 minutes 35 seconds was taken to apply the 2LB. Over 63% of the nurses felt the 2LB was very easy to apply. These results suggest that the 2LB achieves the required therapeutic pressure necessary for the management of venous leg ulcers, is easy to apply and may provide a suitable alternative to other multi-layer bandage systems.

  7. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) data, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression respectively with the MIT/BIH database. Using a Bluetooth transceiver, transmission power consumption is shown to be reduced to 18% of the uncompressed case for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
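
    The hybrid idea can be sketched in a few lines: a lossy stream is produced by coarse requantisation (standing in for the paper's lossy codec), and the residual needed for bit-exact restoration is kept in a separately entropy-coded stream (zlib standing in for the entropy coder). A sensor could transmit only the lossy stream by default and send the residual stream when lossless recovery is requested. The quantisation step q is an illustrative assumption.

      import zlib
      import numpy as np

      def hybrid_encode(ecg, q=16):
          # ecg: int32 sample array. Residuals lie in [-q/2, q/2].
          lossy = np.round(ecg / q).astype(np.int16)
          residual = (ecg - lossy.astype(np.int32) * q).astype(np.int16)
          return lossy.tobytes(), zlib.compress(residual.tobytes())

      def hybrid_decode(lossy_bytes, residual_blob, q=16):
          lossy = np.frombuffer(lossy_bytes, np.int16).astype(np.int32) * q
          resid = np.frombuffer(zlib.decompress(residual_blob), np.int16)
          return lossy + resid   # bit-exact original samples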

  8. Low-Temperature Combustion of High Octane Fuels in a Gasoline Compression Ignition Engine

    DOE PAGES

    Cung, Khanh Duc; Ciatti, Stephen Anthony; Tanov, Slavey; ...

    2017-12-21

    Gasoline Compression Ignition (GCI) has been shown to be one of the advanced combustion concepts that could potentially provide a pathway to cleaner and more efficient combustion engines. Fuel and air in GCI are not fully premixed, in contrast to homogeneous charge compression ignition (HCCI), which is a completely kinetically controlled combustion system. Therefore, the combustion phasing can be controlled by the time of injection, usually a post injection in a multiple-injection scheme, to mitigate combustion noise. Gasoline fuels are more difficult to ignite than diesel. The autoignition quality of gasoline can be indicated by the research octane number (RON). Fuels with high octane tend to have more resistance to auto-ignition, hence more time for fuel-air mixing. In this study, three fuels, namely Aromatic, Alkylate, and E30, with a similar RON value of 98 but different hydrocarbon compositions, were tested in a multi-cylinder engine under GCI combustion mode. Considerations of EGR, start of injection (SOI), and boost were investigated to study the sensitivity of dilution, local stratification, and reactivity of the charge, respectively, for each fuel. Combustion phasing was kept constant during the experiments despite the changes in ignition and combustion process before and after 50% of the fuel mass is burned. Emission characteristics at different levels of EGR and lambda were revealed for all fuels, with E30 having the lowest filter smoke number (FSN) and also being most sensitive to the change in dilution. Reasonably low combustion noise (< 90 dB) and stable combustion (COVIMEP < 3%) were maintained during the experiments. The second part of this paper contains visualization of the combustion process obtained from endoscope imaging for each fuel at selected conditions. Soot radiation signals from GCI combustion were strong during late injection, and also more intense at low EGR conditions. Furthermore, soot/temperature profiles indicated only the high

  9. Low-Temperature Combustion of High Octane Fuels in a Gasoline Compression Ignition Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cung, Khanh Duc; Ciatti, Stephen Anthony; Tanov, Slavey

    Gasoline Compression Ignition (GCI) has been shown to be one of the advanced combustion concepts that could potentially provide a pathway to cleaner and more efficient combustion engines. Fuel and air in GCI are not fully premixed, in contrast to homogeneous charge compression ignition (HCCI), which is a completely kinetically controlled combustion system. Therefore, the combustion phasing can be controlled by the time of injection, usually a post injection in a multiple-injection scheme, to mitigate combustion noise. Gasoline fuels are more difficult to ignite than diesel. The autoignition quality of gasoline can be indicated by the research octane number (RON). Fuels with high octane tend to have more resistance to auto-ignition, hence more time for fuel-air mixing. In this study, three fuels, namely Aromatic, Alkylate, and E30, with a similar RON value of 98 but different hydrocarbon compositions, were tested in a multi-cylinder engine under GCI combustion mode. Considerations of EGR, start of injection (SOI), and boost were investigated to study the sensitivity of dilution, local stratification, and reactivity of the charge, respectively, for each fuel. Combustion phasing was kept constant during the experiments despite the changes in ignition and combustion process before and after 50% of the fuel mass is burned. Emission characteristics at different levels of EGR and lambda were revealed for all fuels, with E30 having the lowest filter smoke number (FSN) and also being most sensitive to the change in dilution. Reasonably low combustion noise (< 90 dB) and stable combustion (COVIMEP < 3%) were maintained during the experiments. The second part of this paper contains visualization of the combustion process obtained from endoscope imaging for each fuel at selected conditions. Soot radiation signals from GCI combustion were strong during late injection, and also more intense at low EGR conditions. Furthermore, soot/temperature profiles indicated only the high

  10. Modeling of Compressive Strength for Self-Consolidating High-Strength Concrete Incorporating Palm Oil Fuel Ash

    PubMed Central

    Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin

    2016-01-01

    Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature are related to compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model was developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model; the remaining 30% were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns reached 0.001 and ≈100%, respectively. The compressive strength values predicted by the trained ANN model were very close to the experimental values. The coefficient of determination (R²) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the high degree of accuracy of the network. Furthermore, the predicted compressive strength was also found to be very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low, with mean values of 1.74 MPa and 3.13%, respectively, indicating that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520

  11. Modeling of Compressive Strength for Self-Consolidating High-Strength Concrete Incorporating Palm Oil Fuel Ash.

    PubMed

    Safiuddin, Md; Raman, Sudharshan N; Abdus Salam, Md; Jumaat, Mohd Zamin

    2016-05-20

    Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature are related to compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model was developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model; the remaining 30% were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns reached 0.001 and ≈100%, respectively. The compressive strength values predicted by the trained ANN model were very close to the experimental values. The coefficient of determination (R²) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the high degree of accuracy of the network. Furthermore, the predicted compressive strength was also found to be very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low, with mean values of 1.74 MPa and 3.13%, respectively, indicating that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN.
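
    As a rough illustration of the workflow these two records describe (a 70/30 train/test split, an RMSE-based stopping rule, and R² evaluation), a generic feed-forward regressor can be sketched as below. The features and strength data are synthetic stand-ins, not the authors' 20-mix dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical stand-in for the 20 SCHSC mix records: each row is a mix
# proportion vector (e.g., cement, POFA, water, aggregate dosages) and
# y is the measured compressive strength in MPa.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 5))
y = 40 + 30 * X[:, 0] - 10 * X[:, 2] + rng.normal(0, 1, 20)

# 70% of the data for training and 30% for testing, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

# Small feed-forward ANN; the tight tolerance stands in for the paper's
# RMSE-based stopping criterion (RMSE = 0.001).
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, tol=1e-6,
                     random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"test RMSE = {rmse:.2f} MPa, R² = {r2_score(y_te, pred):.4f}")
```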

  12. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cell with the characteristics of rather low raw materials cost, great potential for simple process and scalable production, and extreme high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaic (PV). In UCLA, we have realized an efficient pathway to achieve high performance pervoskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, interface engineering with respect to high performance solar cell, as well as the exploration of its applications beyond photovoltaics. These achievements include: 1) development of vapor assisted solution process (VASP) and moisture assisted solution process, which produces perovskite film with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defects property of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite film, which delivers 15 ~ 20% PCEs; 4) a novel integration of bulk heterojunction to perovskite solar cell to achieve better light harvest; 5) fabrication of inverted solar cell device with high efficiency and flexibility and 6) exploration the application of perovskite materials to photodetector. Further development in film, device architecture, and interfaces will lead to continuous improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  13. The application of compressed sensing to long-term acoustic emission-based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David

    2012-04-01

    The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
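
    As a rough sketch of the reconstruction step, the code below recovers a spike-sparse signal from random compressive measurements by iterative soft-thresholding (ISTA), one common way to approximate the ℓ1-norm minimization mentioned above. It is not the authors' implementation, and all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 512, 128, 5          # signal length, measurements, number of spikes

# Sparse "spike domain" signal: a few rare AE events.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 3, k)

# Random Gaussian measurement matrix (compressed sampling).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# ISTA: iterative soft-thresholding for min ||y - Ax||^2 + lam * ||x||_1.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
xh = np.zeros(n)
for _ in range(500):
    g = xh + (A.T @ (y - A @ xh)) / L      # gradient step
    xh = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("recovered support:", sorted(np.flatnonzero(np.abs(xh) > 0.1)))
print("true support:     ", sorted(np.flatnonzero(x)))
```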

  14. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, development, and performance of the pulse compression radar altimeter are described. The high-resolution breadboard system is designed to operate from an aircraft at 10 kft above the ocean and to accurately measure altitude, sea wave height, and sea reflectivity. The minicomputer-controlled Ku-band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns, or 9.25 cm. A second-order altitude tracker, aided by accelerometer inputs, is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and a sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.
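
    Pulse compression of this kind amounts to correlating the received chirp with a replica of the transmitted waveform. The numpy sketch below shows generic linear-FM matched filtering using the quoted 360 MHz bandwidth; the sample rate, pulse length, and delay are made up, and the breadboard's stretch processor and reflective-array line are not modeled:

```python
import numpy as np

fs = 1e9                        # complex sample rate (illustrative)
T = 5e-6                        # transmitted pulse length (illustrative)
B = 360e6                       # swept bandwidth, as quoted for the system
t = np.arange(0, T, 1 / fs)

# Linear-FM (chirp) pulse and a delayed, noise-free echo of it.
chirp = np.exp(1j * np.pi * (B / T) * t**2)
echo = np.concatenate([np.zeros(2000), chirp, np.zeros(2000)])

# Matched filtering compresses the T-long pulse to a ~1/B-wide mainlobe.
compressed = np.correlate(echo, chirp, mode="same")

print("time-bandwidth (compression) product:", B * T)    # 1800
print("peak at sample", np.argmax(np.abs(compressed)))   # marks the echo
```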

  15. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
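
    The XOR-leading-zero idea is easy to demonstrate: two mutually close float64 values share their high-order bits, so the XOR of their bit patterns starts with a long zero run that need not be stored. A minimal sketch of the measurement (not the paper's optimized codec):

```python
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    """Number of leading zero bits in the XOR of two float64 bit patterns."""
    ia = struct.unpack("<Q", struct.pack("<d", a))[0]
    ib = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

# Mutually close values XOR to many leading zeros (cheap to encode);
# a shifting offset that brings values closer lengthens that run.
print(xor_leading_zeros(101.37, 101.38))   # long zero run
print(xor_leading_zeros(101.37, -55.0))    # short zero run
```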

  16. Psychosocial Keys to African American Achievement? Examining the Relationship between Achievement and Psychosocial Variables in High Achieving African Americans

    ERIC Educational Resources Information Center

    Dixson, Dante D.; Roberson, Cyrell C. B.; Worrell, Frank C.

    2017-01-01

    Grit, growth mindset, ethnic identity, and other group orientation are four psychosocial variables that have been associated with academic achievement in adolescent populations. In a sample of 105 high achieving African American high school students (cumulative grade point average [GPA] > 3.0), we examined whether these four psychosocial…

  17. New experimental platform to study high density laser-compressed matter

    DOE PAGES

    Doppner, T.; LePape, S.; Ma, T.; ...

    2014-09-26

    We have developed a new experimental platform at the Linac Coherent Light Source (LCLS) which combines simultaneous angularly and spectrally resolved x-ray scattering measurements. This technique offers new insights into the structural and thermodynamic properties of warm dense matter. The < 50 fs temporal duration of the x-ray pulse provides near-instantaneous snapshots of the dynamics of the compression. We present a proof-of-principle experiment for this platform to characterize a shock-compressed plastic foil. We observe the disappearance of the plastic semi-crystal structure and the formation of a compressed-liquid ion-ion correlation peak. As a result, the plasma parameters of shock-compressed plastic can be measured as well, though this requires averaging over a few tens of shots.

  18. [Effects of real-time audiovisual feedback on secondary-school students' performance of chest compressions].

    PubMed

    Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto

    2015-06-01

    To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.

  19. 22nd Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    This study surveyed high school students (N=1,879) who were student leaders or high achievers in the spring of 1991 for the purpose of determining their attitudes. Students were members of the junior or senior high school class during the 1990-91 academic year and were selected for recognition by their principals or guidance counselors, other…

  20. Size dependent compressibility of nano-ceria: Minimum near 33 nm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodenbough, Philip P.; Chemistry Department, Columbia University, New York, New York 10027; Song, Junhua

    2015-04-20

    We report the crystallite-size dependence of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends, which indicate an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size.
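
    For context, compressibility is the reciprocal of the bulk modulus, K = -V (dP/dV), which can be estimated by finite differences from the pressure-volume points a high-pressure XRD run produces. The numbers below are illustrative, not the paper's data:

```python
import numpy as np

# Illustrative P (GPa) and V (Å^3 per unit cell) points from a compression run.
P = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
V = np.array([158.5, 156.9, 155.4, 154.0, 152.7])

# Bulk modulus K = -V dP/dV, estimated by central differences on the V grid.
dPdV = np.gradient(P, V)
K = -V * dPdV
print(np.round(K, 1))   # ~GPa; higher K means less compressible
```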

  1. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  2. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets; thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves…
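
    Several of the schemes surveyed rest on delta encoding: rather than resending full header fields, the sender transmits only the differences from the previous packet's header. A toy sketch of the idea (not any RFC's actual wire format):

```python
# Toy delta encoding of TCP-like header fields (not an RFC wire format).
FIELDS = ("seq", "ack", "window")

def delta_encode(prev: dict, cur: dict) -> dict:
    """Send only the fields that changed, as small differences."""
    return {f: cur[f] - prev[f] for f in FIELDS if cur[f] != prev[f]}

def delta_decode(prev: dict, deltas: dict) -> dict:
    """Rebuild the full header from the previous one plus the deltas."""
    return {f: prev[f] + deltas.get(f, 0) for f in FIELDS}

h1 = {"seq": 1000, "ack": 500, "window": 8192}
h2 = {"seq": 2460, "ack": 500, "window": 8192}
d = delta_encode(h1, h2)          # {'seq': 1460} -- one small number
assert delta_decode(h1, d) == h2  # receiver rebuilds the full header
print(d)
```

    The sketch also makes the loss-propagation problem concrete: if the packet carrying a delta is lost, every subsequent header decoded against the stale context is wrong, which is why SCPS avoids inter-packet deltas on high-delay links.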

  3. Curvelet-based compressive sensing for InSAR raw data

    NASA Astrophysics Data System (ADS)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platforms from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; therefore, the original signal can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and thus the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and recovery quality appropriate for InSAR applications.
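
    For reference, the quality metrics have straightforward forms. The sketch below shows one plausible way to compute CR, SNR, and the phase statistics for complex raw data; the paper does not spell out its exact definitions, so treat these as assumptions:

```python
import numpy as np

def metrics(orig: np.ndarray, rec: np.ndarray, bits_orig: int, bits_comp: int):
    """Compression/quality metrics for complex SAR raw data (assumed defs)."""
    cr = bits_orig / bits_comp                         # compression ratio
    err = orig - rec
    snr = 10 * np.log10(np.sum(np.abs(orig) ** 2) / np.sum(np.abs(err) ** 2))
    dphi = np.angle(orig * np.conj(rec))               # per-sample phase error
    return cr, snr, np.mean(dphi), np.std(dphi)        # CR, SNR(dB), MPE, PSD

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
xr = x + 0.01 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
print(metrics(x, xr, bits_orig=16, bits_comp=2))
```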

  4. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  5. A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi

    2015-01-01

    In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on a 3D Cartesian sampling with random switching of the frequency encoding direction with other K-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of K-space. These strategies can lower the coherence of the acquisition, reducing aliasing artifacts and achieving better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high-frequency locations. We demonstrate by means of simulations that the proposed acquisition strategies outperform the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and a greater achievable acceleration.
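
    A toy version of the second strategy can be written down directly: for each readout, the frequency-encoding axis is drawn at random from the three K-space directions and a random position is drawn in the remaining two. The matrix size and number of readouts below are illustrative, not the authors' protocol:

```python
import numpy as np

def random_axis_mask(shape=(64, 64, 64), n_readouts=1000, seed=0):
    """Binary 3D K-space mask with a randomly chosen frequency-encoding
    axis per readout (toy version of the proposed CS-MRI strategy)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_readouts):
        axis = rng.integers(3)                  # random frequency-encode axis
        idx = [rng.integers(s) for s in shape]  # random phase-encode position
        idx[axis] = slice(None)                 # full line along chosen axis
        mask[tuple(idx)] = True
    return mask

m = random_axis_mask()
print(f"sampled fraction: {m.mean():.3f}")      # net undersampling factor
```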

  6. Achieving High Resolution Measurements Within Limited Bandwidth Via Sensor Data Compression

    DTIC Science & Technology

    2013-06-01

    Keywords: MIDAS, high-g accelerometer. The instrumentation boards are a miniaturization of the Multifunctional Instrumentation and Data Acquisition System (MIDAS) designed by ARL and detailed in several technical reports (1). The original MIDAS has a diameter of 1.4 inches and a height of 1.6 inches. This miniaturization for a 30 mm round is…

  7. File compression and encryption based on LLS and arithmetic coding

    NASA Astrophysics Data System (ADS)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences by using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability. To achieve the purpose of encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model can achieve the purpose of data encryption while achieving almost the same compression efficiency as arithmetic coding.
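
    The encryption ingredient is a keyed chaotic sequence that perturbs the coder's probability bounds. Below is a minimal sketch of generating such a sequence from a plain logistic map; the paper's combined logistic-sine (LLS) construction and its exact perturbation rule are not reproduced here:

```python
def logistic_sequence(x0: float, n: int, r: float = 3.99):
    """Chaotic logistic-map sequence; x0 (the key) must lie in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)   # logistic map iteration
        xs.append(x)
    return xs

# Tiny keyed perturbations, e.g. to nudge a symbol's cumulative-probability
# bounds inside an arithmetic coder (illustrative use, not the paper's rule).
key = 0.3141592653
perturb = [1e-6 * v for v in logistic_sequence(key, 5)]
print(perturb)
```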

  8. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
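
    The TCI forward model itself is compact: one compressive frame is the sum of T high-speed frames, each multiplied by its own binary code mask. A toy sketch with T=8 (the TwIST/GMM reconstruction step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 256, 256                       # compression ratio and frame size

video = rng.random((T, H, W))               # 8 unique high-speed frames
masks = rng.integers(0, 2, size=(T, H, W))  # per-frame random binary codes

# Single compressive measurement: code each frame, then sum over time.
y = np.sum(masks * video, axis=0)
print(y.shape)   # (256, 256): one frame carries 8 frames' worth of data
```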

  9. 21st Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    This survey was conducted by Who's Who Among American High School Students during the spring of 1990, to determine the attitudes of student leaders in U.S. high schools. A survey of high achievers sent to 5,000 students was completed and returned by approximately 2,000 students. All students were members of the junior or senior class during the…

  10. Shock compression experiments on Lithium Deuteride (LiD) single crystals

    DOE PAGES

    Knudson, M. D.; Desjarlais, M. P.; Lemke, R. W.

    2016-12-21

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on lithium deuteride (LiD) single crystals. This study utilized the high-velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the Principal Hugoniot – the locus of end states achievable through compression by large-amplitude shock waves – as well as pressure and density of re-shock states up to ~900 GPa. Lastly, the experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Laboratory.

  11. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.

  12. Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.

    PubMed

    Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume

    2015-09-14

    Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method to compress data issued from high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq, or metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is an open source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
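
    The central data structure is a Bloom filter holding the graph's k-mers: membership queries tell the decoder whether an extension exists, so each read can be stored as an anchoring k-mer plus its bifurcation choices. A minimal Bloom filter sketch, with hash construction and sizing that are illustrative rather than LEON's:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for k-mers (illustrative, not LEON's tuning)."""
    def __init__(self, size_bits: int = 1 << 20, n_hashes: int = 4):
        self.m, self.k = size_bits, n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive n_hashes bit positions from salted BLAKE2b digests.
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Insert all k-mers of a read; later reads only record where the graph branches.
read, k = "ACGTACGGTCA", 5
bf = BloomFilter()
for i in range(len(read) - k + 1):
    bf.add(read[i:i + k])
print("ACGTA" in bf, "TTTTT" in bf)   # True, (almost certainly) False
```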

  13. Attitudes and Opinions from the Nation's High Achieving Teens: 26th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    A national survey of 3,351 high achieving high school students (junior and senior level) was conducted. All students had A or B averages. Topics covered include lifestyles, political beliefs, violence and entertainment, education, cheating, school violence, sexual violence and date rape, peer pressure, popularity, suicide, drugs and alcohol,…

  14. Interactive calculation procedures for mixed compression inlets

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1983-01-01

    The proper design of engine nacelle installations for supersonic aircraft depends on a sophisticated understanding of the interactions between the boundary layers and the bounding external flows. The successful operation of mixed external-internal compression inlets depends significantly on the ability to closely control the operation of the internal compression portion of the inlet. This is the portion of the inlet where compression is achieved by multiple reflection of oblique shock waves and weak compression waves in a converging internal flow passage. However weak these shocks and waves may seem gas-dynamically, they are of sufficient strength to separate a laminar boundary layer, and generally even strong enough to cause separation or incipient separation of turbulent boundary layers. An understanding of the viscous-inviscid interactions and of the shock-wave boundary-layer interactions and reflections was developed.

  15. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  16. Real-Time Data Filtering and Compression in Wide Area Simulation Networks

    DTIC Science & Technology

    1992-10-02

    Achieving the real-time linkage among multiple, geographically-distant local area networks that support distributed … decoding/encoding of multiple bits. The hardware is programmable, easily adaptable, and yields a high compression rate. A prototype 2-micron VLSI chip …

  17. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
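
    The codebook-training phase is standard vector quantization. A plain K-means version for flattened image blocks is sketched below; the paper's energy-function modification and LFD-driven block sizing are not reproduced:

```python
import numpy as np

def train_codebook(blocks: np.ndarray, n_codes: int = 16, iters: int = 20,
                   seed: int = 0) -> np.ndarray:
    """Plain K-means VQ codebook for flattened image blocks."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), n_codes, replace=False)].copy()
    for _ in range(iters):
        # Assign each block to its nearest codeword (Euclidean distance).
        d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update each codeword to the mean of its assigned blocks.
        for c in range(n_codes):
            if np.any(labels == c):
                codebook[c] = blocks[labels == c].mean(axis=0)
    return codebook

blocks = np.random.default_rng(1).random((500, 16))  # e.g. 4x4 pixel blocks
cb = train_codebook(blocks)
print(cb.shape)   # (16, 16): coding then sends one codeword index per block
```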

  18. Attitudes and Opinions from the Nation's High Achieving Teens. 24th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    This survey represents information compiled by the largest national survey of adolescent leaders and high achievers. Of the 5,000 students selected demographically from "Who's Who Among American High School Students," 1,957 responded. All students surveyed had "A" or "B" averages, and 98% planned on attending college. Questions were asked about…

  19. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

    Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
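
    The referee idea can be emulated serially with stock codecs: compress each chunk with several algorithms and keep the momentarily-best output. A sketch using Python standard-library codecs as stand-ins for the proposed parallel processors:

```python
import bz2
import lzma
import zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def referee_compress(chunk: bytes):
    """Try every codec on the chunk; keep the momentarily-best (smallest)."""
    outputs = {name: fn(chunk) for name, fn in CODECS.items()}
    best = min(outputs, key=lambda n: len(outputs[n]))
    return best, outputs[best]

data = (b"RGBRGBRGB" * 400) + bytes(range(256)) * 10
name, packed = referee_compress(data)
print(name, len(data), "->", len(packed))
```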

  20. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly, and the cost to store, process, analyze, and transmit the data is becoming a bottleneck for research and future medical applications. So the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exist a number of standard data compression algorithms, they are not efficient in compressing biological data: these generic algorithms do not exploit some inherent properties of the sequencing data. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem, achieving compression ratios better than those of the currently best-performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
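
    Reference-based compression stores a target sequence as instructions relative to a reference: long matches become (position, length) pairs and everything else becomes literals. A greedy toy encoder along these lines (ERGC's matching heuristics and output format are far more elaborate):

```python
def ref_compress(reference: str, target: str, min_match: int = 4):
    """Greedy toy reference-based encoder: (pos, length) matches + literals."""
    ops, i = [], 0
    while i < len(target):
        # Longest reference substring, starting anywhere, matching target[i:].
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            l = 0
            while (i + l < len(target) and j + l < len(reference)
                   and reference[j + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len >= min_match:
            ops.append(("match", best_pos, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

ref = "ACGTACGTTTGACCA"
tgt = "ACGTACGTAAGACCA"
print(ref_compress(ref, tgt))   # mostly matches, two literal bases
```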

  1. Pneumatic microfluidic cell compression device for high-throughput study of chondrocyte mechanobiology.

    PubMed

    Lee, Donghee; Erickson, Alek; You, Taesun; Dudley, Andrew T; Ryu, Sangjin

    2018-06-13

    Hyaline cartilage is a specialized type of connective tissue that lines many moveable joints (articular cartilage) and contributes to bone growth (growth plate cartilage). Hyaline cartilage is composed of a single cell type, the chondrocyte, which produces a unique hydrated matrix to resist compressive stress. Although compressive stress has profound effects on transcriptional networks and matrix biosynthesis in chondrocytes, mechanistic relationships between strain, signal transduction, cell metabolism, and matrix production remain only superficially understood. Here, we describe the development and validation of a polydimethylsiloxane (PDMS)-based pneumatic microfluidic cell compression device which generates multiple compression conditions on a single platform. The device contained an array of PDMS balloons of different sizes which were actuated by pressurized air, and the balloons compressed chondrocytes embedded in alginate hydrogel constructs. Our characterization and testing of the device showed that the developed platform could compress chondrocytes at various magnitudes simultaneously with negligible effect on cell viability. Also, the device is compatible with live-cell imaging to probe early effects of compressive stress, and it can be rapidly dismantled to facilitate molecular studies of compressive stress on transcriptional networks. Therefore, the proposed device will enhance the productivity of chondrocyte mechanobiology studies, and it can be applied to study the mechanobiology of other cell types.

  2. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  3. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.

    PubMed

    Zheng, Yu; Yang, Yang; Chen, Wu

    2017-06-25

    In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.

  4. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    PubMed Central

    Zheng, Yu; Yang, Yang; Chen, Wu

    2017-01-01

    In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm. PMID:28672830
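
    The first stage, correlating the reflected channel with the synchronized direct replica, takes only a few lines of numpy. The PRN-like code and all parameters below are illustrative, and the spectrum-equalization stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=1023)      # PRN-like direct signal

delay = 257                                    # unknown range delay (samples)
reflected = np.zeros(4096)
reflected[delay:delay + code.size] = 0.5 * code          # surface echo
reflected += 0.2 * rng.normal(size=reflected.size)       # receiver noise

# Range compression: correlate the reflected channel with the direct replica.
compressed = np.correlate(reflected, code, mode="valid")
print("estimated delay:", np.argmax(np.abs(compressed)))  # ~257
```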

  5. Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

    DOE PAGES

    Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...

    2017-08-09

    Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to be able to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space, such as wavelet bases, and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.

  6. Ultrasonic data compression via parameter estimation.

    PubMed

    Cardoso, Guilherme; Saniie, Jafar

    2005-02-01

    Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. Precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., a time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both the time-of-arrival and the center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1-5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.

  7. β-tricalcium phosphate composite ceramics with high compressive strength, enhanced osteogenesis and inhibited osteoclastic activities.

    PubMed

    Tian, Ye; Lu, Teliang; He, Fupo; Xu, Yubin; Shi, Haishan; Shi, Xuetao; Zuo, Fei; Wu, Shanghua; Ye, Jiandong

    2018-04-13

    β-Tricalcium phosphate (β-TCP) is well known as a resorbable bone repair material due to its inherent excellent biocompatibility and osteoconductivity. However, β-TCP suffers from deficient osteostimulation and poor mechanical strength because of its poor sinterability. Herein, we prepared novel β-TCP composite ceramics (TCP/SPGs) by introducing a strontium-containing phosphate-based glass (SPG; 45P₂O₅-32SrO-23Na₂O) as a sintering additive. The SPG helped to achieve efficient liquid-phase sintering of β-TCP at 1100 °C. The compressive strength of TCP/SPGs with 15 wt.% SPG (TCP/SPG15) was 2.65 times as high as that of the plain β-TCP ceramic. The SPG reacted with β-TCP, and the Sr²⁺ and Na⁺ from the SPG replaced Ca²⁺ in the lattice structure of β-TCP, enabling the sustained release of strontium from TCP/SPGs. In vitro cytological tests indicated that TCP/SPGs with a certain amount of SPG were highly biocompatible, noticeably promoted osteogenesis, and inhibited osteoclastic activities. Our results suggest that TCP/SPG15 might be a potential high-strength bone graft for bone defect repair, especially under osteoporotic conditions. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Contact Behavior of Composite CrTiSiN Coated Dies in Compressing of Mg Alloy Sheets under High Pressure

    PubMed Central

    Yang, T.S.; Yao, S.H.; Chang, Y.Y.; Deng, J.H.

    2018-01-01

    Hard coatings have been adopted in cutting and forming applications for nearly two decades. The major purpose of using hard coatings is to reduce the friction coefficient between contact surfaces, to increase strength, toughness and anti-wear performance of working tools and molds, and then to obtain a smooth work surface and an increase in service life of tools and molds. In this report, we deposited a composite CrTiSiN hard coating, and a traditional single-layered TiAlN coating as a reference. Then, the coatings were comparatively studied by a series of tests. A field emission SEM was used to characterize the microstructure. Hardness was measured using a nano-indentation tester. Adhesion of coatings was evaluated using a Rockwell C hardness indentation tester. A pin-on-disk wear tester with WC balls as sliding counterparts was used to determine the wear properties. A self-designed compression and friction tester, by combining a Universal Testing Machine and a wear tester, was used to evaluate the contact behavior of composite CrTiSiN coated dies in compressing of Mg alloy sheets under high pressure. The results indicated that the hardness of composite CrTiSiN coating was lower than that of the TiAlN coating. However, the CrTiSiN coating showed better anti-wear performance. The CrTiSiN coated dies achieved smooth surfaces on the Mg alloy sheet in the compressing test and lower friction coefficient in the friction test, as compared with the TiAlN coating. PMID:29316687

  9. Contact Behavior of Composite CrTiSiN Coated Dies in Compressing of Mg Alloy Sheets under High Pressure.

    PubMed

    Yang, T S; Yao, S H; Chang, Y Y; Deng, J H

    2018-01-08

    Hard coatings have been adopted in cutting and forming applications for nearly two decades. The major purpose of using hard coatings is to reduce the friction coefficient between contact surfaces, to increase strength, toughness and anti-wear performance of working tools and molds, and then to obtain a smooth work surface and an increase in service life of tools and molds. In this report, we deposited a composite CrTiSiN hard coating, and a traditional single-layered TiAlN coating as a reference. Then, the coatings were comparatively studied by a series of tests. A field emission SEM was used to characterize the microstructure. Hardness was measured using a nano-indentation tester. Adhesion of coatings was evaluated using a Rockwell C hardness indentation tester. A pin-on-disk wear tester with WC balls as sliding counterparts was used to determine the wear properties. A self-designed compression and friction tester, by combining a Universal Testing Machine and a wear tester, was used to evaluate the contact behavior of composite CrTiSiN coated dies in compressing of Mg alloy sheets under high pressure. The results indicated that the hardness of composite CrTiSiN coating was lower than that of the TiAlN coating. However, the CrTiSiN coating showed better anti-wear performance. The CrTiSiN coated dies achieved smooth surfaces on the Mg alloy sheet in the compressing test and lower friction coefficient in the friction test, as compared with the TiAlN coating.

  10. High speed X-ray phase contrast imaging of energetic composites under dynamic compression

    NASA Astrophysics Data System (ADS)

    Parab, Niranjan D.; Roberts, Zane A.; Harr, Michael H.; Mares, Jesus O.; Casey, Alex D.; Gunduz, I. Emre; Hudspeth, Matthew; Claus, Benjamin; Sun, Tao; Fezzaa, Kamel; Son, Steven F.; Chen, Weinong W.

    2016-09-01

    Fracture of crystals and frictional heating are associated with the formation of "hot spots" (localized heating) in energetic composites such as polymer bonded explosives (PBXs). Traditional high speed optical imaging methods cannot be used to study the dynamic sub-surface deformation and the fracture behavior of such materials due to their opaque nature. In this study, high speed synchrotron X-ray experiments are conducted to visualize the in situ deformation and the fracture mechanisms in PBXs composed of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) crystals and hydroxyl-terminated polybutadiene binder doped with iron (III) oxide. A modified Kolsky bar apparatus was used to apply controlled dynamic compression on the PBX specimens, and a high speed synchrotron X-ray phase contrast imaging (PCI) setup was used to record the in situ deformation and failure in the specimens. The experiments show that synchrotron X-ray PCI provides a sufficient contrast between the HMX crystals and the doped binder, even at ultrafast recording rates. Under dynamic compression, most of the cracking in the crystals was observed to be due to the tensile stress generated by the diametral compression applied from the contacts between the crystals. Tensile stress driven cracking was also observed for some of the crystals due to the transverse deformation of the binder and superior bonding between the crystal and the binder. The obtained results are vital to develop improved understanding and to validate the macroscopic and mesoscopic numerical models for energetic composites so that eventually hot spot formation can be predicted.

  11. High speed X-ray phase contrast imaging of energetic composites under dynamic compression

    DOE PAGES

    Parab, Niranjan D.; Roberts, Zane A.; Harr, Michael H.; ...

    2016-09-26

    Fracture of crystals and subsequent frictional heating are associated with formation of hot spots in energetic composites such as polymer bonded explosives (PBXs). Traditional high speed optical imaging methods cannot be used to study the dynamic sub-surface deformation and fracture behavior of such materials due to their opaque nature. In this study, high speed synchrotron X-ray experiments are conducted to visualize the in situ deformation and fracture mechanisms in PBXs manufactured using octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) crystals and hydroxyl-terminated polybutadiene (HTPB) binder. A modified Kolsky bar apparatus was used to apply controlled dynamic compression on the PBX specimens, and a high speed synchrotron X-ray phase contrast imaging (PCI) setup was used to record the in situ deformation and failure in the specimens. The experiments show that synchrotron X-ray PCI provides a sufficient contrast between the HMX crystals and the doped binder, even at ultrafast recording rates. Under dynamic compression, most of the cracking in the crystals was observed to be due to the tensile stress generated by the diametral compression applied from the contacts between the crystals. Tensile stress driven cracking was also observed for some of the crystals due to the transverse deformation of the binder and superior bonding between the crystal and the binder. In conclusion, the obtained results are vital to develop improved understanding and to validate the macroscopic and mesoscopic numerical models for energetic composites so that eventually hot spot formation can be predicted.

  12. Chemical treatments for improving compressive strength of linerboard at high moisture conditions

    Treesearch

    D. J. Fahey

    1964-01-01

    Various chemical treatments have been investigated at the Forest Products Laboratory for improving the compressive strength of linerboard exposed at high humidities and after water-soaking. Phenolic resins have been among the more promising chemicals studied, but they vary in performance. The low-condensed water-soluble phenolic resins have given some of the highest...

  13. Student Perceptions of High-Achieving Classmates

    ERIC Educational Resources Information Center

    Händel, Marion; Vialle, Wilma; Ziegler, Albert

    2013-01-01

    The reported study investigated students' perceptions of their high-performing classmates in terms of intelligence, social skills, and conscientiousness in different school subjects. The school subjects for study were examined with regard to cognitive, physical, and gender-specific issues. The results show that high academic achievements in…

  14. In-Situ Welding Carbon Nanotubes into a Porous Solid with Super-High Compressive Strength and Fatigue Resistance

    PubMed Central

    Lin, Zhiqiang; Gui, Xuchun; Gan, Qiming; Chen, Wenjun; Cheng, Xiaoping; Liu, Ming; Zhu, Yuan; Yang, Yanbing; Cao, Anyuan; Tang, Zikang

    2015-01-01

    Carbon nanotube (CNT) and graphene-based sponges and aerogels have an isotropic porous structure, and their mechanical strength and stability are relatively low. Here, we present a junction-welding approach to fabricate porous CNT solids in which all CNTs are coated and welded in situ by an amorphous carbon layer, forming an integral three-dimensional scaffold with fixed joints. The resulting CNT solids are robust, yet still highly porous and compressible, with compressive strengths up to 72 MPa, flexural strengths up to 33 MPa, and fatigue resistance (recovery after 100,000 large-strain compression cycles at high frequency). Significant enhancement of mechanical properties is attributed to the welding-induced interconnection and reinforcement of structural units, and synergistic effects stemming from the core-shell microstructures consisting of a flexible CNT framework and a rigid amorphous carbon shell. Our results provide a simple and effective method to manufacture high-strength porous materials by nanoscale welding. PMID:26067176

  15. In-Situ Welding Carbon Nanotubes into a Porous Solid with Super-High Compressive Strength and Fatigue Resistance.

    PubMed

    Lin, Zhiqiang; Gui, Xuchun; Gan, Qiming; Chen, Wenjun; Cheng, Xiaoping; Liu, Ming; Zhu, Yuan; Yang, Yanbing; Cao, Anyuan; Tang, Zikang

    2015-06-11

    Carbon nanotube (CNT) and graphene-based sponges and aerogels have an isotropic porous structure and their mechanical strength and stability are relatively low. Here, we present a junction-welding approach to fabricate porous CNT solids in which all CNTs are coated and welded in situ by an amorphous carbon layer, forming an integral three-dimensional scaffold with fixed joints. The resulting CNT solids are robust, yet still highly porous and compressible, with compressive strengths up to 72 MPa, flexural strengths up to 33 MPa, and fatigue resistance (recovery after 100,000 large-strain compression cycles at high frequency). Significant enhancement of mechanical properties is attributed to the welding-induced interconnection and reinforcement of structural units, and synergistic effects stemming from the core-shell microstructures consisting of a flexible CNT framework and a rigid amorphous carbon shell. Our results provide a simple and effective method to manufacture high-strength porous materials by nanoscale welding.

  16. In-Situ Welding Carbon Nanotubes into a Porous Solid with Super-High Compressive Strength and Fatigue Resistance

    NASA Astrophysics Data System (ADS)

    Lin, Zhiqiang; Gui, Xuchun; Gan, Qiming; Chen, Wenjun; Cheng, Xiaoping; Liu, Ming; Zhu, Yuan; Yang, Yanbing; Cao, Anyuan; Tang, Zikang

    2015-06-01

    Carbon nanotube (CNT) and graphene-based sponges and aerogels have an isotropic porous structure and their mechanical strength and stability are relatively low. Here, we present a junction-welding approach to fabricate porous CNT solids in which all CNTs are coated and welded in situ by an amorphous carbon layer, forming an integral three-dimensional scaffold with fixed joints. The resulting CNT solids are robust, yet still highly porous and compressible, with compressive strengths up to 72 MPa, flexural strengths up to 33 MPa, and fatigue resistance (recovery after 100,000 large-strain compression cycles at high frequency). Significant enhancement of mechanical properties is attributed to the welding-induced interconnection and reinforcement of structural units, and synergistic effects stemming from the core-shell microstructures consisting of a flexible CNT framework and a rigid amorphous carbon shell. Our results provide a simple and effective method to manufacture high-strength porous materials by nanoscale welding.

  17. CoNi2 S4 Nanoparticle/Carbon Nanotube Sponge Cathode with Ultrahigh Capacitance for Highly Compressible Asymmetric Supercapacitor.

    PubMed

    Cao, Xin; He, Jin; Li, Huan; Kang, Liping; He, Xuexia; Sun, Jie; Jiang, Ruibing; Xu, Hua; Lei, Zhibin; Liu, Zong-Huai

    2018-05-30

    Compared with other flexible energy-storage devices, compressible energy-storage devices are more difficult to design and construct because they must accommodate large strains and shape deformations. In the present work, a CoNi2S4 nanoparticle/3D porous carbon nanotube (CNT) sponge cathode with high compressibility and excellent capacitance is prepared by electrodepositing CoNi2S4 on a CNT sponge; the CoNi2S4 nanoparticles, 10-15 nm in size, are uniformly anchored on the CNTs, giving the cathode high compressibility and a high specific capacitance of 1530 F g⁻¹. Meanwhile, a Fe2O3/CNT sponge anode with a specific capacitance of 460 F g⁻¹ over a prolonged voltage window is also prepared by electrodepositing Fe2O3 nanosheets on a CNT sponge. An asymmetric supercapacitor (CoNi2S4/CNT//Fe2O3/CNT) is assembled using the CoNi2S4/CNT sponge as the positive electrode and the Fe2O3/CNT sponge as the negative electrode in 2 M KOH solution. It exhibits an excellent energy density of up to 50 Wh kg⁻¹ at a power density of 847 W kg⁻¹ and excellent cycling stability under high compression. Even at a strain of 85%, about 75% of the initial capacitance is retained after 10,000 consecutive cycles. The CoNi2S4/CNT//Fe2O3/CNT device is a promising candidate for flexible energy devices due to its excellent compressibility and high energy density.

  18. Sweep and Compressibility Effects on Active Separation Control at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Seifert, Avi; Pack, LaTunia G.

    2000-01-01

    This paper explores the effects of compressibility, sweep and excitation location on active separation control at high Reynolds numbers. The model, which was tested in a cryogenic pressurized wind tunnel, simulates the upper surface of a 20% thick Glauert-Goldschmied type airfoil at zero angle of attack. The flow is fully turbulent since the tunnel sidewall boundary layer flows over the model. Without control, the flow separates at the highly convex area and a large turbulent separation bubble is formed. Periodic excitation is applied to gradually eliminate the separation bubble. Two alternative blowing slot locations as well as the effect of compressibility, sweep and steady suction or blowing were studied. During the test the Reynolds numbers ranged from 2 to 40 million and Mach numbers ranged from 0.2 to 0.7. Sweep angles were 0 and 30 deg. It was found that excitation must be introduced slightly upstream of the separation region regardless of the sweep angle at low Mach number. Introduction of excitation upstream of the shock wave is more effective than at its foot. Compressibility reduces the ability of steady mass transfer and periodic excitation to control the separation bubble, but excitation has an effect on the integral parameters which is similar to that observed at low Mach numbers. The conventional swept flow scaling is valid for fully and even partially attached flow, but different scaling is required for the separated 3D flow. The effectiveness of the active control is not reduced by sweep. Detailed flow field dynamics are described in the accompanying paper.

  19. Sweep and Compressibility Effects on Active Separation Control at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Seifert, Avi; Pack, LaTunia G.

    2000-01-01

    This paper explores the effects of compressibility, sweep and excitation location on active separation control at high Reynolds numbers. The model, which was tested in a cryogenic pressurized wind tunnel, simulates the upper surface of a 20% thick Glauert-Goldschmied type airfoil at zero angle of attack. The flow is fully turbulent since the tunnel sidewall boundary layer flows over the model. Without control, the flow separates at the highly convex area and a large turbulent separation bubble is formed. Periodic excitation is applied to gradually eliminate the separation bubble. Two alternative blowing slot locations as well as the effect of compressibility, sweep and steady suction or blowing were studied. During the test the Reynolds numbers ranged from 2 to 40 million and Mach numbers ranged from 0.2 to 0.7. Sweep angles were 0 and 30 deg. It was found that excitation must be introduced slightly upstream of the separation region regardless of the sweep angle at low Mach number. Introduction of excitation upstream of the shock wave is more effective than at its foot. Compressibility reduces the ability of steady mass transfer and periodic excitation to control the separation bubble, but excitation has an effect on the integral parameters which is similar to that observed at low Mach numbers. The conventional swept flow scaling is valid for fully and even partially attached flow, but different scaling is required for the separated 3D flow. The effectiveness of the active control is not reduced by sweep. Detailed flow field dynamics are described in the accompanying paper.

  20. The Compressive Behavior of Isocyanate-crosslinked Silica Aerogel at High Strain Rates

    NASA Technical Reports Server (NTRS)

    Luo, H.; Lu, H.; Leventis, N.

    2006-01-01

    Aerogels are low-density, highly nano-porous materials. Their engineering applications are limited due to their brittleness and hydrophilicity. Recently, a strong lightweight crosslinked silica aerogel has been developed by encapsulating the skeletal framework of amine-modified silica aerogels with polyureas derived from isocyanate. The mesoporous structure of the underlying silica framework is preserved through conformal polymer coating, and the thermal conductivity remains low. Characterization has been conducted on the thermal and physical properties and the mechanical properties under quasi-static loading conditions. In this paper, we present results on the dynamic compressive behavior of the crosslinked silica aerogel (CSA) using a split Hopkinson pressure bar (SHPB). A new tubing pulse shaper was employed to help reach dynamic stress equilibrium and a constant strain rate. The stress-strain relationship was determined at high strain rates within 114-4386 s⁻¹. The effects of strain rate, density, specimen thickness and water absorption on the dynamic behavior of the CSA were investigated through a series of dynamic experiments. The Young's moduli (or 0.2% offset compressive yield strengths) at a strain rate of approx. 350 s⁻¹ were determined as 10.96/2.08, 159.5/6.75, 192.2/7.68, 304.6/11.46, 407.0/20.91 and 640.5/30.47 MPa for CSA with densities of 0.205, 0.454, 0.492, 0.551, 0.628 and 0.731 g/cu cm, respectively. The deformation and failure behaviors of a native silica aerogel with density (0.472 g/cu cm) approximately the same as a typical CSA sample were observed with a high speed digital camera. A digital image correlation technique was used to determine the surface strains through a series of images acquired using high speed photography. The relatively uniform axial deformation indicated that localized compaction did not occur at a compressive strain level of approx. 17%, suggesting that the most likely failure mechanism at high strain rates differs from that under quasi-static loading.

  1. High frequency chest wall compression and carbon dioxide elimination in obstructed dogs.

    PubMed

    Gross, D; Vartian, V; Minami, H; Chang, H K; Zidulka, A

    1984-01-01

    High frequency chest wall compression (HFCWC) was studied as a method of assisting ventilation in six spontaneously breathing anesthetized dogs. Under a constant level of anesthesia, the dogs became hypercapnic after airflow obstruction was created by metal beads inserted in the airways. HFCWC was achieved by a piston pump rapidly oscillating the pressure in a modified double blood pressure cuff wrapped around the lower thorax. Thirty minute periods of spontaneous ventilation were alternated with thirty minute periods of spontaneous breathing plus HFCWC at 3, 5 or 8 Hz. The superimposition of HFCWC on spontaneous ventilation resulted in little change in the PaO2. The PaCO2, however, was reduced in every case, from a mean of 6.55 +/- 0.59 to 4.72 +/- 0.32 kPa at 3 Hz (p less than 0.05), 6.92 +/- 0.57 to 3.9 +/- 0.45 kPa at 5 Hz (p less than 0.01) and 7.10 +/- 0.65 to 4.56 +/- 0.59 kPa at 8 Hz (p less than 0.05). This occurred despite a decrease in spontaneous minute ventilation. We conclude that HFCWC can assist in the elimination of CO2 in obstructed, spontaneously breathing dogs with hypercapnia.

  2. Mechanical properties in crumple-formed paper derived materials subjected to compression.

    PubMed

    Hanaor, D A H; Flores Johnson, E A; Wang, S; Quach, S; Dela-Torre, K N; Gan, Y; Shen, L

    2017-06-01

    The crumpling of precursor materials to form dense three dimensional geometries offers an attractive route towards the utilisation of minor-value waste materials. Crumple-forming results in a mesostructured system in which mechanical properties of the material are governed by complex cross-scale deformation mechanisms. Here we investigate the physical and mechanical properties of dense compacted structures fabricated by the confined uniaxial compression of a cellulose tissue to yield crumpled mesostructuring. A total of 25 specimens of various densities were tested under compression. Crumple-formed specimens exhibited densities in the range 0.8-1.3 g cm⁻³, and showed high strength-to-weight characteristics, achieving ultimate compressive strength values of up to 200 MPa under both quasi-static and high strain rate loading conditions and deformation energy that compares well to engineering materials of similar density. The materials fabricated in this work and their mechanical attributes demonstrate the potential of crumple-forming approaches in the fabrication of novel energy-absorbing materials from low-cost precursors such as recycled paper. The stiffness and toughness of the materials exhibit density dependence, suggesting this forming technique further allows controllable impact energy dissipation rates in dynamic applications.

  3. Influence of rate of force application during compression on tablet capping.

    PubMed

    Sarkar, Srimanta; Ooi, Shing Ming; Liew, Celine Valeria; Heng, Paul Wan Sia

    2015-04-01

    Root cause and possible processing remediation of tablet capping were investigated using a specially designed tablet press with an air compensator installed above the precompression roll to limit compression force and allow extended dwell time in the precompression event. Using acetaminophen-starch (77.9:22.1) as a model formulation, tablets were prepared by various combinations of precompression and main compression forces, set precompression thickness, and turret speed. The rate of force application (RFA) was the main factor contributing to tablet mechanical strength and capping. When the target force was above the force required for strong interparticulate bond formation, the resultant high RFA contributed to more pronounced air entrapment, uneven force distribution, and consequently, stratified densification in the compact together with high viscoelastic recovery. These factors collectively contributed to tablet capping. As extended dwell time assisted particle rearrangement and air escape, a denser and more homogeneous packing in the die could be achieved. Applying a low precompression force during the extended dwell time, followed by application of the main compression force for strong interparticulate bond formation, was the most beneficial option for solving the capping problem.

  4. Energy compression of nanosecond high-voltage pulses based on two-stage hybrid scheme

    NASA Astrophysics Data System (ADS)

    Ulmaskulov, M. R.; Mesyats, G. A.; Sadykova, A. G.; Sharypov, K. A.; Shpak, V. G.; Shunailov, S. A.; Yalandin, M. I.

    2017-04-01

    Test results of a high-voltage subnanosecond pulse generator with a hybrid, two-stage energy compression scheme are presented. After the first compression section with a gas discharger, a ferrite-filled gyromagnetic nonlinear transmitting line is used. The proposed technical solution makes it possible to increase the voltage pulse amplitude from -185 kV to -325 kV, with a 2-ns pulse rise time minimized down to ~180 ps. For a smaller output voltage amplitude of -240 kV, the shortest pulse front of ~85 ps was obtained. The generator at maximum amplitude was utilized to form an ultra-short flow of runaway electrons in an air-filled discharge gap with particle energies approaching 700 keV.

  5. Influence of compression parameters on mechanical behavior of mozzarella cheese.

    PubMed

    Fogaça, Davi Novaes Ladeia; da Silva, William Soares; Rodrigues, Luciano Brito

    2017-10-01

    Studies on the interaction between direction and degree of compression in the Texture Profile Analysis (TPA) of cheeses are limited. For this reason the present study aimed to evaluate the mechanical properties of Mozzarella cheese by TPA at different compression degrees (65, 75, and 85%) and directions (axes X, Y, and Z). Data obtained were compared in order to identify possible interaction between both factors. Compression direction did not affect any mechanical variable; in other words, the cheese showed isotropic behavior under TPA. Compression degree had a significant influence (p < 0.05) on TPA responses, except for chewiness (N), which remained constant. Texture profile data were fitted to models to explain the mechanical behavior according to the compression degree used in the test. The isotropic behavior observed may be a result of differences in the production method of Mozzarella cheese, especially the stretching of the cheese mass. Texture Profile Analysis (TPA) is a technique largely used to assess the mechanical properties of food, particularly cheese. The precise choice of the instrumental test configuration is essential for achieving results that represent the material analyzed. The method of manufacturing is another factor that may directly influence the mechanical properties of food. This can be seen, for instance, in stretched-curd cheeses such as Mozzarella. Knowledge of such mechanical properties is highly relevant for food industries due to the mechanical resistance required in piling, pressing, manufacture of packages, and food transport, and to the melting features presented by the food at high temperatures in the preparation of several foods, such as pizzas, snacks, sandwiches, and appetizers.

  6. [Ambulant compression therapy for crural ulcers; an effective treatment when applied skilfully].

    PubMed

    de Boer, Edith M; Geerkens, Maud; Mooij, Michael C

    2015-01-01

    The incidence of crural ulcers is high. They reduce quality of life considerably and create a burden on the healthcare budget. The key treatment is ambulant compression therapy (ACT). We describe two patients with crural ulcers whose ambulant compression treatment was suboptimal and did not result in healing; when the bandages were applied correctly, healing was achieved. If correctly applied, ACT should provide sufficient pressure to eliminate oedema, whilst taking local circumstances such as bony structures and arterial qualities into consideration. Providing made-to-measure pressure requires regular practical training, skills and regular quality checks. Knowledge of the properties of bandages and the proper use of materials for padding under the bandage enables good personalised ACT. In trained hands, adequate compression using simple bandages and dressings provides good care for patients suffering from crural ulcers, in contrast to inadequate ACT using the same materials.

  7. Self-Concept and Achievement Motivation of High School Students

    ERIC Educational Resources Information Center

    Lawrence, A. S. Arul; Vimala, A.

    2013-01-01

    The present study "Self-concept and Achievement Motivation of High School Students" was investigated to find the relationship between Self-concept and Achievement Motivation of High School Students. Data for the study were collected using Self-concept Questionnaire developed by Raj Kumar Saraswath (1984) and Achievement Motive Test (ACMT)…

  8. Demonstration of Isothermal Compressed Air Energy Storage to Support Renewable Energy Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bollinger, Benjamin

    This project develops and demonstrates a megawatt (MW)-scale Energy Storage System that employs compressed air as the storage medium. An isothermal compressed air energy storage (ICAES™) system rated for 1 MW or more will be demonstrated in a full-scale prototype unit. Breakthrough cost-effectiveness will be achieved through the use of proprietary methods for isothermal gas cycling and staged gas expansion implemented using industrially mature, readily-available components. The ICAES approach uses an electrically driven mechanical system to raise air to high pressure for storage in low-cost pressure vessels, pipeline, or lined-rock cavern (LRC). This air is later expanded through the same mechanical system to drive the electric motor as a generator. The approach incorporates two key efficiency-enhancing innovations: (1) isothermal (constant temperature) gas cycling, which is achieved by mixing liquid with air (via spray or foam) to exchange heat with air undergoing compression or expansion; and (2) a novel, staged gas-expansion scheme that allows the drivetrain to operate at constant power while still allowing the stored gas to work over its entire pressure range. The ICAES system will be scalable, non-toxic, and cost-effective, making it suitable for firming renewables and for other grid applications.
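
    The thermodynamic rationale for isothermal cycling can be made explicit. For an ideal gas held at temperature T during compression (which is what the spray/foam heat exchange approximates), the work that must be done on the gas to go from pressure p1 to p2 is the textbook relation below (not a figure from the project):

$$ W_{\mathrm{iso}} \;=\; p_1 V_1 \ln\frac{p_2}{p_1} \;=\; m\,R_{\mathrm{specific}}\,T \ln\frac{p_2}{p_1}, $$

    and the same amount is recoverable on isothermal expansion, so the ideal round trip is lossless. Adiabatic compression to the same pressure ratio costs more work and strands the excess as heat in the gas, which is the loss the ICAES design is built to avoid.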

  9. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr of up to ≈40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ≈6.5 TB to ≈0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
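
    As a sketch of the fitting-plus-transform idea (not polycomp's actual implementation: chunking, bit-packing and the entropy stage are omitted, and a type-II DCT stands in for the discrete Chebyshev transform):

```python
import numpy as np
from scipy.fft import dct, idct

def compress_chunk(samples, poly_deg=4, max_abs_err=1e-3):
    """Fit a low-degree polynomial, then keep as few residual transform
    coefficients as possible while honouring the user-set error bound."""
    x = np.linspace(-1.0, 1.0, len(samples))
    coeffs = np.polyfit(x, samples, poly_deg)      # least-squares polynomial fit
    trend = np.polyval(coeffs, x)
    spec = dct(samples - trend, norm='ortho')      # residual spectrum

    kept = np.zeros_like(spec)
    for n, idx in enumerate(np.argsort(-np.abs(spec)), start=1):
        kept[idx] = spec[idx]                      # add the next-largest coefficient
        approx = trend + idct(kept, norm='ortho')
        if np.max(np.abs(approx - samples)) <= max_abs_err:
            return coeffs, kept, n                 # n coefficients suffice
    return coeffs, kept, len(samples)              # chunk is incompressible

# Smooth, noiseless pointing-like data collapses to a handful of numbers:
t = np.linspace(0.0, 1.0, 512)
_, _, n_kept = compress_chunk(np.sin(2 * np.pi * t) + 0.1 * t**3)
print(n_kept, "of 512 coefficients kept")
```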

  10. New Experimental Capabilities and Theoretical Insights of High Pressure Compression Waves

    NASA Astrophysics Data System (ADS)

    Orlikowski, Daniel; Nguyen, Jeffrey H.; Patterson, J. Reed; Minich, Roger; Martin, L. Peter; Holmes, Neil C.

    2007-12-01

    Currently there are three platforms that offer quasi-isentropic compression or ramp-wave compression (RWC): light-gas gun, magnetic flux (Z-pinch), and laser. We focus here on the light-gas gun technique and on some current theoretical insights from experimental data. An impedance gradient through the length of the impactor provides the pressure pulse upon impact to the subject material. Applications and results are given concerning high-pressure strength and the liquid-to-solid phase transition of water, giving its first associated phase-fraction history. We also introduce the Korteweg-deVries-Burgers equation as a means to understand the evolution of these RWC waves as they propagate through the thickness of the subject material. This model equation has the necessary competition between non-linear, dispersion, and dissipation processes, which is shown through observed structures that are manifested in the experimental particle velocity histories. Such methodology points towards a possibility of quantifying dissipation, through which RWC experiments may be analyzed.
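
    For reference, the Korteweg-deVries-Burgers equation invoked above reads, in its standard one-dimensional form (generic coefficients, not values fitted to these experiments):

$$ \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} \;=\; \nu\,\frac{\partial^{2} u}{\partial x^{2}} \;-\; \delta\,\frac{\partial^{3} u}{\partial x^{3}}, $$

    where the quadratic term drives nonlinear steepening, the ν term models dissipation and the δ term models dispersion; the competition among the three sets the wave structures seen in the particle velocity histories.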

  11. Combustion in a High-Speed Compression-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Rothrock, A M

    1933-01-01

    An investigation conducted to determine the factors which control the combustion in a high-speed compression-ignition engine is presented. Indicator cards were taken with the Farnboro indicator and analyzed according to the tangent method devised by Schweitzer. The analysis shows that in a quiescent combustion chamber increasing the time lag of auto-ignition increases the maximum rate of combustion. Increasing the maximum rate of combustion increases the tendency for detonation to occur. The results show that by increasing the air temperature during injection the start of combustion can be forced to take place during injection and so prevent detonation from occurring. It is shown that the rate of fuel injection does not in itself control the rate of combustion.

  12. Biomedical sensor design using analog compressed sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
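
    The abstract does not spell out the RIP-based reconstruction routine, so the sketch below substitutes orthogonal matching pursuit (OMP), a standard sparse-recovery algorithm, just to show the shape of the pipeline; the dimensions echo the ~29%-of-Nyquist sampling rate and all names are illustrative:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily build the support of a sparse x
    such that y ~= A @ x, re-fitting by least squares at each step."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 74, 8                 # signal length, measurements (~29% of n), sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random matrices satisfy RIP w.h.p.
y = A @ x_true                        # the compressive "sensing" step
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # near machine precision
```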

  13. Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression

    NASA Technical Reports Server (NTRS)

    Laun, Matthew C. (Inventor)

    2016-01-01

    Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.

  14. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
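
    The lossless predict-and-encode pipeline those encoder blocks describe can be illustrated end to end. In this sketch the paper's trained neural-network predictor is replaced by a trivial left-neighbour predictor and zlib stands in for the entropy coder, so only the structure, not the performance, is representative:

```python
import numpy as np
import zlib

def predictive_lossless_compress(img):
    """Predict each pixel, keep the exact integer residuals, entropy-code them.
    (Left-neighbour predictor and zlib are stand-ins for the paper's
    neural-network predictor and entropy encoder.)"""
    pred = np.zeros_like(img, dtype=np.int16)
    pred[:, 1:] = img[:, :-1]
    residual = img.astype(np.int16) - pred        # exact, hence fully reversible
    return zlib.compress(residual.tobytes(), 9), img.shape

def predictive_lossless_decompress(payload, shape):
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    return np.cumsum(residual, axis=1).astype(np.uint8)   # inverts the predictor

img = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
payload, shape = predictive_lossless_compress(img)
assert np.array_equal(img, predictive_lossless_decompress(payload, shape))
```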

  15. Using off-the-shelf lossy compression for wireless home sleep staging.

    PubMed

    Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu

    2015-05-15

    Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data that has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with a high accuracy (>84%) in classifying sleep stages by using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging.

  16. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    ERIC Educational Resources Information Center

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  17. Lossless Compression of Data into Fixed-Length Packets

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2009-01-01

    A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
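
    A minimal illustration of the Golomb-coding half of that pipeline (shown as Golomb-Rice, the power-of-two special case, emitting a toy bit string; the program's adaptive linear filtering and fixed-length packet framing are not reproduced):

```python
def zigzag(e):
    """Map signed prediction residuals to non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(residuals, k):
    """Golomb-Rice code: unary quotient, then k remainder bits per value.
    Small residuals (a good predictor makes most of them small) get short codes."""
    out = []
    for e in residuals:
        u = zigzag(e)
        q, r = u >> k, u & ((1 << k) - 1)
        out.append('1' * q + '0' + format(r, f'0{k}b'))
    return ''.join(out)

print(golomb_rice_encode([0, -1, 2, 0, 1, -3, 0, 0], k=1))
```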

  18. Biomechanical Comparison of External Fixation and Compression Screws for Transverse Tarsal Joint Arthrodesis.

    PubMed

    Latt, L Daniel; Glisson, Richard R; Adams, Samuel B; Schuh, Reinhard; Narron, John A; Easley, Mark E

    2015-10-01

    Transverse tarsal joint arthrodesis is commonly performed in the operative treatment of hindfoot arthritis and acquired flatfoot deformity. While fixation is typically achieved using screws, failure to obtain and maintain joint compression sometimes occurs, potentially leading to nonunion. External fixation is an alternate method of achieving arthrodesis site compression and has the advantage of allowing postoperative compression adjustment when necessary. However, its performance relative to standard screw fixation has not been quantified in this application. We hypothesized that external fixation could provide transverse tarsal joint compression exceeding that possible with screw fixation. Transverse tarsal joint fixation was performed sequentially, first with a circular external fixator and then with compression screws, on 9 fresh-frozen cadaveric legs. The external fixator was attached using abutting rings fixed to the tibia and the hindfoot and a third anterior ring parallel to the hindfoot ring, using transverse wires and half-pins in the tibial diaphysis, calcaneus, and metatarsals. Screw fixation comprised two 4.3 mm headless compression screws traversing the talonavicular joint and 1 across the calcaneocuboid joint. Compressive forces generated during incremental fixator foot ring displacement to 20 mm and incremental screw tightening were measured using a custom-fabricated instrumented miniature external fixator spanning the transverse tarsal joint. The maximum compressive force generated by the external fixator averaged 186% of that produced by the screws (range, 104%-391%). Fixator compression surpassed that obtainable with screws at 12 mm of ring displacement and decreased when the tibial ring was detached. No correlation was found between bone density and the compressive force achievable by either fusion method. The compression across the transverse tarsal joint that can be obtained with a circular external fixator including a tibial ring exceeds that obtainable with standard compression screw fixation.

  19. Understanding turbulence in compressing plasmas and its exploitation or prevention.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidovits, Seth

    Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented

  20. Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth

    Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a
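
    The physics behind both the ionization sensitivity and the sudden dissipation can be summarized by the strong temperature dependence of the unmagnetized (Braginskii) plasma viscosity, quoted here as a standard scaling rather than a result from the thesis:

$$ \mu \;\propto\; \frac{T^{5/2}\,\sqrt{m_i}}{Z^{4}\,\ln\Lambda}, $$

    so as compression heats the plasma, the T^{5/2} growth can raise the viscosity by orders of magnitude and abruptly convert turbulent kinetic energy into heat, while even modest ionization (larger Z) strongly suppresses the effect.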

  1. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit the statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/~rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu.
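
    ERGC's internals are not described in the abstract, so the following is only a toy illustration of the reference-based idea it builds on: long stretches of the target that match the reference are stored as (position, length) pairs and everything else as literals. The greedy matcher and all names below are illustrative assumptions, not ERGC's actual algorithm:

```python
def compress_against_reference(target, reference, min_match=8):
    """Toy reference-based compressor: emit ('M', pos, length) for long
    matches against the reference, ('L', char) for everything else."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        if i + min_match <= len(target):
            probe = target[i:i + min_match]
            pos = reference.find(probe)
            while pos != -1:                      # extend every candidate match
                length = min_match
                while (i + length < len(target) and pos + length < len(reference)
                       and target[i + length] == reference[pos + length]):
                    length += 1
                if length > best_len:
                    best_pos, best_len = pos, length
                pos = reference.find(probe, pos + 1)
        if best_len >= min_match:
            ops.append(('M', best_pos, best_len))
            i += best_len
        else:
            ops.append(('L', target[i]))
            i += 1
    return ops

def decompress(ops, reference):
    out = []
    for op in ops:
        out.append(reference[op[1]:op[1] + op[2]] if op[0] == 'M' else op[1])
    return ''.join(out)

ref = "ACGTACGTGGGTTTACGTACGTACGT"
tgt = "ACGTACGTGGCTTTACGTACGTACGTAA"   # one substitution plus a short insertion
ops = compress_against_reference(tgt, ref, min_match=4)
assert decompress(ops, ref) == tgt
```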

  2. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding additional, more complex DWT levels. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
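
    A compact sketch of the DWT-then-DCT idea (the paper's zero-padding and quantization replacement are not reproduced here; PyWavelets is an assumed dependency, and the function name and the `keep` parameter are illustrative):

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from scipy.fft import dctn, idctn

def hybrid_roundtrip(img, wavelet='bior4.4', keep=0.05):
    """One DWT level to decorrelate, then a DCT on the approximation band;
    only the largest `keep` fraction of DCT coefficients would be stored."""
    LL, details = pywt.dwt2(img.astype(float), wavelet)
    spec = dctn(LL, norm='ortho')
    cutoff = np.quantile(np.abs(spec), 1.0 - keep)
    spec[np.abs(spec) < cutoff] = 0.0        # the part that would be entropy-coded
    LL_rec = idctn(spec, norm='ortho')
    # Toy version: the detail bands are dropped entirely.
    return pywt.idwt2((LL_rec, (None, None, None)), wavelet)

img = np.random.default_rng(5).random((128, 128))
print(hybrid_roundtrip(img).shape)
```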

  3. Correlation between compressive strength and ultrasonic pulse velocity of high strength concrete incorporating chopped basalt fibre

    NASA Astrophysics Data System (ADS)

    Shafiq, Nasir; Fadhilnuruddin, Muhd; Elshekh, Ali Elheber Ahmed; Fathi, Ahmed

    2015-07-01

    Ultrasonic pulse velocity (UPV) is considered the most important non-destructive test used to evaluate the mechanical characteristics of high strength concrete (HSC). The relationship between the compressive strength of HSC containing chopped basalt fibre strands (CBSF) and UPV was investigated. The concrete specimens were prepared using different ratios of CBSF as internal strengthening materials. The compressive strength measurements were conducted at sample ages of 3, 7, 28, 56 and 90 days, whilst the ultrasonic pulse velocity was measured at 28 days. The compressive strength of HSC with chopped basalt fibre did not show any improvement; instead, it decreased. The UPV of the chopped basalt fibre reinforced concrete was found to be less than that of the control mix for each addition ratio of the basalt fibre. A relationship plot was obtained between the cube compressive strength of HSC and UPV for various amounts of chopped basalt fibres.

  4. Comparison of reversible methods for data compression

    NASA Astrophysics Data System (ADS)

    Heer, Volker K.; Reinfelder, Hans-Erich

    1990-07-01

    Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we briefly review various methods and discuss their relevant advantages and disadvantages. In detail we evaluate 1st-order DPCM, pyramid transformation and S transformation. As coding algorithms we compare both fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors we take into account the CPU time required and the main memory requirement, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.

  5. The Relationship between Self-Esteem and Academic Achievement in a Group of High, Medium, and Low Secondary Public High School Achievers.

    ERIC Educational Resources Information Center

    Thomas-Brantley, Betty J.

    This study investigated the relationship between self-esteem and academic achievement in a group of 150 high, medium, and low achievers at a large midwestern public high school. Correlating data from the Coopersmith Inventory of self-esteem with grades, cumulative grade point averages, and class rank, the study disclosed a positive correlation…

  6. Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems.

    PubMed

    Kulkarni, Ajit C

    2017-01-01

    Chennai metropolis has been growing rapidly, and the need was felt for a metro rail system. Two corridors were planned: Corridor 1, of 23 km, from Washermanpet to the Airport, of which 14.3 km would be underground; and Corridor 2, of 22 km, from Chennai Central Railway Station to St. Thomas Mount, of which 9.7 km would be underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air; planning and execution of compression and decompression; health monitoring; and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, which were treated with recompression. Vigilant medical supervision, experienced lock operators and reduced working hours under pressure because of inclement environmental conditions (viz. high temperature and humidity) helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. Indian standard IS 4138-1977, "Safety code for working in compressed air", needs to be updated urgently to keep pace with modern working methods.

  7. Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems

    PubMed Central

    Kulkarni, Ajit C.

    2017-01-01

    Chennai metropolis has been growing rapidly, and the need was felt for a metro rail system. Two corridors were planned: Corridor 1, of 23 km, from Washermanpet to the Airport, of which 14.3 km would be underground; and Corridor 2, of 22 km, from Chennai Central Railway Station to St. Thomas Mount, of which 9.7 km would be underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air; planning and execution of compression and decompression; health monitoring; and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, which were treated with recompression. Vigilant medical supervision, experienced lock operators and reduced working hours under pressure because of inclement environmental conditions (viz. high temperature and humidity) helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. Indian standard IS 4138-1977, "Safety code for working in compressed air", needs to be updated urgently to keep pace with modern working methods. PMID:29618908

  8. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
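
    The fidelity criterion quoted above (RMS error of roughly one or two gray levels) is straightforward to reproduce with a generic block-DCT round trip; the uniform quantizer below is cruder than the project's codecs and serves only to show how such numbers are measured:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_roundtrip_rmse(img, q=8):
    """8x8 block DCT, uniform quantization (the lossy step), inverse DCT;
    returns reconstruction RMSE in gray levels. For an orthonormal DCT the
    RMSE is about q/sqrt(12), i.e. roughly 2 gray levels for q=8."""
    h, w = (s - s % 8 for s in img.shape)
    src = img[:h, :w].astype(float)
    rec = np.empty_like(src)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = dctn(src[r:r+8, c:c+8], norm='ortho')
            block = np.round(block / q) * q
            rec[r:r+8, c:c+8] = idctn(block, norm='ortho')
    return np.sqrt(np.mean((rec - src) ** 2))

img = np.random.default_rng(2).integers(0, 256, (64, 64))
print(dct_roundtrip_rmse(img, q=8))
```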

  9. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    It is a difficult job to gauge the liquid fuel mass in a tank on spacecraft under microgravity conditions. Without the presence of strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid part. All of these factors affect the measurement accuracy of liquid mass gauging, especially for a method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structure resonance, liquid sloshing, transducer resonance and bubble resonance. Ground experimental apparatus was designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels, but low compression frequencies are the better choice for high fill levels. Liquid sloshing causes the measurement accuracy to degrade when the surface is excited into waves by external disturbance at the liquid natural frequency. The measurement accuracy is still acceptable under small-amplitude vibration.
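
    Schematically, CMG infers the gas volume from the pressure response to a small imposed volume oscillation; assuming the ullage gas responds adiabatically (my simplification for illustration, not the paper's derivation):

$$ \Delta P \;\approx\; \gamma\,\frac{P_0}{V_{\mathrm{gas}}}\,\Delta V \quad\Longrightarrow\quad V_{\mathrm{gas}} \;=\; \gamma\,\frac{P_0\,\Delta V}{\Delta P}, \qquad m_{\mathrm{liquid}} \;=\; \rho_{\mathrm{liquid}}\,\bigl(V_{\mathrm{tank}} - V_{\mathrm{gas}}\bigr), $$

    so a stiffer pressure response means less gas and hence more liquid. The four resonance sources listed above matter because each can distort ΔP at particular drive frequencies and bias the inferred gas volume, which is why the compression frequency must be chosen away from them.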

  10. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content are, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
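
    One common way such JPEG backward compatibility is achieved (the paper's specific architecture may differ in detail) is a two-layer design: legacy decoders see only the 8-bit tone-mapped base image, while an HDR-aware decoder also reads a log-ratio residual, stored for instance in a JPEG APP marker segment, and inverts it:

```python
import numpy as np

def encode_layers(hdr, tonemap):
    """Backward-compatible layering: `base` is an ordinary 8-bit image
    (JPEG-encodable as usual); `ratio` is the extra HDR layer.
    `tonemap` is any HDR->uint8 operator; marker packaging is omitted."""
    base = tonemap(hdr)
    ratio = np.log((hdr + 1e-6) / (base / 255.0 + 1e-6))
    return base, ratio

def decode_hdr(base, ratio):
    return (base / 255.0) * np.exp(ratio)    # recombine the two layers

# Toy global tone-mapper (Reinhard-style curve, floored so base > 0):
tm = lambda x: np.uint8(np.clip(255.0 * x / (1.0 + x), 1, 255))
hdr = np.random.default_rng(3).gamma(2.0, 1.0, (32, 32))
base, ratio = encode_layers(hdr, tm)
print(np.allclose(decode_hdr(base, ratio), hdr, rtol=1e-3, atol=1e-5))
```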

  11. Review of "High-Achieving Students in the Era of NCLB"

    ERIC Educational Resources Information Center

    Camilli, Gregory

    2008-01-01

    A recent report from the Fordham Institute considers potential instructional policies for high-achieving students that should be considered in the forthcoming reauthorization of the No Child Left Behind Act. The report finds: 1) achievement growth among high-achieving students has been slower than that of low-achieving students; 2) this trend can…

  12. Thermo-electrochemical production of compressed hydrogen from methane with near-zero energy loss

    NASA Astrophysics Data System (ADS)

    Malerød-Fjeld, Harald; Clark, Daniel; Yuste-Tirados, Irene; Zanón, Raquel; Catalán-Martinez, David; Beeaff, Dustin; Morejudo, Selene H.; Vestre, Per K.; Norby, Truls; Haugsrud, Reidar; Serra, José M.; Kjølseth, Christian

    2017-11-01

    Conventional production of hydrogen requires large industrial plants to minimize energy losses and capital costs associated with steam reforming, water-gas shift, product separation and compression. Here we present a protonic membrane reformer (PMR) that produces high-purity hydrogen from steam methane reforming in a single-stage process with near-zero energy loss. We use a BaZrO3-based proton-conducting electrolyte deposited as a dense film on a porous Ni composite electrode with dual function as a reforming catalyst. At 800 °C, we achieve full methane conversion by removing 99% of the formed hydrogen, which is simultaneously compressed electrochemically up to 50 bar. A thermally balanced operation regime is achieved by coupling several thermo-chemical processes. Modelling of a small-scale (10 kg H2 day⁻¹) hydrogen plant reveals an overall energy efficiency of >87%. The results suggest that future declining electricity prices could make PMRs a competitive alternative for industrial-scale hydrogen plants integrating CO2 capture.
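
    The electrochemical compression step has a simple thermodynamic floor: pumping protons against a hydrogen pressure ratio costs at least the Nernst voltage (a standard relation for electrochemical hydrogen compression, not a number taken from the paper):

$$ \Delta E \;=\; \frac{RT}{2F}\,\ln\frac{p_2}{p_1}, $$

    which at 800 °C is roughly 0.046 V per e-fold of pressure, i.e. about 0.18 V to go from ~1 bar to 50 bar, helping explain why in-cell compression integrates so efficiently with the reformer.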

  13. High-order ENO schemes applied to two- and three-dimensional compressible flow

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley

    1991-01-01

    High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
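
    The stencil-selection idea at the heart of ENO can be shown in a second-order, one-dimensional sketch (the schemes in the paper are higher-order, multidimensional, and coupled to shock-capturing fluxes; none of that is reproduced here):

```python
import numpy as np

def eno2_interface_values(u):
    """Second-order ENO reconstruction of cell-interface values: for each
    interior cell, choose the one-sided slope with the smaller magnitude,
    so the stencil never reaches across a discontinuity."""
    left_slope = u[1:-1] - u[:-2]      # backward difference
    right_slope = u[2:] - u[1:-1]      # forward difference
    slope = np.where(np.abs(left_slope) < np.abs(right_slope),
                     left_slope, right_slope)
    return u[1:-1] + 0.5 * slope       # value at each cell's right face

u = np.where(np.arange(32) < 16, 1.0, 0.0)   # a discretized "shock"
print(eno2_interface_values(u))              # no over- or undershoot at the jump
```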

  14. Novel Use of a Pneumatic Compression Device for Haemostasis of Haemodialysis Fistula Access Catheterisation Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Reilly, Michael K., E-mail: moreilly1@mater.ie; Ryan, David; Sugrue, Gavin

    Purpose: Transradial pneumatic compression devices can be used to achieve haemostasis following radial artery puncture. This article describes a novel technique for acquiring haemostasis of arterio-venous haemodialysis fistula access sites without the need for suture placement, using one such compression device. Materials and Methods: A retrospective review of fistulograms with or without angioplasty/thrombectomy in a single institution was performed. 20 procedures performed on 12 patients who underwent percutaneous intervention of failing or thrombosed arterio-venous fistulas (AVF) had 27 puncture sites. Haemostasis was achieved using a pneumatic compression device at all access sites. Procedure details including size of access sheath, heparin administration and complications were recorded. Results: Two diagnostic fistulograms, 14 fistulograms and angioplasties and four thrombectomies were performed via access sheaths with an average size (±SD) of 6 Fr (±1.12). IV unfractionated heparin was administered in 11 of 20 procedures. Haemostasis was achieved in 26 of 27 access sites following 15-20 min of compression using the pneumatic compression device. One case experienced limited bleeding from an inflow access site that was successfully treated with reinflation of the device for a further 5 min. No other complication was recorded. Conclusions: Haemostasis of arterio-venous haemodialysis fistula access sites can be safely and effectively achieved using a pneumatic compression device. This is a technically simple, safe and sutureless technique for acquiring haemostasis after AVF intervention.

  15. Ignition and combustion: Low compression ratio, high output diesel

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The feasibility of converting a spark-ignition aircraft engine, the GTSIO-520, to compression ignition without increasing the peak combustion pressure of 1100 lbs/sq.in. was determined. The final configuration contemplated utilized intake-air heating at idle and light load and a compression ratio of about 10:1, with a small amount of fumigation (the addition of about 15% of the fuel into the combustion air before the cylinder). The engine used was a modification of a Continental-Teledyne gasoline engine cylinder from the GTSIO-520 supercharged aircraft engine.

  16. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    PubMed

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal the dynamics of a single stock without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.

  17. Single Stock Dynamics on High-Frequency Data: From a Compressed Coding Perspective

    PubMed Central

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal the dynamics of a single stock without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  18. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  19. Nonlinear compression of temporal solitons in an optical waveguide via inverse engineering

    NASA Astrophysics Data System (ADS)

    Paul, Koushik; Sarma, Amarendra K.

    2018-03-01

    We propose a novel method based on the so-called shortcut-to-adiabatic passage techniques to achieve fast compression of temporal solitons in a nonlinear waveguide. We demonstrate that soliton compression could be achieved, in principle, at an arbitrarily small distance by inverse-engineering the pulse width and the nonlinearity of the medium. The proposed scheme could possibly be exploited for various short-distance communication protocols, and maybe even in nonlinear guided wave-optics devices and the generation of ultrashort soliton pulses.

  20. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  1. Single fraction spine radiosurgery for myeloma epidural spinal cord compression.

    PubMed

    Jin, Ryan; Rock, Jack; Jin, Jian-Yue; Janakiraman, Nalini; Kim, Jae Ho; Movsas, Benjamin; Ryu, Samuel

    2009-01-01

    Radiosurgery delivers highly focused radiation beams to the defined target with high precision and accuracy. It has been demonstrated that spine radiosurgery can be safely used for treatment of spine metastasis with rapid and durable pain control, but without detrimental effects to the spinal cord. This study was carried out to determine the role of single fraction radiosurgery for epidural spinal cord compression due to multiple myeloma. A total of 31 lesions in 24 patients with multiple myeloma, who presented with epidural spinal cord compression, were treated with spine radiosurgery. Single fraction radiation dose of 10-18 Gy (median of 16 Gy) was administered to the involved spine including the epidural or paraspinal tumor. Patients were followed up with clinical exams and imaging studies. Median follow-up was 11.2 months (range 1-55). Primary endpoints of this study were pain control, neurological improvement, and radiographic tumor control. Overall pain control rate was 86%; complete relief in 54%, and partial relief in 32% of the patients. Seven patients presented with neurological deficits. Five patients neurologically improved or became normal after radiosurgery. Complete radiographic response of the epidural tumor was noted in 81% at 3 months after radiosurgery. During the follow-up time, there was no radiographic or neurological progression at the treated spine. The treatment was non-invasive and well tolerated. Single fraction radiosurgery achieved an excellent clinical and radiographic response of myeloma epidural spinal cord compression. Radiosurgery can be a viable treatment option for myeloma epidural compression.

  2. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  3. High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling

    PubMed Central

    Sung, Kyunghyun; Hargreaves, Brian A

    2013-01-01

    Purpose: To present and validate a new method that formalizes a direct link between k-space and wavelet domains to apply separate undersampling and reconstruction for high- and low-spatial-frequency k-space data. Theory and Methods: High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions, while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results: Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion: The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540
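
    The split sampling strategy can be made concrete with a short sketch. The Python/NumPy fragment below builds the two Fourier undersampling masks the abstract describes: regular undersampling over a central low-spatial-frequency block (the structured pattern parallel imaging expects) and incoherent random undersampling over the high-spatial-frequency remainder (the pattern CS expects). Matrix size, block fraction and sampling rates are invented for illustration and are not the paper's protocol parameters.

      import numpy as np

      def quadruplet_masks(ny=256, nz=256, low_frac=0.25, r_low=2,
                           p_high=0.06, seed=0):
          # Hypothetical mask generator; all sizes and rates are assumptions.
          rng = np.random.default_rng(seed)
          cy, cz = ny // 2, nz // 2
          hy, hz = int(ny * low_frac) // 2, int(nz * low_frac) // 2
          low = np.zeros((ny, nz), dtype=bool)
          # Regular undersampling (every r_low-th phase-encode line) in the
          # central low-frequency block.
          low[cy - hy:cy + hy:r_low, cz - hz:cz + hz] = True
          # Random (incoherent) undersampling outside the central block.
          high = rng.random((ny, nz)) < p_high
          high[cy - hy:cy + hy, cz - hz:cz + hz] = False
          return low, high

      low, high = quadruplet_masks()
      mask = low | high
      print("net acceleration ~ %.1f" % (mask.size / mask.sum()))

    With these toy settings, the printed net acceleration lands near the 11 to 12 reported in the abstract.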

  4. Effect of compressive force on PEM fuel cell performance

    NASA Astrophysics Data System (ADS)

    MacDonald, Colin Stephen

    Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to widespread PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field toward achieving this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling by allowing for validation of computational schemes. Compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects. The force can affect the ability to deliver reactants and the electrochemical functions of the cell, by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression, in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) Sealing and surface contact; (2) Pressure drop across the flow channel; (3) Porosity of the GDL. Each factor was analysed independently in order to determine the individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in

  5. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000 performs better than AVC, the state of the art video compression standard.

  6. Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music.

    PubMed

    Moore, Brian C J

    2012-09-01

    This article reviews a series of studies on the factors influencing sound quality preferences, mostly for jazz and classical music stimuli. The data were obtained using ratings of individual stimuli or using the method of paired comparisons. For normal-hearing participants, the highest ratings of sound quality were obtained when the reproduction bandwidth was wide (55 to 16000 Hz) and ripples in the frequency response were small (less than ± 5 dB). For hearing-impaired participants listening via a simulated five-channel compression hearing aid with gains set using the CAM2 fitting method, preferences for upper cutoff frequency varied across participants: Some preferred a 7.5- or 10-kHz upper cutoff frequency over a 5-kHz cutoff frequency, and some showed the opposite preference. Preferences for a higher upper cutoff frequency were associated with a shallow high-frequency slope of the audiogram. A subsequent study comparing the CAM2 and NAL-NL2 fitting methods, with gains slightly reduced for participants who were not experienced hearing aid users, showed a consistent preference for CAM2. Since the two methods differ mainly in the gain applied for frequencies above 4 kHz (CAM2 recommending higher gain than NAL-NL2), these results suggest that extending the upper cutoff frequency is beneficial. A system for reducing "overshoot" effects produced by compression gave small but significant benefits for sound quality of a percussion instrument (xylophone). For a high-input level (80 dB SPL), slow compression was preferred over fast compression.

  7. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated by compressing and decompressing the test signals. The proposed algorithm was compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
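
    For readers who want the pipeline in code, the following NumPy sketch mirrors the three stages the abstract names: a DWT (a plain Haar transform stands in for the paper's wavelet), thresholding to a desired energy packing efficiency (EPE), and a run-length-coded binary significance map. The preprocessing and coefficient quantization details are omitted, and every function here is a schematic stand-in rather than the authors' implementation.

      import numpy as np

      def haar_dwt(x, levels=4):
          # Stand-in for the paper's DWT: multi-level Haar transform.
          coeffs, approx = [], np.asarray(x, dtype=float)
          for _ in range(levels):
              a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
              d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
              coeffs.append(d)
              approx = a
          coeffs.append(approx)
          return np.concatenate(coeffs[::-1])

      def threshold_by_epe(c, epe=0.999):
          # Keep the largest coefficients carrying `epe` of the total energy.
          order = np.argsort(np.abs(c))[::-1]
          cum = np.cumsum(c[order] ** 2) / np.sum(c ** 2)
          keep = order[: np.searchsorted(cum, epe) + 1]
          sig = np.zeros(c.size, dtype=np.uint8)   # binary significance map
          sig[keep] = 1
          return sig, c[keep]

      def run_lengths(sig):
          # Run-length encode the significance map.
          edges = np.flatnonzero(np.diff(sig.astype(int))) + 1
          bounds = np.concatenate(([0], edges, [sig.size]))
          return sig[0], np.diff(bounds)

      ecg = np.sin(np.linspace(0, 40 * np.pi, 1024)) ** 7   # toy quasi-ECG signal
      sig, kept = threshold_by_epe(haar_dwt(ecg))
      first_bit, runs = run_lengths(sig)
      print(kept.size, "significant coefficients,", runs.size, "runs")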

  8. Metal Hydride Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Terry A.; Bowman, Robert; Smith, Barton

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility to utilize waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase, and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a feed pressure

  9. GPU Lossless Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA. The GPU implementation on an NVIDIA GeForce GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec), an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single-core Intel Xeon processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will in the future provide a fast and practical real-time solution for airborne and space applications.
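
    The abstract does not spell out the FL predictor, so the sketch below shows only the generic predict-then-entropy-code structure such compressors share: each spectral band is predicted from the previous one and the low-entropy residuals are coded, with zlib standing in for the real entropy coder. This is a CPU-side schematic, not the CCSDS/FL algorithm and not a GPU kernel.

      import zlib
      import numpy as np

      def predictive_compress(cube):
          # cube: (bands, rows, cols). Previous-band prediction is a crude
          # stand-in for FL's adaptive filter.
          prediction = np.zeros_like(cube)
          prediction[1:] = cube[:-1]
          residual = (cube.astype(np.int32) - prediction).astype(np.int16)
          return zlib.compress(residual.tobytes(), 6)

      cube = (np.random.rand(8, 64, 64) * 50).astype(np.uint16)
      cube += np.arange(8, dtype=np.uint16)[:, None, None] * 100  # band-to-band correlation
      stream = predictive_compress(cube)
      print("rate: %.2f bits/sample" % (8 * len(stream) / cube.size))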

  10. Compressed Speech: Potential Application for Air Force Technical Training. Final Report, August 73-November 73.

    ERIC Educational Resources Information Center

    Dailey, K. Anne

    Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…

  11. Lattice Anharmonicity and Thermal Conductivity from Compressive Sensing of First-Principles Calculations

    DOE PAGES

    Zhou, Fei; Nielson, Weston; Xia, Yi; ...

    2014-10-27

    First-principles prediction of the lattice thermal conductivity K_L of strongly anharmonic crystals is a long-standing challenge in solid state physics. Using recent advances in information science, we propose a systematic and rigorous approach to this problem, compressive sensing lattice dynamics (CSLD). Compressive sensing is used to select the physically important terms in the lattice dynamics model and determine their values in one shot. Non-intuitively, high accuracy is achieved when the model is trained on first-principles forces in quasi-random atomic configurations. The method is demonstrated for Si, NaCl, and Cu12Sb4S13, an earth-abundant thermoelectric with strong phonon-phonon interactions that limit the room-temperature K_L to values near the amorphous limit.

  12. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
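
    The sequence-to-bitmap step is easy to picture with a toy encoder. The sketch below packs each base into two bits and reshapes the bit stream into a binary image; the 2-bit code and the image width are arbitrary assumptions, since the abstract does not specify CoGI's actual layout, and the rectangular partition coding stage is not shown.

      import numpy as np

      # Hypothetical 2-bit encoding per base (CoGI's real layout may differ).
      CODE = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

      def genome_to_bitmap(seq, width=64):
          # Map a DNA string to a two-dimensional binary image (bitmap).
          bits = np.array([b for base in seq for b in CODE.get(base, (0, 0))],
                          dtype=np.uint8)
          rows = -(-bits.size // width)          # ceiling division
          padded = np.zeros(rows * width, dtype=np.uint8)
          padded[:bits.size] = bits
          return padded.reshape(rows, width)

      bitmap = genome_to_bitmap("ACGTACGTTTGACGATTACA" * 40)
      print(bitmap.shape)   # 2D binary image, ready for a 2D image coder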

  13. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with the expert pathologists who conducted the evaluation of the compressed pathological images and the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  14. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
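
    The BTC stage the codec builds on is compact enough to sketch. The fragment below implements the absolute-moment BTC variant: each block is reduced to a one-bit-per-pixel bitmap plus two representative values. The paper's further steps (luminance-bitmap reduction, median-filter interpolation, and AQC of the RVs) are not reproduced, and the block size is an arbitrary choice.

      import numpy as np

      def btc_encode(img, bs=4):
          # Absolute-moment BTC: each bs x bs block becomes a binary bitmap
          # plus two representative values (low/high means).
          h, w = img.shape
          bitmaps = np.zeros_like(img, dtype=np.uint8)
          rvs = np.zeros((h // bs, w // bs, 2))
          for i in range(0, h, bs):
              for j in range(0, w, bs):
                  blk = img[i:i+bs, j:j+bs].astype(float)
                  bm = blk >= blk.mean()
                  lo = blk[~bm].mean() if (~bm).any() else blk.mean()
                  hi = blk[bm].mean() if bm.any() else blk.mean()
                  bitmaps[i:i+bs, j:j+bs] = bm
                  rvs[i // bs, j // bs] = (lo, hi)
          return bitmaps, rvs

      img = (np.random.rand(64, 64) * 255).astype(np.uint8)
      bitmaps, rvs = btc_encode(img)
      print(bitmaps.shape, rvs.shape)   # bitmap plane plus per-block RV pairs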

  15. Simulations of turbulent compressible flows using periodic boundary conditions: high fidelity on a budget

    NASA Astrophysics Data System (ADS)

    Beardsell, Guillaume; Blanquart, Guillaume

    2017-11-01

    In direct numerical simulations (DNS) of turbulent flows, it is often prohibitively expensive to simulate complete flow geometries. For example, to study turbulence-flame interactions, one cannot perform a DNS of a full combustor. Usually, a well-selected portion of the domain is chosen, in this particular case the region around the flame front. In this work, we perform a Reynolds decomposition of the velocity field and solve for the fluctuating part only. The resulting equations are the same as the original Navier-Stokes equations, except for turbulence-generating large scale features of the flow such as mean shear, which appear as forcing terms. This approach allows us to achieve high Reynolds numbers and sustained turbulence while keeping the computational cost reasonable. We have already applied this strategy to incompressible flows, but not to compressible ones, where special care has to be taken regarding the energy equation. Implementation of the resulting additional terms in the finite-difference code NGA is discussed and preliminary results are presented. In particular, we look at the budget of turbulent kinetic energy and internal energy. We are considering applying this technique to turbulent premixed flames.

  16. Academic attainment and the high school science experiences among high-achieving African American males

    NASA Astrophysics Data System (ADS)

    Trice, Rodney Nathaniel

    This study examines the educational experiences of high achieving African American males. More specifically, it analyzes the influences on their successful navigation through high school science. Through a series of interviews, observations, questionnaires, science portfolios, and review of existing data the researcher attempted to obtain a deeper understanding of high achieving African American males and their limitations to academic attainment and high school science experiences. The investigation is limited to ten high achieving African American male science students at Woodcrest High School. Woodcrest is situated at the cross section of a suburban and rural community located in the southeastern section of the United States. Although this investigation involves African American males, all of whom are successful in school, its findings should not be generalized to this nor any other group of students. The research question that guided this study is: What are the limitations to academic attainment and the high school science experiences of high achieving African American males? The student participants expose how suspension and expulsion, special education placement, academic tracking, science instruction, and teacher expectation influence academic achievement. The role parents play, student self-concept, peer relationships, and student learning styles are also analyzed. The anthology of data rendered three overarching themes: (1) unequal access to education, (2) maintenance of unfair educational structures, and (3) authentic characterizations of African American males. Often the policies and practices set in place by school officials aid in creating hurdles to academic achievement. These policies and practices are often formed without meaningful consideration of the unintended consequences that may affect different student populations, particularly the most vulnerable. The findings from this study expose that high achieving African American males face major

  17. Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.

    2017-06-01

    Multiple reverberation compression can achieve higher pressure and higher temperature, but lower entropy. It provides an important validation for elaborate, wide-ranging planetary models and simulates the inertial confinement fusion capsule implosion process. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initially dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g cm⁻³, and 4600-24000 K was measured. The optical radiation emanating from the WDM helium was recorded, and the particle velocity profiles detected at the sample/window interface were obtained successfully for up to 10 compressions. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, at a density of about 0.3 g cm⁻³ and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed via the particle velocity measurements. The multiple compression technique efficiently enhanced the density and the compressibility, and our multiple compression ratios (η_i = ρ_i/ρ_0, i = 1-10) of helium are greatly improved, from 3.5 to 43, based on the initial precompressed density (ρ_0). The relative compression ratio (η_i′ = ρ_i/ρ_(i-1)) increases with pressure in the lower-density regime and decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom, which increases the compressibility, and the repulsive interactions between the

  18. Hyper-spectral image compression algorithm based on mixing transform of wave band grouping to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents an algorithm based on a mixing transform with wave band grouping to eliminate spectral redundancy. The algorithm adapts to the differing correlation between different spectral images, and it still works well when the band number is not a power of 2. Using a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, employing the CDF(2,2) DWT to eliminate spatial redundancy, and using SPIHT+CABAC for compression coding, the experiments show that a satisfactory lossless compression result can be achieved. Using the hyper-spectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2, the lossless compression result of this algorithm is much better than the results achieved by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, taking 8, 16, and 32 respectively as the number of bands in one group, and considering factors like compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 as the number of bands included in one group to achieve a better compression effect. The algorithm of this paper also has advantages in operation speed and convenience of hardware realization.
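
    The CDF(2,2) wavelet at the heart of both decorrelation steps has a two-step lifting form that fits in a few lines. The sketch below performs one integer-to-integer level (the familiar 5/3 predict/update steps) with simple symmetric extension; the paper's non-boundary-extension variant, the subtraction mixing transform, and the SPIHT+CABAC coder are beyond this fragment.

      import numpy as np

      def cdf22_forward(x):
          # One lifting level of the CDF(2,2) (5/3) wavelet, integer-to-integer.
          # Assumes an even-length input; boundaries use symmetric extension.
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2], x[1::2]
          # Predict: detail = odd - floor((left even + right even) / 2)
          right = np.append(even[1:], even[-1])
          d = odd - ((even + right) >> 1)
          # Update: approx = even + floor((left d + right d + 2) / 4)
          left = np.insert(d[:-1], 0, d[0])
          s = even + ((left + d + 2) >> 2)
          return s, d

      s, d = cdf22_forward(np.arange(16))
      print(s, d)   # approximation and detail bands of a linear ramp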

  19. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    PubMed

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.

  20. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
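
    TLLC as defined here is a two-step pipeline, truncate then losslessly code, and a toy version is shown below with zlib as an assumed stand-in for the lossless coder. The 12-bit synthetic image and the bit-depth choices are invented for illustration; the paper's comparison point, applying a lossy coder directly at the same rate, is not reproduced.

      import zlib
      import numpy as np

      def tllc(pixels, dropped_bits, level=9):
          # Truncate: drop the least significant bits of every pixel.
          truncated = (pixels >> dropped_bits).astype(np.uint16)
          # Lossless compression of what remains (zlib as a generic stand-in).
          stream = zlib.compress(truncated.tobytes(), level)
          return 8.0 * len(stream) / pixels.size   # rate, bits per pixel

      t = np.linspace(0, 8 * np.pi, 512)
      img = ((np.sin(np.outer(t, t) / 40) + 1) * 2047).astype(np.uint16)  # 12-bit
      for k in (0, 2, 4):
          print("drop %d LSBs -> %.2f bits/pixel" % (k, tllc(img, k)))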

  1. The Instructional Effects of Diagrams and Time-Compressed Instruction on Student Achievement and Learners' Perceptions of Cognitive Load

    ERIC Educational Resources Information Center

    Pastore, Raymond S.

    2009-01-01

    The purpose of this study was to examine the effects of visual representations and time-compressed instruction on learning and learners' perceptions of cognitive load. Time-compressed instruction refers to instruction that has been increased in speed without sacrificing quality. It was anticipated that learners would be able to gain a conceptual…

  2. Setting Educational Priorities: High Achievers Speak Out. White Paper.

    ERIC Educational Resources Information Center

    Dickeson, Robert C.

    Noting that high achieving Indiana high school students can provide important insights into the educational system in the state, this study examined the opinions of recipients of Ameritchieve recognition, National Merit finalists, African-American students who were National Achievement finalists, and national Hispanic Scholar finalists, all from…

  3. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    PubMed Central

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment with a stationary sink collecting from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that the selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
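
    On a toy scale, the decision problem the abstract formulates can be imitated by exhaustively searching the discrete choices. The sketch below does that for two hypothetical nodes, minimizing CPU-plus-radio energy under a latency budget; all node data, the energy constant, and the per-node latency model are invented, and the paper's actual binary integer programs encode network-wide collection costs rather than this simplified objective.

      import itertools

      # Per node: (cpu energy mJ, bytes to transmit, added latency ms) for
      # option 0 (no compression) and two hypothetical compression algorithms.
      nodes = {
          "n1": [(0.0, 1000, 0), (4.0, 420, 12), (9.0, 300, 30)],
          "n2": [(0.0,  800, 0), (3.5, 350, 10), (8.0, 260, 25)],
      }
      E_TX_PER_BYTE = 0.02   # assumed radio energy per byte (mJ)
      LATENCY_BUDGET = 40    # ms

      best = None
      for choice in itertools.product(*(range(len(v)) for v in nodes.values())):
          energy, latency = 0.0, 0
          for opts, c in zip(nodes.values(), choice):
              e_cpu, nbytes, dt = opts[c]
              energy += e_cpu + nbytes * E_TX_PER_BYTE
              latency = max(latency, dt)
          if latency <= LATENCY_BUDGET and (best is None or energy < best[0]):
              best = (energy, choice)
      print("optimal energy %.1f mJ with options %s" % best)

    With these toy numbers, compressing at both nodes wins, which is the same qualitative outcome the paper reports for its optimized collection.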

  4. Effect of rice husk ash and fly ash on the compressive strength of high performance concrete

    NASA Astrophysics Data System (ADS)

    Van Lam, Tang; Bulgakov, Boris; Aleksandrova, Olga; Larsen, Oksana; Anh, Pham Ngoc

    2018-03-01

    The usage of industrial and agricultural wastes for building materials production plays an important role in improving the environment and the economy by preserving natural materials and land resources; reducing land, water and air pollution; and lowering waste handling and storage costs. This study mainly focuses on mathematically modeling the dependence of the compressive strength of high performance concrete (HPC) at the ages of 3, 7 and 28 days on the amounts of rice husk ash (RHA) and fly ash (FA) added to the concrete mixtures, using a central composite rotatable design. The study provides the second-order regression equations of the objective functions, the response-surface plots and corresponding contours of the regression equations, and the optimal points of HPC compressive strength. These objective functions, which are the compressive strength values of HPC at the ages of 3, 7 and 28 days, depend on two input variables: x1 (amount of RHA) and x2 (amount of FA). Solving the second-order regression equation with the Maple 13 program determines the optimum composition of the concrete mixture for obtaining high performance concrete and the maximum value of the HPC compressive strength at the age of 28 days. The maximum 28-day compressive strength is R28(HPC) = 76.716 MPa, obtained when RHA = 0.1251 and FA = 0.3119 by mass of Portland cement.
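
    The second-order model itself is an ordinary quadratic response surface, which the sketch below fits by least squares. The seven (x1, x2, strength) points are invented stand-ins, since the paper's central composite design data are not reproduced in the abstract, so only the fitting mechanics are illustrated.

      import numpy as np

      # Hypothetical (x1, x2) = (RHA, FA) mass fractions and 28-day strengths.
      x1 = np.array([0.05, 0.05, 0.20, 0.20, 0.125, 0.125, 0.125])
      x2 = np.array([0.20, 0.40, 0.20, 0.40, 0.31, 0.25, 0.37])
      y  = np.array([68.1, 70.3, 69.5, 71.2, 76.7, 73.0, 74.1])  # MPa, invented

      # Second-order model:
      # y = b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2
      X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted coefficients:", np.round(beta, 2))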

  5. Detailed thermodynamic analyses of high-speed compressible turbulence

    NASA Astrophysics Data System (ADS)

    Towery, Colin; Darragh, Ryan; Poludnenko, Alexei; Hamlington, Peter

    2016-11-01

    Interactions between high-speed turbulence and flames (or chemical reactions) are important in the dynamics and description of many different combustion phenomena, including autoignition and deflagration-to-detonation transition. The probability of these phenomena occurring depends on the magnitude and spectral content of turbulence fluctuations, which can impact a wide range of science and engineering problems, from the hypersonic scramjet engine to the onset of Type Ia supernovae. In this talk, we present results from new direct numerical simulations (DNS) of homogeneous isotropic turbulence with turbulence Mach numbers ranging from 0.05 to 1.0 and Taylor-scale Reynolds numbers as high as 700. A set of detailed analyses are described in both Eulerian and Lagrangian reference frames in order to assess coherent (structural) and incoherent (stochastic) thermodynamic flow features. These analyses provide direct insights into the thermodynamics of strongly compressible turbulence. Furthermore, the presented results provide a non-reacting baseline for future studies of turbulence-chemistry interactions in DNS with complex chemistry mechanisms. This work was supported by the Air Force Office of Scientific Research (AFOSR) under Award No. FA9550-14-1-0273, and the Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) under a Frontier project award.

  6. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by an image. One approach is to apply singular value decomposition (SVD) on the image matrix. In this method, the digital image is given to SVD, which refactors the image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary matrix of m × n size, reversible or non-reversible. Compression ratio and mean square error are used as performance metrics.
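
    A rank-k SVD approximation takes only a few NumPy lines, and the sketch below also reports the two metrics the abstract names: storage ratio and mean square error. The synthetic image and the values of k are arbitrary.

      import numpy as np

      def svd_compress(img, k):
          # Rank-k SVD approximation of an image matrix.
          U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
          approx = U[:, :k] * s[:k] @ Vt[:k]
          stored = k * (img.shape[0] + img.shape[1] + 1)  # values kept
          mse = np.mean((img - approx) ** 2)
          return approx, img.size / stored, mse

      img = np.add.outer(np.sin(np.linspace(0, 3, 256)),
                         np.cos(np.linspace(0, 5, 256)))
      for k in (5, 20, 50):
          _, ratio, mse = svd_compress(img, k)
          print(f"k={k}: compression ratio {ratio:.1f}, MSE {mse:.2e}")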

  7. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  8. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  9. Strain-rate/temperature behavior of high density polyethylene in compression

    NASA Technical Reports Server (NTRS)

    Clements, L. L.; Sherby, O. D.

    1978-01-01

    The compressive strain rate/temperature behavior of highly linear, high density polyethylene was analyzed in terms of the predictive relations developed for metals and other crystalline materials. For strains of 5 percent and above, the relationship between the applied strain rate ε̇ and the resulting flow stress σ was found to be ε̇ · exp(Q_f/RT) = k′ (σ/σ_c)^n, where the left-hand side is the activation-energy-compensated strain rate, Q_f is the activation energy for flow, R is the gas constant, and T is temperature; k′ is a constant, n is a temperature-independent stress exponent, and σ/σ_c is the structure-compensated stress. A master curve resulted from a logarithmic plot of activation-energy-compensated strain rate versus structure-compensated stress.

  10. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
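
    The lossless-by-residual construction generalizes beyond wavelets and fractals, and a toy version makes the idea plain. In the sketch below, a uniform quantizer stands in for the wavelet-fractal coder and zlib for the Huffman stage; the point is only that the lossy output plus a coded residual reconstructs the original exactly, which is why the PSNR is infinite.

      import zlib
      import numpy as np

      def lossless_hybrid(img, q=16):
          # Any lossy codec could replace this uniform quantizer.
          lossy = (img // q) * q + q // 2
          residual = img.astype(np.int16) - lossy   # low-entropy residual
          stream = zlib.compress(residual.astype(np.int8).tobytes(), 9)
          return lossy, stream

      def decode(lossy, stream):
          residual = np.frombuffer(zlib.decompress(stream), dtype=np.int8)
          return (lossy.astype(np.int16) +
                  residual.reshape(lossy.shape)).astype(np.uint8)

      img = (np.add.outer(np.arange(128), np.arange(128)) % 256).astype(np.uint8)
      lossy, stream = lossless_hybrid(img)
      assert np.array_equal(decode(lossy, stream), img)  # exact: infinite PSNR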

  11. Benchmark Dataset for Whole Genome Sequence Compression.

    PubMed

    Biji, C. L.; Nair, Achuthsankar S.

    2017-01-01

    Research in DNA data compression lacks a standard dataset on which to test compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of a scientifically compiled whole genome sequence dataset, and proposes a benchmark dataset constructed using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only in a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available at https://sourceforge.net/projects/benchmarkdnacompressiondataset/.

  12. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining acceptable quality. Several image compression techniques have been developed in recent years, and we note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years the neural network has emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields a good coding performance; however, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation algorithm (BEP) to quickly obtain initial weights, which are then used to speed up the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.

  13. Compression strategies for LiDAR waveform cube

    NASA Astrophysics Data System (ADS)

    Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota

    2015-01-01

    Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of surveyed areas. Unlike discrete data that retain only a few strong returns, FWD generally keep the whole signal, at all times, regardless of signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, particularly in the much-desired classification of data in raw form. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge to wide use: much larger datasets compared to those from classical discrete-return systems. On top of the need for more storage space, the acquisition speed of FWD may also limit the pulse rate on systems that cannot store data fast enough, and thus reduce the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery that are further tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, such as hyper-spectral cubes or 3D volumetric tomography scans. This study presents a performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets has demonstrated the benefits of the JPEG-2000 Standard, where high compression rates incur fairly small data degradation. In

  14. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
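
    The capture itself reduces to a linear forward model: each stored frame is a shutter-coded sum over time, and reconstruction inverts that system per pixel. The sketch below mirrors the paper's 32-frames-from-15-exposures setting with random 0/1 codes; the pseudo-inverse is a bare-bones stand-in for the regularized, disparity-corrected reconstruction the sensor actually requires, and all dimensions are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      T, N = 32, 15                     # time bins compressed into N exposures
      x = rng.random((T, 8, 8))         # unknown time-resolved scene (toy size)

      S = rng.integers(0, 2, (N, T))    # per-capture random shutter codes
      y = np.tensordot(S, x, axes=1)    # N captured (compressed) images

      # Least-squares recovery per pixel. The system is underdetermined
      # (N < T), so real reconstructions add priors; pinv gives only the
      # minimum-norm solution.
      x_hat = np.tensordot(np.linalg.pinv(S), y, axes=1)
      print(x_hat.shape)                # (32, 8, 8) reconstructed frames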

  15. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches for condition monitoring of roller bearings almost always operate under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse, and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoting method, the tunable Q-factor wavelet transform, is utilized in this work to decompose the analyzed signals into transient impact components and high-oscillation components. The former become sparser than the raw signals, with noise eliminated, whereas the latter retain the noise. Thus, the decomposed transient impact components replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction, meaning that reconstruction can stop once the components at the frequencies of interest are detected, so fault diagnosis is achieved during the reconstruction procedure. The application cases prove that CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063
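
    The CS step can be illustrated compactly. The sketch below is a generic compressed-sensing example, not the authors' implementation: a synthetic sparse impact-like signal is measured with a random Gaussian matrix and recovered by iterative soft thresholding (ISTA); the sizes and the regularization weight lam are illustrative choices.

        # Generic CS sketch: sparse signal, Gaussian measurements, ISTA recovery.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 512, 128, 8                      # signal length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse "impacts"

        Phi = rng.normal(size=(m, n)) / np.sqrt(m) # random Gaussian measurement matrix
        y = Phi @ x                                # compressed samples

        lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
        z = np.zeros(n)
        for _ in range(500):                       # ISTA iterations
            g = z + step * Phi.T @ (y - Phi @ z)
            z = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)

        print("relative recovery error:", np.linalg.norm(z - x) / np.linalg.norm(x))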

  16. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices that are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation as indices to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  17. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices that are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation as indices to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  18. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings; in communication, the aim is always to transmit data efficiently and noise-free. This paper presents several compression techniques for lossless text-type data and compares the results of multiple versus single compression, which helps identify the better compression output and supports the development of compression algorithms.
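
    A comparison in this spirit is easy to reproduce with standard-library codecs. The snippet below (the sample text and the choice of codecs are arbitrary, not the paper's test set) also shows the typical outcome of multi-compression, namely that a second pass rarely helps:

        # Compare lossless codecs on text and test single vs. double compression.
        import bz2
        import lzma
        import zlib

        text = b"Data compression is very necessary in business data processing. " * 200

        for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
            once = compress(text)
            twice = compress(once)                 # compressing already-compressed data
            print(f"{name}: raw={len(text)} once={len(once)} twice={len(twice)}")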

  19. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  20. High-Strain-Rate Compression Testing of Ice

    NASA Technical Reports Server (NTRS)

    Shazly, Mostafa; Prakash, Vikas; Lerch, Bradley A.

    2006-01-01

    In the present study a modified split Hopkinson pressure bar (SHPB) was employed to study the effect of strain rate on the dynamic material response of ice. Disk-shaped ice specimens with flat, parallel end faces were either provided by Dartmouth College (Hanover, NH) or grown at Case Western Reserve University (Cleveland, OH). The SHPB was adapted to perform tests at high strain rates in the range 60 to 1400/s at test temperatures of -10 and -30 C. Experimental results showed that the strength of ice increases with increasing strain rates and this occurs over a change in strain rate of five orders of magnitude. Under these strain rate conditions the ice microstructure has a slight influence on the strength, but it is much less than the influence it has under quasi-static loading conditions. End constraint and frictional effects do not influence the compression tests like they do at slower strain rates, and therefore the diameter/thickness ratio of the samples is not as critical. The strength of ice at high strain rates was found to increase with decreasing test temperatures. Ice has been identified as a potential source of debris to impact the shuttle; data presented in this report can be used to validate and/or develop material models for ice impact analyses for shuttle Return to Flight efforts.

  1. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    NASA Technical Reports Server (NTRS)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.; hide

    2015-01-01

    Plasma measurements in space are becoming increasingly fast, higher in resolution, and distributed over multiple instruments. As raw data generation rates can exceed the available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress the count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
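
    The core DWT/BPE idea can be sketched in a few lines. The following toy example is not the CCSDS 122.0 codec used on FPI; it applies a one-level 2D Haar transform to synthetic Poisson counts and drops low-order bit planes of the integer coefficients:

        # Toy DWT + bit-plane truncation, illustrating the concept only.
        import numpy as np

        def haar2d(a):
            """One-level 2D Haar transform of an even-sized integer array."""
            rows = np.hstack((a[:, 0::2] + a[:, 1::2], a[:, 0::2] - a[:, 1::2]))
            return np.vstack((rows[0::2, :] + rows[1::2, :],
                              rows[0::2, :] - rows[1::2, :])) // 2

        rng = np.random.default_rng(2)
        counts = rng.poisson(lam=5.0, size=(64, 64)).astype(np.int64)  # plasma-like counts

        coeffs = haar2d(counts)
        planes_dropped = 2
        quantized = (coeffs >> planes_dropped) << planes_dropped       # zero low bit planes
        print("nonzero coeffs kept:", np.count_nonzero(quantized), "of", coeffs.size)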

  2. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems because the volume of raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is essential. Lossless compression preserves all the information but has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale, or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework, which predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  3. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at an approximately equal compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
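
    The DCT-plus-quantization stage described above can be sketched as follows; the CSF-inspired weighting matrix is a made-up stand-in for the paper's three HVS-derived quantization matrices, and only a single 8×8 luma block is shown:

        # Block DCT with a frequency-weighted quantization matrix (illustrative).
        import numpy as np
        from scipy.fft import dctn, idctn

        def hvs_qmatrix(n=8, base=16.0, slope=6.0):
            """Toy quantization matrix: coarser steps at higher spatial frequency."""
            u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            return base + slope * (u + v)

        rng = np.random.default_rng(3)
        block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

        Q = hvs_qmatrix()
        coef = dctn(block, norm="ortho")
        q = np.round(coef / Q)                     # quantize (the lossy step)
        rec = idctn(q * Q, norm="ortho") + 128.0   # dequantize and invert

        print("max abs reconstruction error:", np.max(np.abs(rec - (block + 128.0))))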

  4. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.

  5. Compression and Transmission of RF Signals for Telediagnosis

    NASA Astrophysics Data System (ADS)

    Seko, Toshihiro; Doi, Motonori; Oshiro, Osamu; Chihara, Kunihiro

    2000-05-01

    Health care is a critical issue nowadays, and much emphasis is given to quality care for all people; telediagnosis has therefore attracted public attention. We propose a new method of ultrasound image transmission for telediagnosis. In conventional methods, video image signals are transmitted. In our method, the RF signals, which are acquired by an ultrasound probe, are transmitted. The RF signals can be transformed into color Doppler images or high-resolution images by the receiver. Because a stored form is adopted, the proposed system can be realized with existing technology such as the hypertext transfer protocol (HTTP) and the file transfer protocol (FTP). In this paper, we describe two lossless compression methods that specialize in the transmission of RF signals. One of the methods uses the characteristics of the RF signal; in the other, the amount of data is reduced. Measurements were performed in water, targeting an iron block and triangular Styrofoam, and an abdominal fat measurement was also performed. Our method achieved a compression rate of 13% on 8-bit data.

  6. X-ray Computed Tomography Imaging of the Microstructure of Sand Particles Subjected to High Pressure One-Dimensional Compression.

    PubMed

    Al Mahbub, Asheque; Haque, Asadul

    2016-11-03

    This paper presents the results of X-ray CT imaging of the microstructure of sand particles subjected to high pressure one-dimensional compression leading to particle crushing. A high resolution X-ray CT machine capable of in situ imaging was employed to capture images of the whole volume of a sand sample subjected to compressive stresses up to 79.3 MPa. Images of the whole sample obtained at different load stages were analysed using a commercial image processing software (Avizo) to reveal various microstructural properties, such as pore and particle volume distributions, spatial distribution of void ratios, relative breakage, and anisotropy of particles.

  7. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  8. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
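
    A common software analogue of the virtual-coil idea is SVD-based coil compression; the sketch below (synthetic data, arbitrary sizes, and conventional coil compression rather than the authors' shot-coil method) projects many physical coils onto a few dominant virtual coils:

        # Generic SVD-based coil compression on synthetic complex k-space data.
        import numpy as np

        rng = np.random.default_rng(4)
        n_coils, n_samples = 32, 4096
        data = (rng.normal(size=(n_coils, n_samples))
                + 1j * rng.normal(size=(n_coils, n_samples)))   # stacked k-space samples

        U, s, Vh = np.linalg.svd(data, full_matrices=False)
        n_virtual = 8
        compressed = U[:, :n_virtual].conj().T @ data           # virtual-coil data

        energy_kept = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
        print(f"{n_virtual} virtual coils retain {energy_kept:.1%} of signal energy")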

  9. Using phase contrast imaging to measure the properties of shock compressed aerogel

    NASA Astrophysics Data System (ADS)

    Hawreliak, James; Erskine, Dave; Schropp, Andres; Galtier, Eric C.; Heimann, Phil

    2017-01-01

    The Hugoniot states of low-density materials, such as silica aerogel, are used in high energy density physics research because they can achieve a range of high temperature and pressure states through shock compression. The shock properties of 100 mg/cc silica aerogel were studied at the Materials in Extreme Conditions end station using x-ray phase contrast imaging of spherically expanding shock waves. The shock waves were generated by focusing a high-power 532 nm laser to a 50 μm focal spot on a thin aluminum ablator. The shock speed was measured in separate experiments using line-VISAR measurements from the reflecting shock front. The relative timing between the x-ray probe and the optical laser pump was varied so that x-ray PCI images were taken at pressures between 10 GPa and 30 GPa. Modeling the compression of the foam in the strong shock limit requires a Gruneisen parameter of 0.49 to fit the data, rather than the value of 0.66 that would correspond to a plasma state.
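
    For context, the quoted Gruneisen parameters can be related to a standard strong-shock limit (a textbook relation, not taken from this paper): for a Mie-Gruneisen material with constant Γ, the limiting single-shock compression is

        \frac{\rho}{\rho_0}\bigg|_{\max} = \frac{\Gamma + 2}{\Gamma},

    so Γ = 0.49 implies a maximum compression of about 5.1, while Γ = 0.66 ≈ 2/3 gives 4.0, coinciding with the strong-shock limit (γ+1)/(γ−1) = 4 of an ideal-gas plasma with γ = 5/3.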

  10. Data Assimilation by Ensemble Kalman Filter during One-Dimensional Nonlinear Consolidation in Randomly Heterogeneous Highly Compressible Aquitards

    NASA Astrophysics Data System (ADS)

    Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.

    2017-12-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation in which the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), and void ratio (e). The correlation structure, mean, and variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that, among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore the largest uncertainty) of total settlement, groundwater flux, and time to reach steady-state conditions. We further propose a data assimilation scheme by means of an ensemble Kalman filter to estimate the ensemble mean distribution of K, pore pressure, and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate the ensemble mean distribution of K, pore pressure, and total settlement based on the sequential assimilation of these pore-pressure measurements. The ensemble-mean estimates from
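
    The analysis step of the ensemble Kalman filter invoked above has a standard generic form; the sketch below (synthetic sizes and observations, and a plain stochastic EnKF rather than the authors' exact configuration) updates an ensemble with noisy pointwise "pore-pressure" observations:

        # Generic stochastic EnKF analysis step on a synthetic state ensemble.
        import numpy as np

        rng = np.random.default_rng(5)
        n_state, n_obs, n_ens = 50, 5, 200

        ens = rng.normal(size=(n_state, n_ens))          # prior ensemble
        H = np.zeros((n_obs, n_state))                   # observe 5 state entries
        H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
        R = 0.1 * np.eye(n_obs)                          # observation error covariance

        y = rng.normal(size=n_obs)                       # observations
        A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
        S = H @ A                                        # observed anomalies
        C = (S @ S.T) / (n_ens - 1) + R
        K = (A @ S.T) / (n_ens - 1) @ np.linalg.inv(C)   # Kalman gain

        perturbed = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
        ens = ens + K @ (perturbed - H @ ens)            # stochastic EnKF update
        print("posterior ensemble mean shape:", ens.mean(axis=1).shape)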

  11. Optimum SNR data compression in hardware using an Eigencoil array.

    PubMed

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
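
    A software analogue of the Eigencoil construction is to diagonalize a measured noise covariance and combine channels along its eigenmodes; the sketch below (synthetic data, not the hardware radiofrequency combiner itself) whitens the channels before a sum-of-squares combination:

        # Eigenmode (noise-covariance) channel combination on synthetic data.
        import numpy as np

        rng = np.random.default_rng(8)
        n_ch = 8
        noise = rng.normal(size=(n_ch, 20_000))            # noise-only samples
        Rn = np.cov(noise)                                 # channel noise covariance

        w, V = np.linalg.eigh(Rn)
        whiten = np.diag(w ** -0.5) @ V.T                  # eigenmode decorrelation

        signal = rng.normal(size=(n_ch, 256))              # stand-in channel signals
        eigenchannels = whiten @ signal                    # "Eigencoil" outputs
        sos = np.sqrt(np.sum(eigenchannels ** 2, axis=0))  # sum-of-squares combine
        print("combined signal shape:", sos.shape)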

  12. Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods referred to as the ACM and wavelet filter schemes are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical and experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that even though the unsteadiness of the flows are rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.

  13. Dissipative processes under the shock compression of glass

    NASA Astrophysics Data System (ADS)

    Savinykh, A. S.; Kanel, G. I.; Cherepanov, I. A.; Razorenov, S. V.

    2016-03-01

    New experimental data on the behavior of the K8 and TF1 glasses under shock-wave loading conditions are obtained. It is found that the propagation of shock waves is close to self-similar in the maximum compression stress range of 4-12 GPa. Deviations from the general deformation diagram, which are related to viscous dissipation, take place when the final state of compression is approached. The parameter region in which failure waves form in glass is found not to be limited to the elastic compression stress range, as was previously thought. The failure front velocity increases with the shock compression stress. Outside the region covered by a failure wave, the glasses demonstrate a high dynamic tensile strength (6-7 GPa) in the case of elastic compression, and this strength remains very high after transition through the elastic limit in a compression wave.

  14. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have increasingly lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved up to 34% compared to GNU Gzip and 16

  15. Waste Heat Approximation for Understanding Dynamic Compression in Nature and Experiments

    NASA Astrophysics Data System (ADS)

    Jeanloz, R.

    2015-12-01

    Energy dissipated during dynamic compression quantifies the residual heat left in a planet due to impact and accretion, as well as the deviation of a loading path from an ideal isentrope. Waste heat ignores the difference between the pressure-volume isentrope and Hugoniot in approximating the dissipated energy as the area between the Rayleigh line and Hugoniot (assumed given by a linear dependence of shock velocity on particle velocity). Strength and phase transformations are ignored: justifiably, when considering sufficiently high dynamic pressures and reversible transformations. Waste heat mis-estimates the dissipated energy by less than 10-20 percent for volume compressions under 30-60 percent. Specific waste heat (energy per mass) reaches 0.2-0.3 c0^2 at impact velocities 2-4 times the zero-pressure bulk sound velocity (c0), its maximum possible value being 0.5 c0^2. As larger impact velocities are implied for typical orbital velocities of Earth-like planets, and c0^2 ≈ 2-30 MJ/kg for rock, the specific waste heat due to accretion corresponds to temperature rises of about 3-15 x 10^3 K for rock: melting accompanies accretion even with only 20-30 percent waste heat retained. Impact sterilization is similarly quantified in terms of waste heat relative to the energy required to vaporize H2O (impact velocity of 7-8 km/s, or 4.5-5 c0, is sufficient). Waste heat also clarifies the relationship between shock, multi-shock and ramp loading experiments, as well as the effect of (static) pre-compression. Breaking a shock into 2 steps significantly reduces the dissipated energy, with minimum waste heat achieved for two equal volume compressions in succession. Breaking a shock into as few as 4 steps reduces the waste heat to within a few percent of zero, documenting how multi-shock loading approaches an isentrope. Pre-compression, being less dissipative than an initial shock to the same strain, further reduces waste heat. Multi-shock (i.e., high strain-rate) loading of pre-compressed
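
    Under the stated assumptions, the approximation has a compact closed form (reconstructed here from standard shock relations, not quoted from the abstract): with a linear shock-velocity relation U_s = c_0 + s u_p and compression η = 1 − V/V_0, the Hugoniot pressure is

        P_H(\eta) = \frac{\rho_0\, c_0^2\, \eta}{(1 - s\eta)^2},

    and the waste heat is approximated as the area between the Rayleigh line (the chord from (V_0, 0) to (V, P_H(V))) and the Hugoniot:

        E_{\text{waste}}(V) \approx \tfrac{1}{2}\, P_H(V)\,(V_0 - V) - \int_V^{V_0} P_H(V')\, \mathrm{d}V'.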

  16. Compressive sensing for efficient health monitoring and effective damage detection of structures

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul

    2017-02-01

    Real-world Structural Health Monitoring (SHM) systems consist of sensors on the scale of hundreds, each generating extremely large amounts of data, often raising the issue of the cost associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSN). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS), a very recent development, introduces the means of accurately reproducing a signal from far fewer samples than Nyquist's theorem defines. CS achieves this by exploiting the sparsity of the signal. Through the reduced number of data samples, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process, as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results with two damage detection algorithms, showing that CS has not compromised the crucial information on structural damage during the compression process.

  17. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
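
    The scheme described here underlies the open-source zfp library; a minimal usage sketch of its Python bindings is shown below, assuming the zfpy package is installed and that its API matches recent releases (the rate of 8 bits per value is an arbitrary choice):

        # Fixed-rate compression of a 2D float array via zfp's Python bindings.
        import numpy as np
        import zfpy  # assumed available, e.g. via `pip install zfpy`

        data = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

        compressed = zfpy.compress_numpy(data, rate=8.0)   # ~8 bits per value
        restored = zfpy.decompress_numpy(compressed)

        print("bytes:", len(compressed), "max error:", np.max(np.abs(restored - data)))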

  18. Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mamta; Gupta, D. N., E-mail: dngupta@physics.du.ac.in

    We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity. The plasma equilibrium density is modified by the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with the plasma. First, within a one-dimensional analysis, the longitudinal self-compression mechanism is discussed. A three-dimensional (spatiotemporal) analysis of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can significantly improve the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting parameters such as the collision frequency, ion temperature, and laser intensity.

  19. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.

  20. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  1. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  2. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  3. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  4. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  5. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network, connecting different wireless nodes in a local area. WLANs suffer from important problems such as network load balancing, large energy consumption, and sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs, and increases the Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good foundation for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals, suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate the compressed version of the input signal. At the receiver side of each wireless node, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) by 15% at each wireless node.

  6. High Temperature Uniaxial Compression and Stress-Relaxation Behavior of India-Specific RAFM Steel

    NASA Astrophysics Data System (ADS)

    Shah, Naimish S.; Sunil, Saurav; Sarkar, Apu

    2018-07-01

    India-specific reduced activation ferritic martensitic steel (INRAFM), a modified 9Cr-1Mo grade, has been developed by India as its own structural material for fabrication of the Indian Test Blanket Module (TBM) to be installed in the International Thermonuclear Experimental Reactor (ITER). Extensive studies of the mechanical and physical properties of this material are currently under way to qualify it before it is put to use in the ITER. The high-temperature compression, stress-relaxation, and strain-rate change behavior of the INRAFM steel have been investigated. Optical and scanning electron microscopic characterizations were carried out to observe the microstructural changes that occur during uniaxial compressive deformation. Comparable true plastic stress values at 300 °C and 500 °C and a large drop in true plastic stress at 600 °C were observed during the compression tests. Stress-relaxation behavior was investigated at 500 °C, 550 °C, and 600 °C at a strain rate of 10^-3 s^-1. The creep properties of the steel at different temperatures were predicted from the stress-relaxation tests. The Norton stress exponent (n) was found to decrease with increasing temperature. Using the Bird-Mukherjee-Dorn relationship, the temperature-compensated normalized strain rate was plotted against stress. A stress exponent (n) value of 10.05 was obtained from the normalized plot. The strain-rate sensitivity (m) was found to increase with test temperature. Low plastic stability, with m ≈ 0.06, was observed at 600 °C. The activation volume (V*) values were obtained in the range of 100 to 300 b^3. By comparing the experimental values with the literature, the rate-controlling mechanisms in the thermally activated high-temperature region were found to be the nonconservative movement of jogged screw dislocations and the thermal breaking of attractive junctions.
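
    The reported m and V* follow standard definitions (given here as textbook relations, not expressions quoted from the paper):

        m = \left(\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right)_T = \frac{1}{n},
        \qquad
        V^{*} = \sqrt{3}\, k_B T \left(\frac{\partial \ln \dot{\varepsilon}}{\partial \sigma}\right)_T = \frac{\sqrt{3}\, k_B T}{m\,\sigma},

    where n is the Norton stress exponent, the factor √3 converts the uniaxial stress σ to an equivalent shear stress, and values are conventionally reported in units of b³, the cubed Burgers vector.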

  7. High Temperature Uniaxial Compression and Stress-Relaxation Behavior of India-Specific RAFM Steel

    NASA Astrophysics Data System (ADS)

    Shah, Naimish S.; Sunil, Saurav; Sarkar, Apu

    2018-05-01

    India-specific reduced activation ferritic martensitic steel (INRAFM), a modified 9Cr-1Mo grade, has been developed by India as its own structural material for fabrication of the Indian Test Blanket Module (TBM) to be installed in the International Thermonuclear Experimental Reactor (ITER). Extensive studies of the mechanical and physical properties of this material are currently under way to qualify it before it is put to use in the ITER. The high-temperature compression, stress-relaxation, and strain-rate change behavior of the INRAFM steel have been investigated. Optical and scanning electron microscopic characterizations were carried out to observe the microstructural changes that occur during uniaxial compressive deformation. Comparable true plastic stress values at 300 °C and 500 °C and a large drop in true plastic stress at 600 °C were observed during the compression tests. Stress-relaxation behavior was investigated at 500 °C, 550 °C, and 600 °C at a strain rate of 10^-3 s^-1. The creep properties of the steel at different temperatures were predicted from the stress-relaxation tests. The Norton stress exponent (n) was found to decrease with increasing temperature. Using the Bird-Mukherjee-Dorn relationship, the temperature-compensated normalized strain rate was plotted against stress. A stress exponent (n) value of 10.05 was obtained from the normalized plot. The strain-rate sensitivity (m) was found to increase with test temperature. Low plastic stability, with m ≈ 0.06, was observed at 600 °C. The activation volume (V*) values were obtained in the range of 100 to 300 b^3. By comparing the experimental values with the literature, the rate-controlling mechanisms in the thermally activated high-temperature region were found to be the nonconservative movement of jogged screw dislocations and the thermal breaking of attractive junctions.

  8. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking, and compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers are based on compressed Haar-like features, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR, and precision.
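
    The compression step itself is a random projection; the sketch below (dimensions, sparsity level, and the stand-in feature vector are all illustrative) applies a sparse random Gaussian measurement matrix of the kind named above:

        # Compress a high-dimensional feature with a sparse Gaussian projection.
        import numpy as np

        rng = np.random.default_rng(6)
        d, k = 10_000, 50                              # original / compressed dims

        feature = rng.normal(size=d)                   # stand-in for an NBD feature
        mask = rng.random((k, d)) < 0.01               # ~1% nonzeros: sparse matrix
        M = np.where(mask, rng.normal(size=(k, d)), 0.0) / np.sqrt(0.01 * k)

        compressed = M @ feature                       # k-dimensional compressed feature
        print("norm before/after:", np.linalg.norm(feature), np.linalg.norm(compressed))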

  9. Free-beam soliton self-compression in air

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  10. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGonegle, David, E-mail: d.mcgonegle1@physics.ox.ac.uk; Wark, Justin S.; Higginbotham, Andrew

    2015-08-14

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as may occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. The simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.

  11. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE PAGES

    McGonegle, David; Milathianaki, Despina; Remington, Bruce A.; ...

    2015-08-11

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as may occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. In conclusion, the simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.

  12. Identification of high shears and compressive discontinuities in the inner heliosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greco, A.; Perri, S.

    2014-04-01

    Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and LIM series correspond to regions of high shear stress and high magnetic field compressibility.
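
    The PVI statistic has a compact standard definition, the magnitude of the vector increment at lag τ normalized by its RMS over the interval; a minimal sketch (synthetic field values standing in for MESSENGER data, with an illustrative threshold) is:

        # Compute PVI for a synthetic 3-component magnetic field series.
        import numpy as np

        rng = np.random.default_rng(7)
        B = np.cumsum(rng.normal(size=(100_000, 3)), axis=0)   # synthetic B field

        tau = 10                                               # lag in samples
        dB = B[tau:] - B[:-tau]                                # vector increments
        mag = np.linalg.norm(dB, axis=1)
        pvi = mag / np.sqrt(np.mean(mag ** 2))                 # PVI series

        events = np.flatnonzero(pvi > 3.0)                     # intermittent bursts
        print("fraction above threshold:", events.size / pvi.size)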

  13. Experiences of High-Achieving High School Students Who Have Taken Multiple Concurrent Advanced Placement Courses

    ERIC Educational Resources Information Center

    Milburn, Kristine M.

    2011-01-01

    Problem: An increasing number of high-achieving American high school students are enrolling in multiple Advanced Placement (AP) courses. As a result, high schools face a growing need to understand the impact of taking multiple AP courses concurrently on the social-emotional lives of high-achieving students. Procedures: This phenomenological…

  14. Knock-Limited Performance of Triptane and Xylidines Blended with 28-R Aviation Fuel at High Compression Ratios and Maximum-Economy Spark Setting

    NASA Technical Reports Server (NTRS)

    Held, Louis F.; Pritchard, Ernest I.

    1946-01-01

    An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.

  15. Highly compressible 3D periodic graphene aerogel microlattices

    PubMed Central

    Zhu, Cheng; Han, T. Yong-Jin; Duoss, Eric B.; Golobic, Alexandra M.; Kuntz, Joshua D.; Spadaccini, Christopher M.; Worsley, Marcus A.

    2015-01-01

    Graphene is a two-dimensional material that offers a unique combination of low density, exceptional mechanical properties, large surface area and excellent electrical conductivity. Recent progress has produced bulk 3D assemblies of graphene, such as graphene aerogels, but they possess purely stochastic porous networks, which limit their performance compared with the potential of an engineered architecture. Here we report the fabrication of periodic graphene aerogel microlattices, possessing an engineered architecture via a 3D printing technique known as direct ink writing. The 3D printed graphene aerogels are lightweight, highly conductive and exhibit supercompressibility (up to 90% compressive strain). Moreover, the Young's moduli of the 3D printed graphene aerogels show an order of magnitude improvement over bulk graphene materials with comparable geometric density and possess large surface areas. Adapting the 3D printing technique to graphene aerogels realizes the possibility of fabricating a myriad of complex aerogel architectures for a broad range of applications. PMID:25902277

  16. Highly compressible 3D periodic graphene aerogel microlattices

    DOE PAGES

    Zhu, Cheng; Han, T. Yong-Jin; Duoss, Eric B.; ...

    2015-04-22

    Graphene is a two-dimensional material that offers a unique combination of low density, exceptional mechanical properties, large surface area and excellent electrical conductivity. Recent progress has produced bulk 3D assemblies of graphene, such as graphene aerogels, but they possess purely stochastic porous networks, which limit their performance compared with the potential of an engineered architecture. Here we report the fabrication of periodic graphene aerogel microlattices, possessing an engineered architecture via a 3D printing technique known as direct ink writing. The 3D printed graphene aerogels are lightweight, highly conductive and exhibit supercompressibility (up to 90% compressive strain). Moreover, the Young's moduli of the 3D printed graphene aerogels show an order of magnitude improvement over bulk graphene materials with comparable geometric density and possess large surface areas. Ultimately, adapting the 3D printing technique to graphene aerogels realizes the possibility of fabricating a myriad of complex aerogel architectures for a broad range of applications.

  17. Assessment and application of Reynolds stress closure models to high-speed compressible flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Sarkar, S.; Speziale, C. G.; Balakrishnan, L.; Abid, R.; Anderson, E. C.

    1990-01-01

    The paper presents results from the development of higher-order closure models for the phenomenological modeling of high-speed compressible flows. The work presented includes the introduction of an improved pressure-strain correlation model applicable in both the low- and high-speed regimes, as well as modifications to the isotropic dissipation rate to account for dilatational effects. Finally, the question of stiffness commonly associated with the solution of two-equation and Reynolds stress transport equations in wall-bounded flows is examined and ways of relaxing these restrictions are discussed.

  18. Scientific Temper among Academically High and Low Achieving Adolescent Girls

    ERIC Educational Resources Information Center

    Kour, Sunmeet

    2015-01-01

    The present study was undertaken to compare the scientific temper of high and low achieving adolescent girl students. Random sampling technique was used to draw the sample from various high schools of District Srinagar. The sample for the present study consisted of 120 school going adolescent girls (60 high and 60 low achievers). Data was…

  19. Stem compression reversibly reduces phloem transport in Pinus sylvestris trees.

    PubMed

    Henriksson, Nils; Tarvainen, Lasse; Lim, Hyungwoo; Tor-Ngern, Pantana; Palmroth, Sari; Oren, Ram; Marshall, John; Näsholm, Torgny

    2015-10-01

    Manipulating tree belowground carbon (C) transport enables investigation of the ecological and physiological roles of tree roots and their associated mycorrhizal fungi, as well as a range of other soil organisms and processes. Girdling remains the most reliable method for manipulating this flux and it has been used in numerous studies. However, girdling is destructive and irreversible. Belowground C transport is mediated by phloem tissue, pressurized through the high osmotic potential resulting from its high content of soluble sugars. We speculated that phloem transport may be reversibly blocked through the application of an external pressure on tree stems. Thus, we here introduce a technique based on compression of the phloem, which interrupts belowground flow of assimilates, but allows trees to recover when the external pressure is removed. Metal clamps were wrapped around the stems and tightened to achieve a pressure theoretically sufficient to collapse the phloem tissue, thereby aiming to block transport. The compression's performance was tested in two field experiments: a (13)C canopy labelling study conducted on small Scots pine (Pinus sylvestris L.) trees [2-3 m tall, 3-7 cm diameter at breast height (DBH)] and a larger study involving mature pines (∼15 m tall, 15-25 cm DBH) where stem respiration, phloem and root carbohydrate contents, and soil CO2 efflux were measured. The compression's effectiveness was demonstrated by the successful blockage of (13)C transport. Stem compression doubled stem respiration above treatment, reduced soil CO2 efflux by 34% and reduced phloem sucrose content by 50% compared with control trees. Stem respiration and soil CO2 efflux returned to normal within 3 weeks after pressure release, and (13)C labelling revealed recovery of phloem function the following year. Thus, we show that belowground phloem C transport can be reduced by compression, and we also demonstrate that trees recover after treatment, resuming C

  20. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
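
    As a rough illustration of the two quality metrics this record compares, the Python sketch below computes PSNR with NumPy and the mean structural similarity via scikit-image's structural_similarity. The random arrays merely stand in for an original and a lossily compressed AIA frame; none of this is the authors' pipeline.

```python
# Hedged sketch: PSNR and mean SSIM between a reference image and its
# lossily compressed version. Arrays stand in for solar EUV frames.
import numpy as np
from skimage.metrics import structural_similarity  # scikit-image

def psnr(original, compressed):
    """Peak signal-to-noise ratio in dB (peak taken from the reference)."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf
    peak = original.max()  # assumption: dynamic range set by the reference
    return 10.0 * np.log10(peak ** 2 / mse)

original = np.random.rand(256, 256)                     # stand-in EUV frame
compressed = original + 0.01 * np.random.randn(256, 256)  # stand-in for lossy artifacts
print(f"PSNR  = {psnr(original, compressed):.2f} dB")
mssim = structural_similarity(original, compressed,
                              data_range=original.max() - original.min())
print(f"MSSIM = {mssim:.4f}")
```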

  1. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study was to assess the difference in chest compression accuracy between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same manikins for practice and evaluation. The smartphone group used applications running on the Android and iOS operating systems (OS) of 2 smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012, and data were analysed with the SPSS WIN 12.0 program. Compression depth (mm) was closer to the proper value (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions (%) was higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). Awareness of chest compression accuracy was also higher (p < 0.001) in the traditional group (3.83 points) than in the smartphone group (2.32 points). In an additional questionnaire administered only to the smartphone group, the main reasons given for rating the modified method negatively were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  2. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  3. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development
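
    The sketch below illustrates the DPCM front end described above in greatly simplified form: a causal previous-pixel predictor and a small non-uniform quantizer whose levels are illustrative, not the hardware's. The resulting indices would feed the multilevel Huffman coder; intra-field processing is approximated here by encoding each scan line independently.

```python
# Hedged sketch of 1-D DPCM with a fixed previous-pixel predictor and a
# non-uniform quantizer (fine levels near zero, coarse for rare large errors).
import numpy as np

LEVELS = np.array([-64, -24, -8, -2, 0, 2, 8, 24, 64], dtype=np.int32)  # illustrative

def dpcm_encode(line):
    """Return quantizer indices for one scan line (causal, intra-field)."""
    indices, prediction = [], 0
    for sample in line.astype(np.int32):
        error = sample - prediction                    # prediction residual
        idx = int(np.argmin(np.abs(LEVELS - error)))   # nearest quantizer level
        indices.append(idx)
        # Track the decoder's reconstruction so both sides stay in sync.
        prediction = int(np.clip(prediction + LEVELS[idx], 0, 255))
    return indices

def dpcm_decode(indices):
    samples, prediction = [], 0
    for idx in indices:
        prediction = int(np.clip(prediction + LEVELS[idx], 0, 255))
        samples.append(prediction)
    return np.array(samples, dtype=np.uint8)

line = (128 + 60 * np.sin(np.linspace(0, 6, 720))).astype(np.uint8)  # toy scan line
recon = dpcm_decode(dpcm_encode(line))
print("max abs error:", int(np.max(np.abs(recon.astype(int) - line.astype(int)))))
```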

  4. Compression and compaction properties of plasticised high molecular weight hydroxypropylmethylcellulose (HPMC) as a hydrophilic matrix carrier.

    PubMed

    Hardy, I J; Cook, W G; Melia, C D

    2006-03-27

    The compression and compaction properties of plasticised high molecular weight USP2208 HPMC were investigated with the aim of improving tablet formation in HPMC matrices. Experiments were conducted on binary polymer-plasticiser mixtures containing 17 wt.% plasticiser, and on a model hydrophilic matrix formulation. A selection of common plasticisers, propylene glycol (PG), glycerol (GLY), dibutyl sebacate (DBS) and triacetin (TRI), were chosen to provide a range of plasticisation efficiencies. T(g) values of binary mixtures determined by Dynamic Mechanical Thermal Analysis (DMTA) were in the rank order PG>GLY>DBS>TRI>unplasticised HPMC. Mean yield pressure, strain rate sensitivity (SRS) and plastic compaction energy were measured during the compression process, and matrix properties were monitored by tensile strength and axial expansion post-compression. Compression of HPMC:PG binary mixtures resulted in a marked reduction in mean yield pressure and a significant increase in SRS, suggesting a classical plasticisation of HPMC analogous to that produced by water. The effect of PG was also reflected in matrix properties. At compression pressures below 70 MPa, compacts had greater tensile strength than those from native polymer, and over the range 35-70 MPa, lower plastic compaction values showed that less energy was required to produce the compacts. Axial expansion was also reduced. Above 70 MPa tensile strength was limited to 3 MPa. These results suggest a useful improvement of HPMC compaction and matrix properties by PG plasticisation, with lowering of T(g) resulting in improved deformation and internal bonding. These effects were also detectable in the model formulation containing a minimal polymer content for an HPMC matrix. Other plasticisers were largely ineffective: matrix strength was poor and axial expansion high. The hydrophobic plasticisers (DBS, TRI) reduced yield pressure substantially, but were poor plasticisers and showed compaction mechanisms that could

  5. X-ray Computed Tomography Imaging of the Microstructure of Sand Particles Subjected to High Pressure One-Dimensional Compression

    PubMed Central

    al Mahbub, Asheque; Haque, Asadul

    2016-01-01

    This paper presents the results of X-ray CT imaging of the microstructure of sand particles subjected to high pressure one-dimensional compression leading to particle crushing. A high resolution X-ray CT machine capable of in situ imaging was employed to capture images of the whole volume of a sand sample subjected to compressive stresses up to 79.3 MPa. Images of the whole sample obtained at different load stages were analysed using a commercial image processing software (Avizo) to reveal various microstructural properties, such as pore and particle volume distributions, spatial distribution of void ratios, relative breakage, and anisotropy of particles. PMID:28774011

  6. Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation by parts condition leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize spurious high frequency oscillation producing nonlinear instability associated with pure central schemes, especially for long time integration simulation such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1 where a very accurate turbulence data base exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
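
    The summation-by-parts (SBP) property invoked above can be made concrete with the standard second-order operator; the sketch below is a textbook construction under our own assumptions, not the paper's scheme. It builds D = H^{-1} Q and verifies Q + Q^T = diag(-1, 0, ..., 0, 1), which is exactly what permits the generalized energy estimate for the combined interior and boundary scheme.

```python
# Hedged sketch: a second-order summation-by-parts first-derivative operator.
import numpy as np

def sbp_first_derivative(n, h):
    """Return D = H^-1 Q on n points with spacing h (2nd order in the interior)."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2          # boundary-modified norm matrix
    Q = np.zeros((n, n))
    for i in range(n - 1):               # skew-symmetric interior stencil
        Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5       # boundary closure
    return np.linalg.inv(H) @ Q, H, Q

n, h = 64, 0.1
D, H, Q = sbp_first_derivative(n, h)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1, 1
assert np.allclose(Q + Q.T, B)           # the summation-by-parts property
x = np.linspace(0, (n - 1) * h, n)
# Second-order accurate in the interior, first-order at the two boundaries.
print("max derivative error:", np.max(np.abs(D @ np.sin(x) - np.cos(x))))
```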

  7. Structure of shock compressed model basaltic glass: Insights from O K-edge X-ray Raman scattering and high-resolution 27Al NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, Sung Keun; Park, Sun Young; Kim, Hyo-Im; Tschauner, Oliver; Asimow, Paul; Bai, Ligang; Xiao, Yuming; Chow, Paul

    2012-03-01

    The detailed atomic structures of shock compressed basaltic glasses are not well understood. Here, we explore the structures of shock compressed silicate glass with a diopside-anorthite eutectic composition (Di64An36), a common Fe-free model basaltic composition, using oxygen K-edge X-ray Raman scattering and high-resolution 27Al solid-state NMR spectroscopy and report previously unknown details of shock-induced changes in the atomic configurations. A topologically driven densification of the Di64An36 glass is indicated by the increase in oxygen K-edge energy for the glass upon shock compression. The first experimental evidence of the increase in the fraction of highly coordinated Al in shock compressed glass is found in the 27Al NMR spectra. This unambiguous evidence of shock-induced changes in Al coordination environments provides atomistic insights into shock compression in basaltic glasses and allows us to microscopically constrain the magnitude of impact events or relevant processes involving natural basalts on Earth and planetary surfaces.

  8. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning.

    PubMed

    Zhou, Jun; Wang, Chao

    2017-08-06

    Intelligent sensing is drastically changing our everyday life, including healthcare, by biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce the wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good reconstructed signal quality. A near-threshold design technique is adopted to further reduce the power consumption at the circuit level. In addition, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals, to address the variation of signal characteristics from person to person or from channel to channel and meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics.
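
    A minimal sketch of the idea, not the paper's circuit: keep only the samples where the signal "turns" by more than a threshold angle within a sliding window, then rebuild the waveform by linear interpolation. The window size, threshold, and synthetic ECG-like signal below are all illustrative assumptions; adaptive tuning would adjust angle_thresh_deg from the measured PRD.

```python
# Hedged sketch: window-based turning-angle feature extraction for compression.
import numpy as np

def turning_points(x, window=4, angle_thresh_deg=10.0):
    """Indices where the local direction change exceeds the angle threshold."""
    keep = [0]
    for i in range(window, len(x) - window, window):
        v1 = np.array([window, x[i] - x[i - window]])   # incoming segment
        v2 = np.array([window, x[i + window] - x[i]])   # outgoing segment
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if np.degrees(np.arccos(np.clip(cosang, -1, 1))) > angle_thresh_deg:
            keep.append(i)                              # feature point
    keep.append(len(x) - 1)
    return np.array(keep)

t = np.linspace(0, 1, 360)
sig = np.sin(2 * np.pi * 3 * t) + 0.4 * np.exp(-((t - 0.5) / 0.01) ** 2)  # toy ECG-like
idx = turning_points(sig)
recon = np.interp(np.arange(len(sig)), idx, sig[idx])   # receiver-side rebuild
cr = 100 * (1 - len(idx) / len(sig))                    # CR as a percentage
prd = 100 * np.linalg.norm(sig - recon) / np.linalg.norm(sig)
print(f"CR = {cr:.1f}%, PRD = {prd:.2f}%")
```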

  9. Achieving high performance on the Intel Paragon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, D.S.; Maccabe, B.; Riesen, R.

    1993-11-01

    When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial to both achieving high performance and allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus on a 44,000 by 44,000 matrix which had required over 10 hours on the previous machine, they completed in less than 1/2 hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.

  10. Control of traumatic wound bleeding by compression with a compact elastic adhesive dressing.

    PubMed

    Naimer, Sody Abby; Tanami, Menachem; Malichi, Avishai; Moryosef, David

    2006-07-01

    Compression dressing has been assumed effective, but never formally examined in the field. A prospective interventional trial examined the efficacy and feasibility of an elastic adhesive dressing compression device at the scene of the traumatic incident. The primary variable examined was the bleeding rate from wounds, compared before and after dressing. Sixty-two consecutive bleeding wounds resulting from penetrating trauma were treated. Bleeding intensity was profuse in 58%, moderate in 23%, and mild in 19%. Full control of bleeding was achieved in 87%, a significantly diminished rate in 11%, and, in 1 case, the technique had no influence on the bleeding rate. The Wilcoxon test comparing bleeding rates before and after the procedure showed a significant difference (Z = -6.9, p < 0.01). No significant complications were observed. Caregivers were highly satisfied in 90% of cases. Elastic adhesive dressing was observed to be an effective and reliable technique, demonstrating a high rate of success without complications.

  11. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, matthew; Aranki, Nazeeh

    2010-01-01

    prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.

  12. Laser shock compression experiments on precompressed water in the "SG-II" laser facility

    NASA Astrophysics Data System (ADS)

    Shu, Hua; Huang, Xiuguang; Ye, Junjian; Fu, Sizu

    2017-06-01

    Laser shock compression experiments on precompressed samples offer the possibility to obtain new Hugoniot data over a significantly broader range of the density-temperature phase space than was previously achievable. This technique was developed at the "SG-II" laser facility. Hugoniot data were obtained for water in the 300 GPa pressure range by laser-driven shock compression of samples statically precompressed in diamond-anvil cells.

  13. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models are compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.

  14. The Meaning High-Achieving African-American Males in an Urban High School Ascribe to Mathematics

    ERIC Educational Resources Information Center

    Thompson, LaTasha; Davis, Julius

    2013-01-01

    Many researchers, educators, administrators, policymakers and members of the general public doubt the prevalence of high-achieving African-American males in urban high schools capable of excelling in mathematics. As part of a larger study, the current study explored the educational experiences of four high-achieving African-American males…

  15. Chloride Permeability of Damaged High-Performance Fiber-Reinforced Cement Composite by Repeated Compressive Loads.

    PubMed

    Lee, Byung Jae; Hyun, Jung Hwan; Kim, Yun Yong; Shin, Kyung Joon

    2014-08-11

    The development of cracking in concrete structures leads to significant permeability and to durability problems as a result. Approaches to controlling crack development and crack width in concrete structures have been widely debated. Recently, it was recognized that a high-performance fiber-reinforced cement composite (HPFRCC) provides a possible solution to this inherent problem of cracking by smearing one or several dominant cracks into many distributed microcracks under tensile loading conditions. However, the chloride permeability of HPFRCC under compressive loading conditions is not yet fully understood. Therefore, the goal of the present study is to explore the chloride diffusion characteristics of HPFRCC damaged by compressive loads. The chloride diffusivity of HPFRCC is measured after being subjected to various repeated loads. The results show that the residual axial strain, lateral strain and specific crack area of HPFRCC specimens increase with an increase in the damage induced by repeated loads. However, the chloride diffusion coefficient increases only up to 1.5-times, whereas the specific crack area increases up to 3-times with an increase in damage. Although HPFRCC shows smeared distributed cracks in tensile loads, a significant reduction in the diffusion coefficient of HPFRCC is not obtained compared to plain concrete when the cyclic compressive load is applied below 85% of the strength.

  16. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
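
    A conceptual sketch of least-median-of-squares outlier rejection for block-matching displacement fields follows; it is not the authors' full LFC pipeline (which combines LMS fitting with forward-search outlier detection). The constant local model, trial count, and inlier cutoff are illustrative assumptions.

```python
# Hedged sketch: least median of squares (LMS) filtering of point matches.
import numpy as np

def lms_filter(displacements, trials=200, inlier_factor=2.5, seed=None):
    """displacements: (N, 3) voxel displacement vectors from a grid search.
    Randomly sample candidate local models, keep the one with the least
    median of squared residuals, then drop matches far from it."""
    rng = np.random.default_rng(seed)
    best_model, best_med = None, np.inf
    for _ in range(trials):
        candidate = displacements[rng.integers(len(displacements))]
        residuals = np.sum((displacements - candidate) ** 2, axis=1)
        med = np.median(residuals)
        if med < best_med:
            best_med, best_model = med, candidate
    residuals = np.sum((displacements - best_model) ** 2, axis=1)
    scale = 1.4826 * np.sqrt(best_med) + 1e-12       # robust scale estimate
    return np.sqrt(residuals) <= inlier_factor * scale  # inlier mask

d = np.vstack([np.random.randn(95, 3) * 0.2 + [1, 0, 0],  # coherent local motion
               np.random.randn(5, 3) * 5])                # spurious grid-search matches
mask = lms_filter(d)
print(f"kept {mask.sum()} of {len(d)} matches")
```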

  17. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has lower complexity than the OMP algorithm by 75% but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
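
    For reference, a compact orthogonal matching pursuit (OMP) implementation, the baseline the two-stage processor improves on, is sketched below. The measurement matrix, dimensions, and sparsity level are illustrative assumptions, not the radar's parameters.

```python
# Hedged sketch: orthogonal matching pursuit for sparse signal recovery.
import numpy as np

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse x from y = A x (A is m x n with m << n)."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0                  # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s         # project out the chosen atoms
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                               # illustrative sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```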

  18. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    PubMed Central

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun

    2018-01-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has lower complexity than the OMP algorithm by 75% but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170

  19. A comparative study of several compressibility corrections to turbulence models applied to high-speed shear layers

    NASA Technical Reports Server (NTRS)

    Viegas, John R.; Rubesin, Morris W.

    1991-01-01

    Several recently published compressibility corrections to the standard k-epsilon turbulence model are used with the Navier-Stokes equations to compute the mixing region of a large variety of high-speed flows. These corrections, specifically developed to address the inability of higher-order turbulence models to accurately predict the spread rate of compressible free shear flows, are applied to two-stream flows of the same gas mixing under a large variety of free-stream conditions. Results are presented for two types of flows: unconfined streams with either (1) matched total temperatures and static pressures, or (2) matched static temperatures and pressures, and a confined stream.

  20. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
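
    A minimal sketch of generic SVD-based coil compression follows; the paper's contribution goes further, compressing separately at each spatial location along the fully sampled direction and aligning the virtual coils, which is not shown here. Channel counts and data are illustrative stand-ins.

```python
# Hedged sketch: project many physical channels onto a few virtual coils
# spanned by the dominant singular vectors of the coil-by-sample matrix.
import numpy as np

def compress_coils(kspace, n_virtual):
    """kspace: (n_coils, n_samples) complex -> (n_virtual, n_samples)."""
    U, s, Vh = np.linalg.svd(kspace, full_matrices=False)
    A = U[:, :n_virtual].conj().T      # compression matrix (n_virtual x n_coils)
    return A @ kspace, A

rng = np.random.default_rng(3)
true_rank = 4                          # underlying signal subspace (assumed)
data = rng.standard_normal((true_rank, 2048)) + 1j * rng.standard_normal((true_rank, 2048))
coils = rng.standard_normal((32, true_rank)) @ data   # 32 correlated channels
compressed, A = compress_coils(coils, n_virtual=6)
energy = np.linalg.norm(compressed) ** 2 / np.linalg.norm(coils) ** 2
print(f"6 virtual coils retain {100 * energy:.2f}% of the signal energy")
```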

  1. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of sequence generation and analysis. In particular, the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed the limits of available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms.
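
    This is not the SeqCompress algorithm itself (which adds a statistical model and arithmetic coding), but the 2-bits-per-base packer below shows the baseline that makes specialized DNA compressors attractive: a 4-letter alphabet needs at most 2 bits per symbol before any modelling gains.

```python
# Hedged sketch: pack an ACGT string at 2 bits/base, 4 bases per byte.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes (sequence length kept separately)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk, byte = seq[i:i + 4], 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))      # left-align a short final chunk
        out.append(byte)
    return bytes(out)

def unpack(data, length):
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

seq = "ACGTACGTGGCA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(f"{len(seq)} bases -> {len(packed)} bytes")
```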

  2. Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.

    PubMed

    Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran

    2017-05-25

    A well-known diagnostic imaging modality, termed ultrasound tomography, was quickly developed for the detection of very small tumors whose sizes are smaller than the wavelength of the incident pressure wave without ionizing radiation, compared to the current gold-standard X-ray mammography. Based on inverse scattering technique, ultrasound tomography uses some material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM) based on first-order Born approximation is an efficient diffraction tomography approach. One of the challenges for a high quality reconstruction is to obtain many measurements from the number of transmitters and receivers. Given the fact that biomedical images are often sparse, the compressed sensing (CS) technique could be therefore effectively applied to ultrasound tomography by reducing the number of transmitters and receivers, while maintaining a high quality of image reconstruction. There are currently several work on CS that dispose randomly distributed locations for the measurement system. However, this random configuration is relatively difficult to implement in practice. Instead of it, we should adopt a methodology that helps determine the locations of measurement devices in a deterministic way. For this, we develop the novel DCS-DBIM algorithm that is highly applicable in practice. Inspired of the exploitation of the deterministic compressed sensing technique (DCS) introduced by the authors few years ago with the image reconstruction process implemented using l 1 regularization. Simulation results of the proposed approach have demonstrated its high performance, with the normalized error approximately 90% reduced, compared to the conventional approach, this new approach can save half of number of measurements and only uses two iterations. Universal image quality index is also evaluated in order to prove the efficiency of the proposed approach. Numerical simulation results

  3. Low complexity lossless compression of underwater sound recordings.

    PubMed

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
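
    The published compressor's details differ, but the standard low-complexity recipe behind such tools can be sketched as a fixed first-order predictor followed by Rice/Golomb coding of the residuals; the samples and Rice parameter below are illustrative.

```python
# Hedged sketch: linear prediction plus Rice coding for lossless audio.
def zigzag(n):
    """Map signed residuals to non-negative integers: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(values, k):
    """Rice-code non-negative integers with parameter k into a bit string."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))  # unary quotient + k-bit remainder
    return "".join(bits)

samples = [0, 3, 8, 14, 18, 20, 19, 15, 9, 2, -5, -11]   # toy 16-bit PCM values
residuals = [samples[0]] + [samples[i] - samples[i - 1]  # first-order predictor
             for i in range(1, len(samples))]
bitstream = rice_encode([zigzag(r) for r in residuals], k=2)
raw_bits = 16 * len(samples)
print(f"compression factor ~ {raw_bits / len(bitstream):.1f}")
```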

  4. Compression fractures detection on CT

    NASA Astrophysics Data System (ADS)

    Bar, Amir; Wolf, Lior; Bergman Amitai, Orna; Toledano, Eyal; Elnekave, Eldad

    2017-03-01

    The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. Each patch is then given a binary classification by a Convolutional Neural Network (CNN). Finally, a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.

  5. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
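
    A 1-D toy of the same theme, sub-sampled measurements plus sparsity-constrained variational recovery, is sketched below. ISTA with an l1 penalty is used here for brevity; the study itself uses total variation regularization with Bregman iterations on 3D PAT data, and all sizes are illustrative.

```python
# Hedged sketch: ISTA (proximal gradient) recovery of a sparse signal
# from 4x sub-sampled random measurements.
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))            # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
n, m, k = 512, 128, 10                                # 10-sparse signal, 4x sub-sampling
A = rng.standard_normal((m, n)) / np.sqrt(m)          # stand-in sub-sampling operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```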

  6. Divided-pulse nonlinear amplification and simultaneous compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Qiang; Zhang, Qingshan; Sun, Tingting

    2015-03-09

    We report on a fiber laser system delivering 122 fs pulse duration and 600 mW average power at 1560 nm by the interplay between divided pulse amplification and nonlinear pulse compression. A small-core double-clad erbium-doped fiber with anomalous dispersion carries out the pulse amplification and simultaneously compresses the laser pulses such that a separate compressor is no longer necessary. A numeric simulation reveals the existence of an optimum fiber length for producing transform-limited pulses. Furthermore, frequency doubling to 780 nm with 240 mW average power and 98 fs pulse duration is achieved by using a periodically poled lithium niobate crystal at room temperature.

  7. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, a current research hotspot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
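
    As a toy illustration of two of the fusion rules named above (not the full NSCT/CS pipeline), the sketch below applies absolute-maximum selection to high-frequency coefficients and energy weighting to low-frequency ones; the random arrays stand in for subband coefficients.

```python
# Hedged sketch: coefficient-level fusion rules for two source subbands.
import numpy as np

def fuse_abs_max(c_ir, c_vis):
    """Keep, per coefficient, whichever source has the larger magnitude."""
    return np.where(np.abs(c_ir) >= np.abs(c_vis), c_ir, c_vis)

def fuse_energy_weighted(c_ir, c_vis):
    """Blend low-frequency coefficients by regional energy weights."""
    e_ir, e_vis = np.sum(c_ir ** 2), np.sum(c_vis ** 2)
    w = e_ir / (e_ir + e_vis + 1e-12)
    return w * c_ir + (1 - w) * c_vis

rng = np.random.default_rng(0)
hf_ir, hf_vis = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
fused_hf = fuse_abs_max(hf_ir, hf_vis)
fused_lf = fuse_energy_weighted(hf_ir, hf_vis)
print(fused_hf.shape, fused_lf.shape)
```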

  8. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring

    NASA Astrophysics Data System (ADS)

    Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico

    2018-04-01

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
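
    The core property circulant matrices offer can be shown in a few lines (this is our own NumPy illustration, not the paper's GPU code): a circulant matrix-vector product collapses to FFTs, so storage drops from O(n^2) to O(n) and the product runs in O(n log n).

```python
# Hedged sketch: FFT-based matrix-vector product for a circulant matrix.
import numpy as np
from scipy.linalg import circulant

def circulant_matvec(first_column, x):
    """Compute C @ x where C is circulant with the given first column,
    via circular convolution in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(first_column) * np.fft.fft(x)))

rng = np.random.default_rng(1)
n = 4096
c = rng.standard_normal(n)   # implicitly defines the whole n x n matrix
x = rng.standard_normal(n)
y = circulant_matvec(c, x)   # never materializes the n x n matrix

# Verify against the explicit matrix on a small instance.
small = 64
assert np.allclose(circulant_matvec(c[:small], x[:small]),
                   circulant(c[:small]) @ x[:small])
print("FFT matvec matches the explicit circulant product")
```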

  9. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN, especially since SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on the upper floors of a structure are excited at the natural frequency, resulting in induced shaking in a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm applies the discrete Fourier transform to obtain the frequency-domain representation and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, the data are restored by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration under conditions where the acceleration was compressed to 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
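
    The scheme as described reduces to a few lines of NumPy: FFT the acceleration record, keep only the low-frequency bins, and invert the FFT at the receiver. The sketch below follows the abstract's 1/32 keep-ratio, but the sampling rate, record length, and synthetic ~1 Hz sway signal are our own illustrative assumptions.

```python
# Hedged sketch: DFT-domain band-limited compression and restoration.
import numpy as np

def compress(signal, keep_fraction=1 / 32):
    spectrum = np.fft.rfft(signal)
    n_keep = max(1, int(len(spectrum) * keep_fraction))
    return spectrum[:n_keep], len(signal)     # transmit only the kept bins

def restore(kept_bins, n_samples):
    spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
    spectrum[:len(kept_bins)] = kept_bins
    return np.fft.irfft(spectrum, n=n_samples)

fs = 100.0                                    # assumed 100 Hz sampling
t = np.arange(2048) / fs
accel = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(len(t))  # ~1 Hz sway + noise
bins, n = compress(accel)
recon = restore(bins, n)
err = np.mean(np.abs(recon - accel)) / np.max(np.abs(accel))  # one average-error measure
print(f"kept {len(bins)} of {n // 2 + 1} bins, normalized average error = {err:.3f}")
```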

  10. High-Achieving High School Students and Not so High-Achieving College Students: A Look at Lack of Self-Control, Academic Ability, and Performance in College

    ERIC Educational Resources Information Center

    Honken, Nora B.; Ralston, Patricia A. S.

    2013-01-01

    This study investigated the relationship among lack of self-control, academic ability, and academic performance for a cohort of freshman engineering students who were, with a few exceptions, extremely high achievers in high school. Structural equation modeling analysis led to the conclusion that lack of self-control in high school, as measured by…

  11. Compression of Intense Laser Pulses in Plasma

    NASA Astrophysics Data System (ADS)

    Fisch, Nathaniel J.; Malkin, Vladimir M.; Shvets, Gennady

    2001-10-01

    A counterpropagating short pulse can absorb the energy of a long laser pulse in plasma, resulting in pulse compression. For processing very high power and very high total energy, plasma is an ideal medium. Thus, in plasma one can contemplate the compression of micron light pulses to exawatts per square cm or fluences to kilojoules per square cm, prior to the vacuum focus. Two nonlinear plasma effects have recently been proposed to accomplish compression at very high power in counterpropagating geometry: One is compression by means of Compton or so-called superradiant scattering, where the nonlinear interaction of the plasma electrons with the lasers dominates the plasma restoring motion due to charge imbalance [G. Shvets, N. J. Fisch, A. Pukhov, and J. Meyer-ter-Vehn, Phys. Rev. Lett. v. 81, 4879 (1998)]. The second is fast compression by means of stimulated backward Raman scattering (SBRS), where the amplification process outruns deleterious processes associated with the ultraintense pulse [V. M. Malkin, G. Shvets, N. J. Fisch, Phys. Rev. Lett., v. 82, 4448 (1999)]. In each of these regimes, in a realistic plasma, there are technological challenges that must be met and competing effects that must be kept smaller than the desired interaction.

  12. Cardiovascular causes of airway compression.

    PubMed

    Kussman, Barry D; Geva, Tal; McGowan, Francis X

    2004-01-01

    Compression of the paediatric airway is a relatively common and often unrecognized complication of congenital cardiac and aortic arch anomalies. Airway obstruction may be the result of an anomalous relationship between the tracheobronchial tree and vascular structures (producing a vascular ring) or the result of extrinsic compression caused by dilated pulmonary arteries, left atrial enlargement, massive cardiomegaly, or intraluminal bronchial obstruction. A high index of suspicion of mechanical airway compression should be maintained in infants and children with recurrent respiratory difficulties, stridor, wheezing, dysphagia, or apnoea unexplained by other causes. Prompt diagnosis is required to avoid death and minimize airway damage. In addition to plain chest radiography and echocardiography, diagnostic investigations may consist of barium oesophagography, magnetic resonance imaging (MRI), computed tomography, cardiac catheterization and bronchoscopy. The most important recent advance is MRI, which can produce high quality three-dimensional reconstruction of all anatomic elements allowing for precise anatomic delineation and improved surgical planning. Anaesthetic technique will depend on the type of vascular ring and the presence of any congenital heart disease or intrinsic lesions of the tracheobronchial tree. Vascular rings may be repaired through a conventional posterolateral thoracotomy, or utilizing video-assisted thoracoscopic surgery (VATS) or robotic endoscopic surgery. Persistent airway obstruction following surgical repair may be due to residual compression, secondary airway wall instability (malacia), or intrinsic lesions of the airway. Simultaneous repair of cardiac defects and vascular tracheobronchial compression carries a higher risk of morbidity and mortality.

  13. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  14. Compression of facsimile graphics for transmission over digital mobile satellite circuits

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Frank L.

    A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.

  15. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation of indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
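
    A greatly simplified toy version of the idea (our own illustration, not the patented method): because quantized indices carry one unit of uncertainty, nudging an index to the adjacent value lets one auxiliary bit ride along, here encoded in the index parity, without visibly changing the host data.

```python
# Hedged sketch: embed auxiliary bits by moving indices to adjacent values.
import numpy as np

def embed(indices, bits):
    """Force the parity of the first len(bits) indices to match the bits."""
    out = indices.copy()
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1 if out[i] % 2 == 0 else -1   # shift to an adjacent value
    return out

def extract(indices, n_bits):
    """Substantially the reverse process: read the parities back out."""
    return [int(indices[i] % 2) for i in range(n_bits)]

rng = np.random.default_rng(7)
quantized = rng.integers(-40, 40, size=64)   # stand-in for transform indices
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(quantized, payload)
assert extract(stego, len(payload)) == payload
print("max index perturbation:", int(np.max(np.abs(stego - quantized))))
```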

  16. High Achievement in Mathematics Education in India: A Report from Mumbai

    ERIC Educational Resources Information Center

    Raman, Manya

    2010-01-01

    This paper reports a study aimed at characterizing the conditions that lead to high achievement in mathematics in India. The study involved eight schools in the greater Mumbai region. The main result of the study is that the notion of high achievement itself is problematic, as reflected in the reports about mathematics achievement within and…

  17. High Involvement Mothers of High Achieving Children: Potential Theoretical Explanations

    ERIC Educational Resources Information Center

    Hunsaker, Scott L.

    2013-01-01

    In American society, parents who have high aspirations for the achievements of their children are often viewed by others in a negative light. Various pejoratives such as "pushy parent," "helicopter parent," "stage mother," and "soccer mom" are used in the common vernacular to describe these parents. Multiple…

  18. Compressibility characteristics of Sabak Bernam Marine Clay

    NASA Astrophysics Data System (ADS)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

    This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations from other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of the west coast of Malaysia. This type of marine clay was found on the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement which jeopardise the safety of road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests of physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible, and has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the field.
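
    The practical consequence of high compressibility is large primary consolidation settlement. Under the standard 1-D consolidation model for a normally consolidated clay, settlement follows s = [Cc / (1 + e0)] * H * log10((sigma'0 + delta_sigma) / sigma'0). A worked sketch with hypothetical parameter values (not the paper's measured data) shows the scale involved.

```python
import math

def consolidation_settlement(Cc, e0, H, sigma0, dsigma):
    """Primary consolidation settlement (m) of a normally consolidated
    clay layer from the 1-D consolidation model:
        s = Cc / (1 + e0) * H * log10((sigma0 + dsigma) / sigma0)
    Cc: compression index, e0: initial void ratio, H: layer thickness (m),
    sigma0: initial effective stress (kPa), dsigma: stress increase (kPa)."""
    return Cc / (1.0 + e0) * H * math.log10((sigma0 + dsigma) / sigma0)

# Hypothetical values typical of a soft, highly plastic marine clay.
s = consolidation_settlement(Cc=0.8, e0=2.0, H=5.0, sigma0=50.0, dsigma=50.0)
print(f"Estimated settlement: {s:.2f} m")   # ~0.40 m
```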

  19. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion energy (IFE): microseconds and centimeters versus nanoseconds and sub-millimeter scales. MPC also greatly reduces the required confinement time relative to magnetic fusion energy (MFE): microseconds versus minutes. Proof of principle can be demonstrated or refuted using high-current, pulsed-power-driven compression of magnetized plasmas via magnetic-pressure-driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy-ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets and much less ambitious compression ratios. A less expensive, repetitive pulsed-power driver, if feasible, would require engineering development of transient, rapidly replaceable transmission lines such as those envisioned by Sandia National Laboratories. Supported by DOE-OFES.
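
    The drive in such liner implosions is magnetic pressure, P = B^2 / (2 mu0), which rises quadratically with the field produced by the pulsed current. A quick illustrative calculation (field values are hypothetical, chosen only to show the scale) makes clear why the metal shell implodes.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def magnetic_pressure(B):
    """Magnetic pressure in Pa for a field of B tesla: P = B^2 / (2 mu0)."""
    return B**2 / (2.0 * MU0)

# A pulsed-power driver producing ~100 T at the liner surface exerts
# roughly 4 GPa, far beyond the strength of any metal shell.
for B in (10.0, 100.0):
    print(f"B = {B:6.1f} T -> P = {magnetic_pressure(B):.3e} Pa")
```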

  20. Gender Differences in Attitudes toward Mathematics between Low-Achieving and High-Achieving Fifth Grade Elementary Students.

    ERIC Educational Resources Information Center

    Rathbone, A. Sue

    Possible gender differences in attitudes toward mathematics were studied between low-achieving and high-achieving fifth-grade students in selected elementary schools within a large, metropolitan area. The attitudes of pre-adolescent children at an intermediate grade level were assessed to determine the effects of rapidly emerging gender-related…