Shi, Jianyong; Qian, Xuede; Liu, Xiaodong; Sun, Long; Liao, Zhiqiang
2016-09-01
The total compression of municipal solid waste (MSW) consists of primary, secondary, and decomposition compression. It is usually difficult to distinguish among these three parts. In this study, the oedometer test was used to distinguish between the primary and secondary compressions and to determine the primary and secondary compression coefficients. In addition, the ending time of the primary compression was proposed based on municipal solid waste compression tests under a degradation-inhibited condition achieved by adding vinegar. The amount of secondary compression occurring in the primary compression stage accounts for a relatively high percentage of both the total compression and the total secondary compression. The relationship between the degradation ratio and time was obtained from the tests independently. Furthermore, a combined compression calculation method for municipal solid waste covering all three parts of compression, including organics degradation, is proposed based on a one-dimensional compression method. The relationship between the methane generation potential L0 of the LandGEM model and the degradation compression index is also discussed in the paper. A special column compression apparatus system, which can be used to simulate the whole compression process of municipal solid waste in China, was designed. Based on the results obtained from the 197-day column compression test, the new combined calculation method for municipal solid waste compression was analyzed. Degradation compression is the main part of the compression of MSW in the middle period of the test. Copyright © 2015 Elsevier Ltd. All rights reserved.
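The abstract does not reproduce the combined formula itself; as a point of reference, a commonly used one-dimensional decomposition of MSW settlement (with the degradation term written generically, since the paper's specific degradation compression index is not given here) is

\[
S(t) = \frac{C_c}{1+e_0}\, H_0 \log_{10}\!\frac{\sigma_0' + \Delta\sigma}{\sigma_0'}
     \;+\; C_\alpha\, H_0 \log_{10}\!\frac{t}{t_p}
     \;+\; S_{\mathrm{deg}}\!\big(R_d(t)\big),
\]

where the first term is primary compression (compression index \(C_c\), initial void ratio \(e_0\), initial thickness \(H_0\)), the second is secondary compression for \(t > t_p\) (secondary compression coefficient \(C_\alpha\), end-of-primary time \(t_p\)), and the third is the degradation-induced settlement expressed through the degradation ratio \(R_d(t)\) measured in the tests.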
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
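The paper's actual bit-code tables are not reproduced in the abstract; as a minimal sketch of the underlying idea of assigning fixed binary codes to DNA bases, the code below packs each base into two bits. This is only the 2 bits/base baseline that the reported 1.58 bits/base improves upon, not DNABIT Compress itself, and the function name is illustrative.

```python
# Minimal sketch: pack a DNA sequence into 2 bits per base (A, C, G, T).
# This is the fixed-code baseline that DNABIT Compress improves on, not the
# paper's actual bit-code assignment for repeat fragments.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq: str) -> bytes:
    out, bits, nbits = bytearray(), 0, 0
    for base in seq.upper():
        bits = (bits << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(bits)
            bits, nbits = 0, 0
    if nbits:
        out.append(bits << (8 - nbits))   # pad the final partial byte
    return bytes(out)

packed = pack_2bit("ACGTACGTAC")
print(len(packed), "bytes for 10 bases")  # 3 bytes instead of 10 ASCII bytes
```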
Graphics processing unit-assisted lossless decompression
Loughry, Thomas A.
2016-04-12
Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
2014-01-15
in a Light Duty Engine Under Conventional Diesel, Homogeneous Charge Compression Ignition, and Reactivity Controlled Compression Ignition ... Conventional Diesel (CDC), Homogeneous Charge Compression Ignition (HCCI), and Reactivity Controlled Compression Ignition (RCCI) combustion ... (LTC) regimes, including reactivity controlled compression ignition (RCCI), partially premixed combustion (PPC), and homogeneous charge compression
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...
An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1
1992-11-01
Lempel, Ziv, Welch (LZW) Compression ... Lossless Compression Tests Results ... Exact ... since IBM holds the patent for this technique. Lempel, Ziv, Welch (LZW) Compression: The LZW compression is related to two compression techniques known as ... compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a
Composeable Chat over Low-Bandwidth Intermittent Communication Links
2007-04-01
Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric ... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously ... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression
A hybrid data compression approach for online backup service
NASA Astrophysics Data System (ADS)
Wang, Hua; Zhou, Ke; Qin, MingKang
2009-08-01
With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations: data stream compression can only realize intra-file compression, de-duplication only eliminates inter-file redundant data, and neither alone meets the efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant copies across different users, while the latter adopts data stream compression technology to remove intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that the hybrid compression policy yields a large improvement.
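The paper's own implementation is not given in the abstract; as a minimal sketch of the two-level idea it describes, the code below de-duplicates identical files across users by content hash (global level) and then stream-compresses each unique file (block level). The class, hash choice, and zlib stand-in are illustrative assumptions.

```python
# Minimal sketch of a two-level hybrid backup compressor (illustrative only):
# level 1 de-duplicates identical files across users via content hashing,
# level 2 applies stream compression (zlib) to each unique file.
import hashlib
import zlib

class HybridStore:
    def __init__(self):
        self.blobs = {}        # content hash -> compressed bytes
        self.catalog = {}      # (user, filename) -> content hash

    def backup(self, user: str, filename: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:              # global (inter-file) de-duplication
            self.blobs[digest] = zlib.compress(data)   # intra-file compression
        self.catalog[(user, filename)] = digest

    def restore(self, user: str, filename: str) -> bytes:
        return zlib.decompress(self.blobs[self.catalog[(user, filename)]])

store = HybridStore()
store.backup("alice", "report.txt", b"quarterly numbers " * 100)
store.backup("bob", "report_copy.txt", b"quarterly numbers " * 100)  # stored once
print(len(store.blobs), "unique blob(s) kept")
```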
Cardiopulmonary resuscitation by chest compression alone or with mouth-to-mouth ventilation.
Hallstrom, A; Cobb, L; Johnson, E; Copass, M
2000-05-25
Despite extensive training of citizens of Seattle in cardiopulmonary resuscitation (CPR), bystanders do not perform CPR in almost half of witnessed cardiac arrests. Instructions in chest compression plus mouth-to-mouth ventilation given by dispatchers over the telephone can require 2.4 minutes. In experimental studies, chest compression alone is associated with survival rates similar to those with chest compression plus mouth-to-mouth ventilation. We conducted a randomized study to compare CPR by chest compression alone with CPR by chest compression plus mouth-to-mouth ventilation. The setting of the trial was an urban, fire-department-based, emergency-medical-care system with central dispatching. In a randomized manner, telephone dispatchers gave bystanders at the scene of apparent cardiac arrest instructions in either chest compression alone or chest compression plus mouth-to-mouth ventilation. The primary end point was survival to hospital discharge. Data were analyzed for 241 patients randomly assigned to receive chest compression alone and 279 assigned to chest compression plus mouth-to-mouth ventilation. Complete instructions were delivered in 62 percent of episodes for the group receiving chest compression plus mouth-to-mouth ventilation and 81 percent of episodes for the group receiving chest compression alone (P=0.005). Instructions for compression required 1.4 minutes less to complete than instructions for compression plus mouth-to-mouth ventilation. Survival to hospital discharge was better among patients assigned to chest compression alone than among those assigned to chest compression plus mouth-to-mouth ventilation (14.6 percent vs. 10.4 percent), but the difference was not statistically significant (P=0.18). The outcome after CPR with chest compression alone is similar to that after chest compression with mouth-to-mouth ventilation, and chest compression alone may be the preferred approach for bystanders inexperienced in CPR.
Kılıç, D; Göksu, E; Kılıç, T; Buyurgan, C S
2018-05-01
The aim of this randomized cross-over study was to compare one-minute and two-minute continuous chest compressions in terms of chest-compression-only CPR quality metrics on a mannequin model in the ED. Thirty-six emergency medicine residents participated in this study. In the 1-minute group, there was no statistically significant difference in the mean compression rate (p=0.83), mean compression depth (p=0.61), good compressions (p=0.31), the percentage of complete release (p=0.07), adequate compression depth (p=0.11) or the percentage of good rate (p=0.51) over the four-minute time period. Only flow time was statistically significant among the 1-minute intervals (p<0.001). In the 2-minute group, the mean compression depth (p=0.19), good compression (p=0.92), the percentage of complete release (p=0.28), adequate compression depth (p=0.96), and the percentage of good rate (p=0.09) did not differ significantly over time. In this group, the number of compressions (248±31 vs 253±33, p=0.01), mean compression rates (123±15 vs 126±17, p=0.01) and flow time (p=0.001) differed significantly across the two-minute intervals. There was no statistically significant difference in the mean number of chest compressions per minute, mean chest compression depth, the percentage of good compressions, complete release, adequate chest compression depth or percentage of good compressions between the 1-minute and 2-minute groups. There was no statistically significant difference in the quality metrics of chest compressions between the 1- and 2-minute chest-compression-only groups. Copyright © 2017 Elsevier Inc. All rights reserved.
Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun
2013-07-01
Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle represents the fraction of each cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following the guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase in average chest compression depth with increasing chest compression rate was observed only with the normal down-stroke pattern (p=0.004). Inducing a shorter compression phase is correlated with deeper chest compression during metronome-guided cardiopulmonary resuscitation.
The effect of compression on individual pressure vessel nickel/hydrogen components
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Perez-Davis, Marla E.
1988-01-01
Compression tests were performed on representative Individual Pressure Vessel (IPV) Nickel/Hydrogen cell components in an effort to better understand the effects of force on component compression and the interactions of components under compression. It appears that the separator is the most easily compressed of all of the stack components. It will typically partially compress before any of the other components begin to compress. The compression characteristics of the cell components in assembly differed considerably from what would be predicted based on individual compression characteristics. Component interactions played a significant role in the stack response to compression. The results of the compression tests were factored into the design and selection of Belleville washers added to the cell stack to accommodate nickel electrode expansion while keeping the pressure on the stack within a reasonable range of the original preset.
Sperlich, Billy; Born, Dennis-Peter; Kaskinoro, Kimmo; Kalliokoski, Kari K; Laaksonen, Marko S
2013-01-01
The purpose of this experiment was to investigate skeletal muscle blood flow and glucose uptake in m. biceps femoris (BF) and m. quadriceps femoris (QF) 1) during recovery from high intensity cycle exercise, and 2) while wearing a compression short applying ~37 mmHg to the thigh muscles. Blood flow and glucose uptake were measured in the compressed and non-compressed leg of 6 healthy men by using positron emission tomography. At baseline, blood flow in QF (P = 0.79) and BF (P = 0.90) did not differ between the compressed and the non-compressed leg. During recovery, muscle blood flow was higher compared to baseline in both the compressed (P<0.01) and non-compressed QF (P<0.001), but not in the compressed (P = 0.41) and non-compressed BF (P = 0.05; effect size = 2.74). During recovery, blood flow was lower in the compressed QF (P<0.01) but not in BF (P = 0.26) compared to the non-compressed muscles. During baseline and recovery, no differences in blood flow were detected between the superficial and deep parts of QF in either the compressed (baseline P = 0.79; recovery P = 0.68) or the non-compressed leg (baseline P = 0.64; recovery P = 0.06). During recovery, glucose uptake was higher in QF compared to BF in both conditions (P<0.01), with no difference between the compressed and non-compressed thigh. Glucose uptake was higher in the deep compared to the superficial parts of QF (compressed leg P = 0.02). These results demonstrate that wearing compression shorts with ~37 mmHg of external pressure reduces blood flow both in the deep and superficial regions of muscle tissue during recovery from high intensity exercise but does not affect glucose uptake in BF and QF.
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
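The fpack/funpack tools themselves are not reproduced here; as a minimal sketch of the quantization idea described above, the code below scales floating-point pixels by a step size tied to a rough noise estimate, rounds them to integers, and applies a generic lossless compressor. The synthetic image, noise estimator, and step-size choice are illustrative assumptions.

```python
# Illustrative sketch: quantize noisy float pixels to scaled integers so a
# generic lossless compressor can shrink them (the idea behind the quantization
# step described above, not the actual fpack implementation).
import zlib
import numpy as np

rng = np.random.default_rng(0)
image = 1000.0 + 5.0 * rng.standard_normal((256, 256))        # synthetic noisy frame

sigma = np.median(np.abs(np.diff(image, axis=1))) / 0.6745    # rough noise estimate
q = sigma / 4.0                      # keep ~4 quantization levels per noise sigma
quantized = np.round(image / q).astype(np.int32)

raw = image.astype(np.float32).tobytes()
packed = quantized.tobytes()
print("float32 zlib ratio  :", round(len(raw) / len(zlib.compress(raw)), 2))
print("quantized zlib ratio:", round(len(packed) / len(zlib.compress(packed)), 2))
```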
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) gray-scale images showed very promising results.
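The paper's regression models are not available from the abstract; the sketch below only illustrates the general three-step workflow it describes, sweeping one compression parameter (JPEG quality via the Pillow library), measuring one IQ metric (PSNR), fitting a simple regression, and inverting it for a target PSNR. The metric, model form, library, and target value are assumptions.

```python
# Sketch of the IQ-guided workflow: sweep JPEG quality, measure PSNR, fit a
# regression, then invert it to choose a quality for a target PSNR.
# Pillow's JPEG quality knob and a quadratic fit stand in for the paper's models.
import io
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(1)
gray = (rng.random((128, 128)) * 255).astype(np.uint8)      # stand-in image

qualities, scores = [], []
for q in range(20, 96, 5):                                  # step 1: sweep settings, measure IQ
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format="JPEG", quality=q)
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    qualities.append(q)
    scores.append(psnr(gray, decoded))

a, b, c = np.polyfit(qualities, scores, 2)                  # step 2: regression model

target_psnr = 30.0                                          # step 3: pick a setting
roots = np.roots([a, b, c - target_psnr])
candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and 20 <= r.real <= 95]
q_star = min(candidates) if candidates else 95
print(f"use JPEG quality ~{q_star:.0f} for about {target_psnr} dB PSNR")
```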
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Data Compression Techniques for Maps
1989-01-01
Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
The effect of hydraulic bed movement on the quality of chest compressions.
Park, Maeng Real; Lee, Dae Sup; In Kim, Yong; Ryu, Ji Ho; Cho, Young Mo; Kim, Hyung Bin; Yeom, Seok Ran; Min, Mun Ki
2017-08-01
The hydraulic height control systems of hospital beds provide convenience and shock absorption. However, movement in a hydraulic bed may reduce the effectiveness of chest compressions. This study investigated the effects of hydraulic bed movement on chest compressions. Twenty-eight participants were recruited for this study. All participants performed chest compressions for 2 min on a manikin on three surfaces: the floor (Day 1), a firm plywood bed (Day 2), and a hydraulic bed (Day 3). The 28 participants on Day 1 served as the control condition, and the same participants on Day 2 and Day 3 as the study conditions. The compression rates, depths, and good compression ratios (>5-cm compressions/all compressions) were compared between the three surfaces. When we compared the three surfaces, we did not detect a significant difference in the speed of chest compressions (p=0.582). However, significantly lower values were observed on the hydraulic bed in terms of compression depth (p=0.001) and the good compression ratio (p=0.003) compared to floor compressions. When we compared the plywood and hydraulic beds, we did not detect significant differences in compression depth (p=0.351) or the good compression ratio (p=0.391). These results indicate that the movements of our hydraulic bed were associated with a non-statistically significant trend towards lower-quality chest compressions. Copyright © 2017 Elsevier Inc. All rights reserved.
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
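The abstract evaluates distortion via the percent residual difference; a minimal sketch of that metric in its common energy-normalized form (which may differ in normalization from the paper's) is shown below with a stand-in signal.

```python
# Percent residual difference (PRD) between an original myoelectric signal x
# and its reconstruction x_hat -- the distortion metric cited in the abstract.
# Normalization conventions vary; this is the plain energy-normalized form.
import numpy as np

def prd(x: np.ndarray, x_hat: np.ndarray) -> float:
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

x = np.sin(np.linspace(0, 20 * np.pi, 2000))                    # stand-in signal
x_hat = x + 0.01 * np.random.default_rng(2).standard_normal(x.size)
print(f"PRD = {prd(x, x_hat):.2f}%")
```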
Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery
NASA Technical Reports Server (NTRS)
Xie, Hua; Klimesh, Matthew A.
2009-01-01
This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
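As a minimal sketch of the predictive near-lossless scheme outlined above (not the NASA algorithm itself), the code below predicts each sample from the previously reconstructed one, quantizes the prediction residual with a user-chosen error bound, and entropy-codes the residual stream with a generic compressor; the encoder tracks reconstructed values so encoder and decoder stay in lockstep. The simple previous-sample predictor and the zlib back end are assumptions.

```python
# Sketch of near-lossless predictive coding: predict from the *reconstructed*
# previous sample, quantize the residual to +/- max_err, and compress residuals.
# zlib stands in for the residual entropy coder; the real technique uses its own.
import zlib
import numpy as np

def encode(samples: np.ndarray, max_err: int = 2) -> bytes:
    step = 2 * max_err + 1
    prev_rec = 0
    quantized = np.empty(samples.size, dtype=np.int32)
    for i, s in enumerate(samples):
        residual = int(s) - prev_rec              # prediction residual
        q = int(np.round(residual / step))        # near-lossless quantization
        quantized[i] = q
        prev_rec = prev_rec + q * step            # value the decoder will rebuild
    return zlib.compress(quantized.tobytes())

def decode(blob: bytes, max_err: int = 2) -> np.ndarray:
    step = 2 * max_err + 1
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return np.cumsum(q * step)                    # reconstruct samples in order

data = np.cumsum(np.random.default_rng(3).integers(-3, 4, 5000)) + 500
blob = encode(data, max_err=2)
rec = decode(blob, max_err=2)
assert np.max(np.abs(rec - data)) <= 2            # bounded reconstruction error
print("ratio:", round(data.astype(np.int16).nbytes / len(blob), 2))
```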
FRESCO: Referential compression of highly similar sequences.
Wandelt, Sebastian; Leser, Ulf
2013-01-01
In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
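FRESCO itself is not reproduced here; as a toy sketch of referential compression in the sense described above, the code below encodes an input sequence as (position, length) matches against a reference plus literals for unmatched characters, using a simple k-mer index. The greedy matching strategy and k value are illustrative assumptions, not FRESCO's algorithm.

```python
# Toy referential compressor: encode the target as matches (ref_pos, length)
# against a reference sequence, plus literals where no match is found.
# Greedy k-mer matching here is illustrative, not FRESCO's method.
K = 12

def build_index(reference: str) -> dict:
    index = {}
    for i in range(len(reference) - K + 1):
        index.setdefault(reference[i:i + K], i)
    return index

def ref_compress(target: str, reference: str, index: dict) -> list:
    ops, i = [], 0
    while i < len(target):
        pos = index.get(target[i:i + K])
        if pos is None:
            ops.append(("lit", target[i]))        # literal for an unmatched base
            i += 1
        else:
            length = K                            # greedily extend the match
            while (i + length < len(target) and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            ops.append(("match", pos, length))
            i += length
    return ops

def ref_decompress(ops: list, reference: str) -> str:
    out = []
    for op in ops:
        out.append(op[1] if op[0] == "lit" else reference[op[1]:op[1] + op[2]])
    return "".join(out)

reference = "ACGT" * 5000
target = reference[:12000] + "T" + reference[12000:19999]    # one inserted base
ops = ref_compress(target, reference, build_index(reference))
assert ref_decompress(ops, reference) == target
print(len(ops), "ops encode", len(target), "bases")
```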
Karaarslan, A A; Acar, N
2018-02-01
Rotational instability and locking screw failure are common problems. We aimed to determine the optimal torque wrench offering maximum rotational stiffness without locking screw failure. We used 10 conventional compression nails, 10 novel compression nails and 10 interlocking nails with 30 composite femurs. We examined rotational stiffness and fracture site compression value, measured by a load cell, with 3, 6 and 8 Nm torque wrenches, using a torsion apparatus with a maximum torque moment of 5 Nm in both directions. The rotational stiffness of the composite femur-nail constructs was calculated. The rotational stiffness of composite femur-compression nail constructs compressed with a 6 Nm torque wrench was 3.27 ± 1.81 Nm/angle (fracture site compression: 1588 N), 60% more than that compressed with a 3 Nm torque wrench (advised previously), which gave 2.04 ± 0.81 Nm/angle (interfragmentary compression: 818 N) (P = 0.000). The rotational stiffness of composite femur-compression nail constructs compressed with a 3 Nm torque wrench was 2.04 ± 0.81 Nm/angle (fracture site compression: 818 N), 277% more than that of the interlocking nail with 0.54 ± 0.08 Nm/angle (fracture site compression: 0 N) (P = 0.000). The rotational stiffness and fracture site compression value produced by the 3 Nm torque wrench were not satisfactory. To obtain maximum rotational stiffness and fracture site compression value without locking screw failure, a 6 Nm torque wrench should be used with compression nails and an 8 Nm torque wrench with novel compression nails.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
Method for preventing jamming conditions in a compression device
Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.
2002-06-18
A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different degrees with JPEG and Wavelet methods. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression, and for a given image size Wavelet compression produced better images than JPEG compression. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for Wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-01-01
Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Intervention Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Results Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). Conclusions The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. PMID:26873050
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
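As a small sketch of the point above about DPCM preprocessing, the code below compresses an 8-bit image before and after converting it to a differential image; zlib stands in for the LZW-style dictionary coder discussed in the article, and the synthetic smooth image is an assumption.

```python
# Sketch: a differential (DPCM) transform usually makes a smooth image easier
# to compress. zlib stands in for the LZW/dictionary coders named in the text.
import zlib
import numpy as np

x, y = np.meshgrid(np.arange(256), np.arange(256))
image = ((x + y) / 2).astype(np.uint8)            # smooth synthetic 8-bit image

# DPCM along rows: keep the first column, store horizontal differences elsewhere.
dpcm = np.empty_like(image)
dpcm[:, 0] = image[:, 0]
dpcm[:, 1:] = np.diff(image.astype(np.int16), axis=1).astype(np.int8).view(np.uint8)

print("plain bytes after zlib:", len(zlib.compress(image.tobytes())))
print("DPCM  bytes after zlib:", len(zlib.compress(dpcm.tobytes())))
```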
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, however future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources is examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
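The CCSDS DWT/BPE compressor itself is not reproduced here; as a toy illustration of why bit-plane encoding suits count data, the sketch below splits a synthetic integer count array into bit planes and shows that the high-order planes are almost entirely zero, which is what makes them cheap to encode. The Poisson test data is an assumption.

```python
# Toy illustration of bit-plane decomposition of plasma-style count data:
# most counts are small, so high-order bit planes are nearly all zeros and a
# bit-plane encoder can represent them very compactly.
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(lam=3.0, size=4096).astype(np.uint16)   # synthetic counts

for plane in range(15, -1, -1):
    bits = (counts >> plane) & 1
    ones = int(bits.sum())
    if ones or plane < 4:            # skip printing the all-zero top planes
        print(f"bit plane {plane:2d}: {ones:5d} ones of {bits.size}")
```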
Chest compression rates and survival following out-of-hospital cardiac arrest.
Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P
2015-04-01
Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data is from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed odds ratio for survival by compression rate categories (<80, 80-99, 100-119, 120-139, ≥140), both unadjusted and adjusted for sex, age, witnessed status, attempted bystander cardiopulmonary resuscitation, location of arrest, chest compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest compression fraction and depth, compression rates between 100 and 120 per minute were associated with greatest survival to hospital discharge.
System using data compression and hashing adapted for use for multimedia encryption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffland, Douglas R
2011-07-12
A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
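The patent abstract names three modules without detailing them; the sketch below strings the three described steps together with standard-library stand-ins (zlib for compression, a fixed byte-stride for data selection, SHA-256 for hashing). The selection rule and hash choice are assumptions, not the patented design.

```python
# Sketch of the described pipeline: compress a media signal, select a subset of
# the compressed stream, and hash that subset into a keyword. The selection rule
# (every 64th byte) and SHA-256 are stand-ins for the patent's modules.
import hashlib
import zlib

def keyword_from_media(signal: bytes) -> str:
    compressed = zlib.compress(signal)            # data compression module
    selected = compressed[::64]                   # data acquisition module (illustrative)
    return hashlib.sha256(selected).hexdigest()   # hashing module -> keyword

media = bytes(range(256)) * 512                   # stand-in media signal
print(keyword_from_media(media)[:16], "...")
```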
Effect of compressibility on the hypervelocity penetration
NASA Astrophysics Data System (ADS)
Song, W. J.; Chen, X. W.; Chen, P.
2018-02-01
We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited band-width. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
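As a minimal sketch of the competitive idea above (several compressors work on the same data and a referee keeps the momentarily best output), the code below runs zlib, bz2, and lzma on each block and keeps the smallest result. The codec choice, block size, and sequential execution are assumptions; the proposed system runs its algorithms on parallel processors.

```python
# Sketch of competitive compression: run several codecs on each block and let a
# "referee" keep whichever output is smallest. Codec choice and block size are
# illustrative, not the system described in the brief.
import bz2
import lzma
import zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def referee(block: bytes) -> tuple:
    results = {name: codec(block) for name, codec in CODECS.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

data = b"stereoscopic color frame " * 4000
for start in range(0, len(data), 32768):          # compress block by block
    name, payload = referee(data[start:start + 32768])
    print(f"block @{start:6d}: {name} wins with {len(payload)} bytes")
```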
Wang, X H; Mao, T T; Pan, Y Y; Xie, H H; Zhang, H Y; Xiao, J; Jiang, L P
2016-03-01
To observe the expressions of tumor necrosis factor alpha (TNF-α), matrix metalloproteinase 2 (MMP-2) and collagen in the local skin tissue of pressure ulcers in rats, and to explore the possible mechanism of the pathogenesis of pressure ulcers. Forty male SD rats were divided into a normal control group, 3 d compression group, 5 d compression group, 7 d compression group, and 9 d compression group according to the random number table, with 8 rats in each group. The rats in the normal control group did not receive any treatment, whereas the rats in the latter 4 groups were used to establish the deep tissue injury model (3 d compression group) and the pressure ulcer model (the other 3 groups) on the gracilis muscle of both hind limbs, using cyclic ischemia-reperfusion compression applied with a magnet. The rats in the 3 d compression group received only three cycles of compression, while the compressed skin of the rats in the 5 d, 7 d, and 9 d compression groups was cut through and subjected to 5, 7 and 9 cycles of pressure, respectively, after the initial three cycles of compression. The rats in the 3 d compression group were sacrificed immediately after receiving compression for 3 d (the rats in the normal control group were sacrificed at the same time), and the rats in the other 3 groups were sacrificed after receiving compression for 5, 7, and 9 d, respectively; the skin tissue on the central part of the gracilis muscle of both hind limbs was harvested. The morphology of the skin tissue was observed with HE staining. The expression of collagen fiber was observed with Masson staining. The expressions of collagen type Ⅳ and MMP-2 were detected by the immunohistochemical method. The expressions of TNF-α and phosphorylated NF-κB were determined by Western blotting. Data were processed with one-way analysis of variance and the LSD test. (1) In the normal control group, the skin tissue of rats was stratified squamous epithelium with a clear structure, and there was no obvious infiltration of inflammatory cells. In the 3 d compression group, the skin layers were clear, with quite a few fibroblasts, and inflammatory cells began to infiltrate. In the 5 d, 7 d, and 9 d compression groups, the epidermis thickened, the number of fibroblasts decreased, and the infiltration of inflammatory cells increased as the compression time was prolonged. (2) In the normal control group, the collagen fibers in the skin tissue were arranged in order, with rich content. In the 3 d compression group, the collagen fibers were arranged orderly, with a high expression level similar to that in the normal control group (P>0.05). In the 5 d and 7 d compression groups, the collagen fibers were arranged in disorder, with the expression level gradually reduced and significantly lower than that in the normal control group (with P values below 0.01). In the 9 d compression group, the expression of collagen fiber was a little higher than that in the 7 d compression group, but still significantly lower than that in the normal control group (P<0.01). (3) The expressions of collagen type Ⅳ in the skin tissue of rats in the normal control group, 3 d compression group, 5 d compression group, 7 d compression group, and 9 d compression group were 11.0±2.8, 9.0±1.7, 8.3±2.8, 5.1±1.8, and 5.4±1.2, respectively.
The expression of collagen type Ⅳ in skin tissue of rats in 3 d compression group was similar to that in normal control group (P>0.05). The expressions of collagen type Ⅳ in skin tissue of rats in 5 d compression group, 7 d compression group, and 9 d compression group were significantly lower than that in normal control group (P<0.05 or P<0.01). The expression of MMP-2 in skin tissue of rats in 3 d compression group was similar to that in normal control group (P>0.05). The expressions of MMP-2 in skin tissue of rats in 5 d compression group, 7 d compression group, and 9 d compression group were significantly higher than that in normal control group (P<0.05 or P<0.01). (4) The expression of TNF-α in skin tissue of rats in normal control group was 0.48±0.11, and the expressions of TNF-α in skin tissue of rats in 3 d compression group, 5 d compression group, 7 d compression group, and 9 d compression group were respectively 0.84±0.08, 1.13±0.19, 1.34±0.16, and 1.52±0.23, which were all significantly higher than that in normal control group (with P values below 0.01). The expressions of phosphorylated NF-κB in skin tissue of rats in 3 d compression group and 9 d compression group were similar to that in normal control group (with P values above 0.05), and the expressions of phosphorylated NF-κB in skin tissue of rats in 5 d compression group and 7 d compression group were significantly higher than that in normal control group (P<0.05 or P<0.01). The high expression of MMP-2 and reduction of collagen induced by inflammatory reaction mediated by the high expression of TNF-α in local skin tissue of pressure ulcer of rats may be one of the important reasons for the formation of pressure ulcer.
Mammographic compression in Asian women.
Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong
2017-01-01
To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits above lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
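For context, a minimal Golomb-Rice encoder sketch is given below. It illustrates the class of entropy coder compared against Prism CT above, not the Prism CT algorithm itself; the parameter k and the sample residuals are illustrative assumptions.

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers with parameter k (M = 2**k).

    Returns the code as a string of '0'/'1' characters for clarity; a real
    coder would pack bits. Illustrative only, not the Prism CT algorithm.
    """
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)   # quotient and remainder
        bits.append("1" * q + "0")          # unary-coded quotient
        bits.append(format(r, f"0{k}b"))    # k-bit binary remainder
    return "".join(bits)

# Example: small residuals such as those from a predictive stage
residuals = [0, 3, 1, 7, 2, 0, 5]
code = rice_encode(residuals, k=2)
print(code, len(code), "bits vs", 8 * len(residuals), "bits raw")
```

Golomb-Rice coders of this kind are effective when the residuals are small and geometrically distributed, which is why they appear as baselines in comparisons like the one above.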
Prediction of compression-induced image interpretability degradation
NASA Astrophysics Data System (ADS)
Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen
2018-04-01
Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.
Fundamental study of compression for movie files of coronary angiography
NASA Astrophysics Data System (ADS)
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
When network distribution of movie files is considered, lossy compression that yields small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as the examination objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI format movies with different motion speeds. Five kinds of movies, the four compressed versions and the non-compressed AVI used in place of the DICOM format, were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. For the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. For the virtual normal movie, a different compression technique was best for each evaluation factor. For the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format depends on the speed of the movie because of differences among compression algorithms; this is thought to reflect the influence of inter-frame compression. Movie compression algorithms use both inter-frame and intra-frame compression. Because each compression method influences the image differently, it is necessary to examine the relationship between the compression algorithm and our results.
Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo
2016-02-12
To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
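As an illustration of the statistical comparison described above, the sketch below runs Friedman's test on invented compression-depth data with SciPy, followed by one common post hoc choice (the Wilcoxon signed-rank test); the abstract does not name which post hoc test was used, and all numbers are fabricated for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical mean compression depths (mm) for 37 participants after
# training at 100, 120 and 140 compressions/min; values are invented.
rng = np.random.default_rng(0)
depth_100 = 50 + rng.normal(0, 3, size=37)
depth_120 = 46 + rng.normal(0, 3, size=37)
depth_140 = 45 + rng.normal(0, 3, size=37)

stat, p = friedmanchisquare(depth_100, depth_120, depth_140)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Post hoc pairwise comparison between two training rates (paired samples)
w_stat, w_p = wilcoxon(depth_100, depth_120)
print(f"Wilcoxon 100 vs 120 cpm: p = {w_p:.4f}")
```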
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
Sensitivity Analysis in RIPless Compressed Sensing
2014-10-01
The compressive sensing framework finds a wide range of applications in signal processing and analysis. This report presents a sensitivity analysis of compressive sensing solutions. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is...
Compression device for feeding a waste material to a reactor
Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.
2001-08-21
A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, their transmission, and their reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, direct application of wavelets does not allow high compression values to be obtained. However, additional preprocessing and postprocessing can afford significant compression of holograms and acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is considered, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed. A comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients is performed. Optimum compression parameters for these methods can be estimated. The size of the holographic information was decreased by up to 190 times.
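A minimal sketch of the threshold-and-quantize stage described above, applied to one 2D component (for example, the amplitude of the Fourier spectrum) with PyWavelets; the wavelet, decomposition level, kept-coefficient fraction, and quantizer depth are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt

def compress_component(component, wavelet="db4", level=3, keep=0.05, bits=8):
    """Threshold and uniformly quantize the 2D wavelet coefficients.

    `keep` is the fraction of largest-magnitude coefficients retained and
    `bits` the quantizer depth; both are illustrative parameters.
    """
    coeffs = pywt.wavedec2(component, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)   # hard threshold
    arr[np.abs(arr) < thresh] = 0.0
    scale = max(np.abs(arr).max(), 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(arr / scale).astype(np.int16)    # uniform quantization
    return q, scale, slices, wavelet

def decompress_component(q, scale, slices, wavelet):
    coeffs = pywt.array_to_coeffs(q.astype(float) * scale, slices,
                                  output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# Example on a synthetic "amplitude" component
amp = np.random.rand(256, 256)
q, scale, slices, wv = compress_component(amp)
rec = decompress_component(q, scale, slices, wv)
print("nonzero coefficients:", np.count_nonzero(q), "of", q.size)
```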
Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor
2014-10-01
The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented. Copyright © 2014 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
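A minimal sketch of the same FITS tiled-image compression convention using astropy's CompImageHDU (fpack and funpack themselves are command-line tools built on CFITSIO, as the abstract notes); the synthetic data and the choice of Rice compression are illustrative.

```python
import numpy as np
from astropy.io import fits

# Synthetic 16-bit image standing in for a FITS science image
data = np.random.poisson(lam=100, size=(512, 512)).astype(np.int16)

# Rice-compressed tiled image; the default tiling is row by row,
# mirroring fpack's usual row-by-row tiling pattern.
hdu = fits.CompImageHDU(data=data, compression_type="RICE_1")
hdu.writeto("compressed.fits", overwrite=True)

# Reading transparently decompresses, analogous to funpack.
with fits.open("compressed.fits") as hdul:
    restored = hdul[1].data   # compressed image lives in a binary-table extension
    assert np.array_equal(restored, data)   # Rice on integer data is lossless
```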
Subjective evaluation of compressed image quality
NASA Astrophysics Data System (ADS)
Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.
NASA Technical Reports Server (NTRS)
Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.
2011-01-01
Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.
Operations and maintenance in the glass container industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbieri, D.; Jacobson, D.
1999-07-01
Compressed air is a significant electrical end-use at most manufacturing facilities, and few industries utilize compressed air to the extent of the glass container industry. Unfortunately, compressed air is often a significant source of wasted energy because many customers view it as a low-maintenance system. In the case of the glass container industry, compressed air is a mission-critical system used for driving production machinery, blowing glass, cooling plungers and product, and packaging. Leakage totaling 10% of total compressed air capacity is not uncommon, and leakage rates upwards of 40% have been observed. Even though energy savings from repairing compressed air leaks can be substantial, regular maintenance procedures are often not in place for compressed air systems. In order to achieve future savings in the compressed air end-use, O and M programs must make a special effort to educate customers on the significant energy impacts of regular compressed air system maintenance. This paper focuses on the glass industry, its reliance on compressed air, and the unique savings potential in the glass container industry. Through a technical review of the glass production process, this paper identifies compressed air as a highly significant electrical consumer in these facilities and presents ideas on how to produce and deliver compressed air in a more efficient manner. It also examines a glass container manufacturer with extremely high savings potential in compressed air systems, but little initiative to establish and perform compressed air maintenance due to an "if it works, don't mess with it" maintenance philosophy. Finally, this paper addresses the economic benefit of compressed air maintenance in this and other manufacturing industries.
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
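A sketch of a global NMSE measure of the kind described above, using the common normalization sum((f - g)^2) / sum(f^2); the dissertation's exact definition may differ in detail, and the test image here is synthetic.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an image and its reconstruction.

    Defined here as sum((f - g)^2) / sum(f^2); other normalizations exist.
    """
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

# Example with a synthetic image and an additive-noise "reconstruction"
img = np.random.rand(512, 512) * 255
rec = img + np.random.normal(0, 2.0, img.shape)
print(f"NMSE = {nmse(img, rec):.2e}")
```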
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the more time saved. In communication, we always want to transmit data efficiently and without noise. This paper provides some compression techniques for lossless compression of text-type data and comparative results for multiple and single compression, which will help identify better compression outputs and develop compression algorithms.
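A sketch comparing general-purpose lossless coders on text data with Python's standard library, together with a single- versus multiple-compression check; these coders stand in for whichever specific techniques the paper compares.

```python
import bz2
import lzma
import zlib

text = b"Data compression is very necessary in business data processing. " * 200

coders = {"zlib (DEFLATE)": zlib.compress, "bz2 (BWT)": bz2.compress, "lzma": lzma.compress}
for name, compress in coders.items():
    out = compress(text)
    print(f"{name:15s}: {len(text)} -> {len(out)} bytes "
          f"(ratio {len(text) / len(out):.1f}:1)")

# Multiple compression: re-compressing already-compressed output usually
# gains little or even expands it, since the redundancy is already removed.
once = zlib.compress(text)
twice = zlib.compress(once)
print(f"single {len(once)} bytes vs double {len(twice)} bytes")
```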
Nootheti, Pavan K; Cadag, Kristian M; Magpantay, Angela; Goldman, Mitchel P
2009-01-01
Sclerotherapy with post-treatment graduated compression remains the criterion standard for treating lower leg telangiectatic, reticular, and varicose veins, but the optimal duration for that postsclerotherapy compression is unknown. To determine whether 3 weeks of additional graduated compression with Class I compression stockings (20-30 mmHg) improves efficacy when used immediately after 1 week of Class II (30-40 mmHg) graduated compression stockings. Twenty-nine patients with reticular or telangiectatic leg veins were treated with sclerotherapy; one leg was assigned to wear Class II compression stocking for 1 week only, and the contralateral leg was assigned an additional 3 weeks of Class I graduated compression stocking. Postsclerotherapy pigmentation and bruising was significantly less with the addition of 3 weeks of Class I graduated compression stockings.
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or degradation of near-real-time mine detection algorithm performance. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
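A minimal sketch of the global RX (Reed-Xiaoli) anomaly detector named above as the baseline algorithm; a real evaluation would apply it to compressed and uncompressed imagery and compare detection performance, whereas here a synthetic scene is used for illustration.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly scores for a multispectral cube (rows, cols, bands).

    Returns the Mahalanobis distance of each pixel spectrum from the
    scene mean, which is the standard RX statistic.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    cov_inv = np.linalg.pinv(cov)               # pseudo-inverse for stability
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Synthetic 4-band scene with one anomalous "target" pixel
scene = np.random.normal(0, 1, (64, 64, 4))
scene[32, 32] += 8.0
scores = rx_scores(scene)
print("max RX score at", np.unravel_index(scores.argmax(), scores.shape))
```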
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
The analysis and modelling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1991-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
The analysis and modeling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1989-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
Effect of compression pressure on inhalation grade lactose as carrier for dry powder inhalations
Raut, Neha Sureshrao; Jamaiwar, Swapnil; Umekar, Milind Janrao; Kotagale, Nandkishor Ramdas
2016-01-01
Introduction: This study focused on the potential effects of compression forces experienced during lactose (InhaLac 70, 120, and 230) storage and transport on flowability and aerosol performance in a dry powder inhaler formulation. Materials and Methods: Lactose was subjected to typical compression forces of 4, 10, and 20 N/cm2. Powder flowability and particle size distribution of un-compressed and compressed lactose were evaluated by Carr's index, Hausner's ratio, the angle of repose, and the laser diffraction method. Aerosol performance of un-compressed and compressed lactose was assessed in dispersion studies using a glass twin-stage liquid impinger at flow rates of 40-80 L/min. Results: At these compression forces, the flowability of compressed lactose was the same or slightly improved. Furthermore, compression of lactose caused a decrease in in vitro aerosol dispersion performance. Conclusion: The present study illustrates that, as carrier size increases, a concurrent decrease in drug aerosolization performance is observed. Thus, compression of the lactose fines onto the surfaces of the larger lactose particles under the applied compression pressures was hypothesized to be the cause of these observed performance variations. Simulations of storage and transport on an industrial scale can induce significant variations in formulation performance, and this could be a source of batch-to-batch variations. PMID:27014618
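A sketch of the standard Carr's index and Hausner ratio calculations used above to characterize flowability; the density values below are invented for illustration, not taken from the study.

```python
def carr_index(bulk_density, tapped_density):
    """Carr's compressibility index (%) from bulk and tapped densities."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density, tapped_density):
    """Hausner ratio from bulk and tapped densities."""
    return tapped_density / bulk_density

# Hypothetical densities (g/mL) for un-compressed vs compressed lactose
for label, bulk, tapped in [("un-compressed", 0.58, 0.72), ("compressed", 0.62, 0.74)]:
    print(f"{label:14s}: Carr's index = {carr_index(bulk, tapped):5.1f}%, "
          f"Hausner ratio = {hausner_ratio(bulk, tapped):.2f}")
```

Lower Carr's index and Hausner ratio values indicate better flowability, which is why these two metrics are paired in powder characterization.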
Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan
2018-05-01
The radiation dose for patients can be reduced with many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. To compare the radiation dose and image quality of patient-controlled compression with those of conventional and prone compression in general radiography. An experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression and was judged to be better than in the prone position.
Jäntti, H; Silfvast, T; Turpeinen, A; Kiviniemi, V; Uusaro, A
2009-04-01
An adequate chest compression rate during CPR is associated with improved haemodynamics and primary survival. To explore whether the use of a metronome would affect chest compression depth in addition to the rate, we evaluated CPR quality using a metronome in a simulated CPR scenario. Forty-four experienced intensive care unit nurses participated in two-rescuer basic life support given to manikins in 10 min scenarios. The target chest compression to ventilation ratio was 30:2, performed with bag and mask ventilation. The rescuer performing the compressions was changed every 2 min. CPR was performed first without and then with a metronome that beeped 100 times per minute. The quality of CPR was analysed with manikin software. The effect of rescuer fatigue on CPR quality was analysed separately. The mean compression rate between ventilation pauses was 137±18 compressions per minute (cpm) without and 98±2 cpm with metronome guidance (p<0.001). The mean number of chest compressions actually performed was 104±12 cpm without and 79±3 cpm with the metronome (p<0.001). The mean compression depth during the scenario was 46.9±7.7 mm without and 43.2±6.3 mm with metronome guidance (p=0.09). The total number of chest compressions performed was 1022 without metronome guidance, 42% at the correct depth, and 780 with metronome guidance, 61% at the correct depth (p=0.09 for the difference in percentage of compressions at the correct depth). Metronome guidance corrected chest compression rates for each compression cycle to within guideline recommendations, but did not affect chest compression quality or rescuer fatigue.
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...
1993-12-01
Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.
Failure of a laminated composite under tension-compression fatigue loading
NASA Technical Reports Server (NTRS)
Rotem, A.; Nelson, H. G.
1989-01-01
The fatigue behavior of composite laminates under tension-compression loading is analyzed and compared with behavior under tension-tension and compression-compression loading. It is shown that for meaningful fatigue conditions, the tension-compression case is the dominant one. Both tension and compression failure modes can occur under the reversed loading, and failure is dependent on the specific lay-up of the laminate and the difference between the tensile static strength and the absolute value of the compressive static strength. The use of a fatigue failure envelope for determining the fatigue life and mode of failure is proposed and demonstrated.
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R [Albuquerque, NM
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
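A sketch of the idea of losslessly re-coding prediction deltas: compute first-order DPCM-style differences of an image and pass them to a general-purpose coder (zlib here, standing in for the Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice coders compared above); the synthetic smooth image is illustrative.

```python
import zlib
import numpy as np

# Synthetic smooth 8-bit image standing in for reconstructed sensor imagery
x = np.linspace(0, 4 * np.pi, 512)
image = ((np.sin(x)[:, None] * np.cos(x)[None, :] + 1) * 127).astype(np.uint8)

# First-order DPCM-style deltas (uint8 wrap-around keeps them 1 byte each)
flat = image.flatten()
deltas = np.diff(flat, prepend=flat[:1])

raw_bytes = flat.tobytes()
delta_bytes = deltas.astype(np.uint8).tobytes()
print("raw    ->", len(zlib.compress(raw_bytes)), "bytes")
print("deltas ->", len(zlib.compress(delta_bytes)), "bytes")
# Deltas of smooth imagery cluster near zero, so the entropy coder
# typically compresses them better than the raw pixel values.
```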
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, existing compressive trackers have been based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.
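A sketch of the two ingredients described above: a normalized block difference, i.e. the block extension of the commonly cited NPD form (p1 - p2) / (p1 + p2), and its compression with a sparse random Gaussian measurement matrix. The block pairs, dimensions, and sparsity level are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def normalized_block_difference(patch, block_a, block_b, size=4):
    """NBD between two blocks: (mean_a - mean_b) / (mean_a + mean_b)."""
    (ya, xa), (yb, xb) = block_a, block_b
    a = patch[ya:ya + size, xa:xa + size].mean()
    b = patch[yb:yb + size, xb:xb + size].mean()
    return 0.0 if a + b == 0 else (a - b) / (a + b)

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, (32, 32)).astype(np.float64)

# High-dimensional feature: NBD over many random block pairs (illustrative)
pairs = [(tuple(rng.integers(0, 28, 2)), tuple(rng.integers(0, 28, 2)))
         for _ in range(500)]
feature = np.array([normalized_block_difference(patch, a, b) for a, b in pairs])

# Compressive measurement with a sparse random Gaussian matrix (~90% zeros)
m, n = 50, feature.size
R = rng.normal(0, 1, (m, n)) * (rng.random((m, n)) < 0.1)
compressed_feature = R @ feature
print(compressed_feature.shape)
```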
Method and apparatus for holding two separate metal pieces together for welding
NASA Technical Reports Server (NTRS)
Mcclure, S. R. (Inventor)
1980-01-01
A method of holding two separate metal pieces together for welding is described including the steps of overlapping a portion of one of the metal pieces on a portion of the other metal piece, encasing the overlapping metal piece in a compressible device, drawing the compressible device into an enclosure, and compressing a portion of the compressible device around the overlapping portions of the metal pieces for holding the metal pieces under constant and equal pressure during welding. The preferred apparatus for performing the method utilizes a support mechanism to support the two separate metal pieces in an overlapping configuration; a compressible device surrounding the support mechanism and at least one of the metal pieces, and a compressing device surrounding the compressible device for compressing the compressible device around the overlapping portions of the metal pieces, thus providing constant and equal pressure at all points on the overlapping portions of the metal pieces.
A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation
Davidovits, Seth; Fisch, Nathaniel J.
2017-03-31
The energy in turbulent flow can be amplified by compression when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. As a result, the technique used highlights the relationship between compressed turbulence and decaying turbulence.
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
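A sketch of the two figures of merit reported above, compression factor and PRD (percentage root-mean-square difference), computed with their usual definitions; the paper's exact preprocessing may differ, and the toy signal is invented for illustration.

```python
import numpy as np

def compression_factor(original_bytes, compressed_bytes):
    """Size reduction in percent, matching the 75-90% figures quoted above."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

def prd(signal, reconstructed):
    """Percentage root-mean-square difference between a signal and its reconstruction."""
    signal = np.asarray(signal, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return 100.0 * np.sqrt(np.sum((signal - reconstructed) ** 2) / np.sum(signal ** 2))

# Toy EMG-like signal and a slightly distorted reconstruction
t = np.linspace(0, 1, 2000)
emg = np.random.normal(0, 1, t.size) * np.hanning(t.size)
rec = emg + np.random.normal(0, 0.05, t.size)
print(f"PRD = {prd(emg, rec):.2f}%")
print(f"compression factor = {compression_factor(16000, 2400):.1f}%")
```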
NASA Technical Reports Server (NTRS)
Rotem, Assa
1990-01-01
Laminated composite materials tend to fail differently under tensile or compressive load. Under tension, the material accumulates cracks and fiber fractures, while under compression, the material delaminates and buckles. Tensile-compressive fatigue may cause either of these failure modes depending on the specific damage occurring in the laminate. This damage depends on the stress ratio of the fatigue loading. Analysis of the fatigue behavior of the composite laminate under tension-tension, compression-compression, and tension-compression had led to the development of a fatigue envelope presentation of the failure behavior. This envelope indicates the specific failure mode for any stress ratio and number of loading cycles. The construction of the fatigue envelope is based on the applied stress-cycles to failure (S-N) curves of both tensile-tensile and compressive-compressive fatigue. Test results are presented to verify the theoretical analysis.
Data compression for near Earth and deep space to Earth transmission
NASA Technical Reports Server (NTRS)
Erickson, Daniel E.
1991-01-01
Key issues of data compression for near Earth and deep space to Earth transmission discussion group are briefly presented. Specific recommendations as made by the group are as follows: (1) since data compression is a cost effective way to improve communications and storage capacity, NASA should use lossless data compression wherever possible; (2) NASA should conduct experiments and studies on the value and effectiveness of lossy data compression; (3) NASA should develop and select approaches to high ratio compression of operational data such as voice and video; (4) NASA should develop data compression integrated circuits for a few key approaches identified in the preceding recommendation; (5) NASA should examine new data compression approaches such as combining source and channel encoding, where high payoff gaps are identified in currently available schemes; and (6) users and developers of data compression technologies should be in closer communication within NASA and with academia, industry, and other government agencies.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, for its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared with lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI) and JPEG-LS near-lossless compression to the image blocks contained entirely in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
Zhou, Haibo; Shi, Jianmin; Zhang, Chao; Li, Pei
2018-02-28
Mechanical compression often induces degenerative changes of disc nucleus pulposus (NP) tissue. It has been indicated that N-cadherin (N-CDH)-mediated signaling helps to preserve the NP cell phenotype. However, N-CDH expression and the resulting NP-specific phenotype alteration under the static compression and dynamic compression remain unclear. To study the effects of static compression and dynamic compression on N-CDH expression and NP-specific phenotype in an in vitro disc organ culture. Porcine discs were organ cultured in a self-developed mechanically active bioreactor for 7 days and subjected to static or dynamic compression (0.4 MPa for 2 h once per day). The noncompressed discs were used as controls. Compared with the dynamic compression, static compression significantly down-regulated the expression of N-CDH and NP-specific markers (laminin, brachyury, and keratin 19); decreased the Alcian Blue staining intensity, glycosaminoglycan and hydroxyproline contents; and declined the matrix macromolecule (aggrecan and collagen II) expression. Compared with the dynamic compression, static compression causes N-CDH down-regulation, loss of NP-specific phenotype, and the resulting decrease in NP matrix synthesis. © 2018 The Author(s).
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
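A sketch of the quantize-with-dither idea described above: scale the floating-point pixels by an estimate of the noise, add a reproducible dither before rounding, and subtract the same dither on restoration (the resulting integers would then be Rice-compressed). The scale rule and dither distribution here are illustrative assumptions, not the exact parameters of the FITS convention.

```python
import numpy as np

def quantize_with_dither(data, q=4.0, seed=42):
    """Quantize float pixels to scaled integers with subtractive dithering.

    `q` sets the quantization step to (noise sigma) / q -- an illustrative
    rule; the dither is reproducible from `seed` so it can be subtracted
    again on restoration.
    """
    sigma = np.std(data)                       # crude noise estimate
    scale = sigma / q
    rng = np.random.default_rng(seed)
    dither = rng.random(data.shape) - 0.5      # uniform in [-0.5, 0.5)
    quantized = np.round(data / scale - dither).astype(np.int32)
    return quantized, scale, seed

def restore(quantized, scale, seed):
    rng = np.random.default_rng(seed)
    dither = rng.random(quantized.shape) - 0.5
    return (quantized + dither) * scale

image = np.random.normal(1000.0, 5.0, (256, 256)).astype(np.float32)
q, scale, seed = quantize_with_dither(image)
rec = restore(q, scale, seed)
print("max abs error:", np.max(np.abs(rec - image)), "vs quantization step", scale)
```

The maximum error stays within half a quantization step, while the dither decorrelates the rounding errors from the signal, which is what preserves photometric precision in the averaged sense.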
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee that compression errors stay within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
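A sketch of the XOR-leading-zero quantity the optimization above maximizes: two nearby float64 values share high-order bits, so XORing their bit patterns yields leading zeros that need not be stored. The values and offset below are illustrative only.

```python
import struct

def xor_leading_zeros(a, b):
    """Number of leading zero bits in the XOR of two float64 bit patterns."""
    ia = struct.unpack("<Q", struct.pack("<d", a))[0]
    ib = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

prev, curr = 1.0023451, 1.0023449          # two consecutive "unpredictable" values
print("leading zeros:", xor_leading_zeros(prev, curr))

# Shifting both values by a common offset can change how many high-order
# bits agree, which is the quantity the shifting-offset optimization
# described above tries to maximize.
offset = 0.5
print("with offset  :", xor_leading_zeros(prev + offset, curr + offset))
```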
Lv, Peng; Wang, Yaru; Ji, Chenglong; Yuan, Jiajiao
2017-01-01
Ultra-compressible electrodes with high electrochemical performance, reversible compressibility and extreme durability are in high demand in compression-tolerant energy storage devices. Herein, an ultra-compressible ternary composite was synthesized by successively electrodepositing poly(3,4-ethylenedioxythiophene) (PEDOT) and MnO2 into a superelastic graphene aerogel (SEGA). In the SEGA/PEDOT/MnO2 ternary composite, SEGA provides the compressible backbone and conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; and the intermediate PEDOT not only reduces the interface resistance between MnO2 and graphene, but also further reinforces the strength of the graphene cell walls. The synergistic effect of the three components in the ternary composite electrode leads to high electrochemical performance and good compression tolerance. The gravimetric capacitance of the compressible ternary composite electrodes reaches 343 F g−1 and retains 97% of this value even at 95% compressive strain. A volumetric capacitance of 147.4 F cm−3 is achieved, which is much higher than that of other graphene-based compressible electrodes. 80% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression strains, indicating extreme durability.
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
Effect of Compression Garments on Physiological Responses After Uphill Running.
Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc
2018-03-01
Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% incline, at an intensity of 75% VO2max, wearing low-grade compression garments, medium-grade compression garments, or high reverse-grade compression garments. In all trials, compression garments were worn for 4 hours post-run. Creatine kinase, muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium-grade compression garments, with the difference seen between the medium-grade and the high reverse-grade compression garments. A positive trend toward increasing peak torque of plantar flexion (60°·s-1, 120°·s-1) was found in the medium-grade compression garments, with a difference between 24 and 48 hours post-run. The largest shift in pain tolerance of the gastrocnemius muscle occurred with the medium-grade compression garments 24 hours post-run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreased muscle soreness within 24 hours post-exercise was apparent with medium-grade compression garments.
Marouane, H; Shirazi-Adl, A; Adouni, M
2015-01-01
Knee joints are subject to large compression forces in daily activities. Due to artefact moments and instability under large compression loads, biomechanical studies impose additional constraints to circumvent the compression position-dependency in response. To quantify the effect of compression on passive knee moment resistance and stiffness, two validated finite element models of the tibiofemoral (TF) joint, one refined with depth-dependent fibril-reinforced cartilage and the other less refined with homogeneous isotropic cartilage, are used. The unconstrained TF joint response in sagittal and frontal planes is investigated at different flexion angles (0°, 15°, 30° and 45°) up to 1800 N compression preloads. The compression is applied at a novel joint mechanical balance point (MBP) identified as a point at which the compression does not cause any coupled rotations in sagittal and frontal planes. The MBP of the unconstrained joint is located at the lateral plateau in small compressions and shifts medially towards the inter-compartmental area at larger compression forces. The compression force substantially increases the joint moment-bearing capacities and instantaneous angular rigidities in both frontal and sagittal planes. The varus-valgus laxities diminish with compression preloads despite concomitant substantial reductions in collateral ligament forces. While the angular rigidity would enhance the joint stability, the augmented passive moment resistance under compression preloads plays a role in supporting external moments and should as such be considered in the knee joint musculoskeletal models.
Tibiotalocalcaneal Arthrodesis Nails: A Comparison of Nails With and Without Internal Compression.
Taylor, James; Lucas, Douglas E; Riley, Aimee; Simpson, G Alex; Philbin, Terrence M
2016-03-01
Hindfoot arthrodesis with tibiotalocalcaneal (TTC) intramedullary nails is used commonly when treating ankle and subtalar arthritis and other hindfoot pathology. Adequate compression is paramount to avoid nonunion and fatigue fracture of the hardware. Arthrodesis systems with internal compression have demonstrated superior compression to systems relying on external methods. This study examined the speed of union with TTC fusion nails with internal compression over nails without internal compression. A retrospective review was performed identifying nail type and time to union of the subtalar joint (STJ) and tibiotalar joint (TTJ). A total of 198 patients were included from 2003 to 2011. The median time to STJ fusion without internal compression was 104 days compared to 92 days with internal compression (P = .044). The median time to TTJ fusion without internal compression was 111 days compared to 93 days with internal compression (P = .010). Adjusting for diabetes, there was no significant difference in fusion speed with or without internal compression for the STJ (P = .561) or TTJ (P = .358). Nonunion rates were 24.5% for the STJ and 17.0% for the TTJ with internal compression, and 43.4% for the STJ and 42.1% for the TTJ without internal compression. This difference remained statistically significant after adjusting for diabetes for the TTJ (P = .001) but not for the STJ (P = .194). The intramedullary hindfoot arthrodesis nail was a viable treatment option in degenerative joint disease of the TTC joint. There appeared to be an advantage using systems with internal compression; however, there was no statistically significant difference after controlling for diabetes. Level III, retrospective comparative series. © The Author(s) 2015.
Cardiopulmonary resuscitation duty cycle in out-of-hospital cardiac arrest.
Johnson, Bryce V; Johnson, Bryce; Coult, Jason; Fahrenbruch, Carol; Blackwood, Jennifer; Sherman, Larry; Kudenchuk, Peter; Sayre, Michael; Rea, Thomas
2015-02-01
Duty cycle is the portion of time spent in compression relative to total time of the compression-decompression cycle. Guidelines recommend a 50% duty cycle based largely on animal investigation. We undertook a descriptive evaluation of duty cycle in human resuscitation, and whether duty cycle correlates with other CPR measures. We calculated the duty cycle, compression depth, and compression rate during EMS resuscitation of 164 patients with out-of-hospital ventricular fibrillation cardiac arrest. We captured force recordings from a chest accelerometer to measure ten-second CPR epochs that preceded rhythm analysis. Duty cycle was calculated using two methods. Effective compression time (ECT) is the time from beginning to end of compression divided by total period for that compression-decompression cycle. Area duty cycle (ADC) is the ratio of area under the force curve divided by total area of one compression-decompression cycle. We evaluated the compression depth and compression rate according to duty cycle quartiles. There were 369 ten-second epochs among 164 patients. The median duty cycle was 38.8% (SD=5.5%) using ECT and 32.2% (SD=4.3%) using ADC. A relatively shorter compression phase (lower duty cycle) was associated with greater compression depth (test for trend <0.05 for ECT and ADC) and slower compression rate (test for trend <0.05 for ADC). Sixty-one of 164 patients (37%) survived to hospital discharge. Duty cycle was below the 50% recommended guideline, and was associated with compression depth and rate. These findings provide rationale to incorporate duty cycle into research aimed at understanding optimal CPR metrics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
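For readers unfamiliar with the two definitions, the sketch below (Python/NumPy) computes ECT- and ADC-style duty cycles from a sampled force trace for a single compression-decompression cycle. The onset threshold and the synthetic waveform are illustrative assumptions, not the study's processing pipeline.

```python
import numpy as np

def duty_cycles(force, onset_frac=0.1):
    """ECT-style: fraction of the cycle spent above a small onset threshold.
    ADC-style: area under the force curve / (peak force x cycle duration)."""
    threshold = onset_frac * force.max()          # illustrative onset definition
    ect = float((force > threshold).mean())
    adc = float(force.mean() / force.max())
    return ect, adc

fs = 250.0                                        # Hz, assumed sampling rate
t = np.arange(0.0, 0.6, 1.0 / fs)                 # one cycle at ~100 compressions/min
force = np.clip(np.sin(np.pi * t / 0.35), 0.0, None) * 400.0  # push for ~0.35 s, then release
print(duty_cycles(force))                         # ADC comes out lower than ECT, as in the study
```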
Park, Sang-Sub
2014-01-01
The purpose of this study was to compare the quality of chest compressions between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins, and the smartphone group used an application running on the Android and iOS operating systems of two smartphone products (G, i). Measurements were conducted on September 25-26, 2012, and data were analyzed with the SPSS WIN 12.0 program. Compression depth was more appropriate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of correctly performed chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). Awareness of chest compression accuracy was higher (p < 0.001) in the traditional group (3.83 points) than in the smartphone group (2.32 points). In an additional question asked only of the smartphone group, the main reasons given against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).
Energy recovery during expansion of compressed gas using power plant low-quality heat sources
Ochs, Thomas L [Albany, OR]; O'Connor, William K [Lebanon, OR]
2006-03-07
A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.
A higher chest compression rate may be necessary for metronome-guided cardiopulmonary resuscitation.
Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol
2012-01-01
Metronome guidance is a simple and economical feedback system for guiding cardiopulmonary resuscitation (CPR). However, a recent study showed that metronome guidance reduced the depth of chest compression. The results of previous studies suggest that a higher chest compression rate is associated with a better CPR outcome than a lower chest compression rate, irrespective of metronome use. Based on this finding, we hypothesized that the lower chest compression rate, rather than metronome use itself, promoted the reduction in chest compression depth observed in the recent study. One minute of chest compression-only CPR was performed following a metronome sound played at 1 of 4 different rates: 80, 100, 120, and 140 ticks/min. Average compression depths (ACDs) and duty cycles were compared using repeated measures analysis of variance, and the values in the absence and presence of metronome guidance were compared. Both the ACD and duty cycle increased when the metronome rate increased (P = .017, <.001). Average compression depths for the CPR procedures following the metronome rates of 80 and 100 ticks/min were significantly lower than those for the procedures without metronome guidance. The ACD and duty cycle for chest compression increase as the metronome rate increases during metronome-guided CPR. A higher rate of chest compression is necessary for metronome-guided CPR to prevent suboptimal quality of chest compression. Copyright © 2012 Elsevier Inc. All rights reserved.
Compression of high-density EMG signals for trapezius and gastrocnemius muscles.
Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto
2014-03-10
New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. In conclusion, the computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
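A rough sketch of the lossless path described above, assuming NumPy and using zlib as a generic stand-in for the image codec evaluated in the paper: channels from the electrode matrix are laid out as image rows in either electrode order, and first differences in time are taken before compression.

```python
import numpy as np, zlib

def lossless_sizes(emg, order='parallel'):
    """emg: (rows, cols, samples) from a 2-D electrode matrix, already digitized.
    Returns compressed byte counts without and with first differences in time;
    zlib is only a stand-in for the image codec used in the paper."""
    if order == 'parallel':
        img = emg.reshape(-1, emg.shape[2])            # rows follow the muscle axis (assumption)
    else:
        img = emg.transpose(1, 0, 2).reshape(-1, emg.shape[2])
    img = img.astype(np.int16)
    diff = np.diff(img, axis=1, prepend=img[:, :1])    # temporal first differences
    return len(zlib.compress(img.tobytes(), 9)), len(zlib.compress(diff.tobytes(), 9))

emg = np.cumsum(np.random.randint(-3, 4, size=(8, 16, 2048)), axis=2)  # toy correlated signal
print(lossless_sizes(emg, 'parallel'), lossless_sizes(emg, 'perpendicular'))
```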
Some Results Relevant to Statistical Closures for Compressible Turbulence
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
For weakly compressible turbulent fluctuations there exists a small parameter, the square of the fluctuating Mach number, that allows an investigation using a perturbative treatment. The consequences of such a perturbative analysis in three different subject areas are described: 1) initial conditions in direct numerical simulations, 2) an explanation for the oscillations seen in the compressible pressure in the direct numerical simulations of homogeneous shear, and 3) turbulence closures accounting for the compressibility of velocity fluctuations. Initial conditions consistent with small turbulent Mach number asymptotics are constructed. The importance of consistent initial conditions in the direct numerical simulation of compressible turbulence is dramatically illustrated: spurious oscillations associated with inconsistent initial conditions are avoided, and the fluctuating dilatational field is some two orders of magnitude smaller for a compressible isotropic turbulence. For the isotropic decay it is shown that the choice of initial conditions can change the scaling law for the compressible dissipation. A two-time expansion of the Navier-Stokes equations is used to distinguish compressible acoustic and compressible advective modes. A simple conceptual model for weakly compressible turbulence, a forced linear oscillator, is described. It is shown that the evolution equations for the compressible portions of turbulence can be understood as a forced wave equation with refraction. Acoustic modes of the flow can be amplified by refraction and are able to manifest themselves in large fluctuations of the compressible pressure.
Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.
ERIC Educational Resources Information Center
Sticht, Thomas G.
The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 57.13020 - Use of compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...
30 CFR 57.13015 - Inspection of compressed-air receivers and other unfired pressure vessels.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inspection of compressed-air receivers and...-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13015 Inspection of compressed-air receivers and other unfired pressure vessels. (a) Compressed-air receivers and other unfired pressure...
30 CFR 56.13020 - Use of compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...
30 CFR 56.13015 - Inspection of compressed-air receivers and other unfired pressure vessels.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inspection of compressed-air receivers and... METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13015 Inspection of compressed-air receivers and other unfired pressure vessels. (a) Compressed-air receivers and other unfired pressure vessels...
Alternative Fuels Data Center: Animation of a Hydrogen Fueling Station
Turbulence in Compressible Flows
NASA Technical Reports Server (NTRS)
1997-01-01
Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten
2012-11-19
We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.
Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com; Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203; Kishi, Naoki
2016-08-15
Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment applies compression pressure at an elevated temperature. An optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance than conventional cells. The aptness of this method was confirmed by investigating scanning electron microscopy images, X-ray diffraction, current-voltage and electrochemical impedance spectroscopy analyses of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature compressed cell.
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures the essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
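The following sketch conveys the flavor of similarity-based block replacement under an exchangeability-style criterion. It uses a two-sample Kolmogorov-Smirnov test from SciPy as the similarity measure, which is an assumption for illustration rather than IDEALEM's exact test, and the block size and significance level are likewise hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def compress_blocks(signal, block=64, alpha=0.05):
    """Replace a block by the index of an earlier, statistically similar block.
    Similarity here is a two-sample KS test; a generic stand-in only."""
    dictionary, out = [], []
    for i in range(0, len(signal) - block + 1, block):
        b = signal[i:i + block]
        match = next((j for j, d in enumerate(dictionary)
                      if ks_2samp(b, d).pvalue > alpha), None)
        if match is None:
            dictionary.append(b)
            out.append(('new', len(dictionary) - 1))
        else:
            out.append(('ref', match))                 # stored as a short reference
    return out, dictionary

x = np.concatenate([np.random.normal(0, 1, 256), np.random.normal(5, 1, 256)])
codes, dic = compress_blocks(x)
print(len(codes), len(dic))   # many blocks collapse onto a few dictionary entries
```

Blocks whose samples look like reshufflings of an earlier block cost only a reference, which is the kind of reuse that lets ratios far exceed 100 on repetitive monitoring data.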
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. For archiving, however, the images may be compressed to about 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
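As a generic illustration of DCT-based coding of local brightness variations (not the specific CCITT JPEG variant described above), the sketch below quantizes 8x8 DCT blocks with a single flat step using SciPy; a real codec would use a perceptually weighted table plus entropy coding, and could adapt the step per block to suppress blocking in high-contrast areas.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_roundtrip(img, q=16.0):
    """Quantize 8x8 DCT blocks with one flat step q (a crude stand-in for a
    JPEG-style quantization table); returns the lossy reconstruction."""
    h, w = (s - s % 8 for s in img.shape)
    out = np.empty((h, w))
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            coef = dctn(img[r:r + 8, c:c + 8], norm='ortho')
            out[r:r + 8, c:c + 8] = idctn(np.round(coef / q) * q, norm='ortho')
    return out

img = np.random.normal(128.0, 20.0, size=(64, 64))   # synthetic brightness data
rec = block_dct_roundtrip(img)
print(float(np.abs(rec - img).mean()))                # mean reconstruction error per pixel
```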
Compressive Properties and Anti-Erosion Characteristics of Foam Concrete in Road Engineering
NASA Astrophysics Data System (ADS)
Li, Jinzhu; Huang, Hongxiang; Wang, Wenjun; Ding, Yifan
2018-01-01
To analyse the compression properties and anti-erosion characteristics of foam concrete, one-dimensional compression tests were carried out on ring specimens of foam concrete, and unconfined compression tests were carried out on foam concrete specimens cured under different conditions. The results of the one-dimensional compression tests show that the compression curve of foam concrete has two critical points and three stages, which differs significantly from ordinary geotechnical materials such as soil. Based on the compression curve, the compression modulus of each stage was determined. The results of the erosion tests show that sea water has a slight influence on the long-term strength of foam concrete, while sulphate solution has a significant influence on the long-term strength of foam concrete, which warrants more attention.
Advances in high throughput DNA sequence data compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz
2016-06-01
Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
Niles, Dana E; Duval-Arnould, Jordan; Skellett, Sophie; Knight, Lynda; Su, Felice; Raymond, Tia T; Sweberg, Todd; Sen, Anita I; Atkins, Dianne L; Friess, Stuart H; de Caen, Allan R; Kurosawa, Hiroshi; Sutton, Robert M; Wolfe, Heather; Berg, Robert A; Silver, Annemarie; Hunt, Elizabeth A; Nadkarni, Vinay M
2018-05-01
Pediatric in-hospital cardiac arrest cardiopulmonary resuscitation quality metrics have been reported in few children less than 8 years. Our objective was to characterize chest compression fraction, rate, depth, and compliance with 2015 American Heart Association guidelines across multiple pediatric hospitals. Retrospective observational study of data from a multicenter resuscitation quality collaborative from October 2015 to April 2017. Twelve pediatric hospitals across the United States, Canada, and Europe. In-hospital cardiac arrest patients (age < 18 yr) with quantitative cardiopulmonary resuscitation data recordings. None. There were 112 events yielding 2,046 evaluable 60-second epochs of cardiopulmonary resuscitation (196,669 chest compressions). Event cardiopulmonary resuscitation metric summaries (median [interquartile range]) by age: less than 1 year (38/112): chest compression fraction 0.88 (0.61-0.98), chest compression rate 119/min (110-129), and chest compression depth 2.3 cm (1.9-3.0 cm); for 1 to less than 8 years (42/112): chest compression fraction 0.94 (0.79-1.00), chest compression rate 117/min (110-124), and chest compression depth 3.8 cm (2.9-4.6 cm); for 8 to less than 18 years (32/112): chest compression fraction 0.94 (0.85-1.00), chest compression rate 117/min (110-123), chest compression depth 5.5 cm (4.0-6.5 cm). "Compliance" with guideline targets for 60-second chest compression "epochs" was predefined: chest compression fraction greater than 0.80, chest compression rate 100-120/min, and chest compression depth: greater than or equal to 3.4 cm in less than 1 year, greater than or equal to 4.4 cm in 1 to less than 8 years, and 4.5 to less than 6.6 cm in 8 to less than 18 years. Proportion of less than 1 year, 1 to less than 8 years, and 8 to less than 18 years events with greater than or equal to 60% of 60-second epochs meeting compliance (respectively): chest compression fraction was 53%, 81%, and 78%; chest compression rate was 32%, 50%, and 63%; chest compression depth was 13%, 19%, and 44%. For all events combined, total compliance (meeting all three guideline targets) was 10% (11/112). Across an international pediatric resuscitation collaborative, we characterized the landscape of pediatric in-hospital cardiac arrest chest compression quality metrics and found that they often do not meet 2015 American Heart Association guidelines. Guideline compliance for rate and depth in children less than 18 years is poor, with the greatest difficulty in achieving chest compression depth targets in younger children.
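The guideline targets quoted in the abstract translate directly into a compliance check per 60-second epoch; the small Python sketch below encodes them, with the age bands and thresholds taken from the text above.

```python
def epoch_compliant(age_years, ccf, rate, depth_cm):
    """Check one 60-second epoch against the 2015 AHA targets quoted in the abstract:
    chest compression fraction > 0.80, rate 100-120/min, and an age-dependent depth."""
    if age_years < 1:
        depth_ok = depth_cm >= 3.4
    elif age_years < 8:
        depth_ok = depth_cm >= 4.4
    else:
        depth_ok = 4.5 <= depth_cm < 6.6
    return ccf > 0.80 and 100 <= rate <= 120 and depth_ok

print(epoch_compliant(0.5, 0.88, 119, 2.3))   # infant medians from the abstract -> False (depth)
print(epoch_compliant(10, 0.94, 117, 5.5))    # 8-18 yr medians from the abstract -> True
```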
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
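A minimal sketch of the wavelet stage, assuming the PyWavelets package: a 2-D DWT of a count sky map followed by a crude uniform quantizer stands in for the DWT/BPE chain (the bit-plane encoder itself is not reproduced), and the wavelet family and quantization step are illustrative choices.

```python
import numpy as np
import pywt

def dwt_quantize(counts, level=3, step=4.0):
    """2-D wavelet transform of a count map, crude coefficient quantization,
    then reconstruction; a stand-in for the flight DWT/BPE algorithm."""
    coeffs = pywt.wavedec2(counts.astype(float), 'haar', level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step) * step                    # lossy stage
    rec = pywt.waverec2(pywt.array_to_coeffs(q, slices, output_format='wavedec2'), 'haar')
    return rec[:counts.shape[0], :counts.shape[1]]

counts = np.random.poisson(30, size=(32, 64))          # synthetic spectrometer-like sky map
rec = dwt_quantize(counts)
print(float(np.abs(rec - counts).max()))               # worst-case count error after compression
```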
NASA Technical Reports Server (NTRS)
Heier, W. C. (Inventor)
1974-01-01
A method is described for compression molding of thermosetting plastics composition. Heat is applied to the compressed load in a mold cavity and adjusted to hold molding temperature at the interface of the cavity surface and the compressed compound to produce a thermal front. This thermal front advances into the evacuated compound at mean right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.
Comparison of two SVD-based color image compression schemes.
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR.
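A compact sketch of the real-SVD route, assuming NumPy; how the three channels are stacked into the real matrix C is an assumption (side by side here), since the abstract only states that C is formed from the red, green, and blue components.

```python
import numpy as np

def real_svd_compress(rgb, k):
    """Stack R, G, B side by side into one real matrix, keep k singular values,
    and rebuild the color image from the rank-k approximation."""
    C = np.hstack([rgb[..., 0], rgb[..., 1], rgb[..., 2]]).astype(float)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Ck = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return np.stack(np.hsplit(Ck, 3), axis=-1).clip(0, 255)

rgb = np.random.randint(0, 256, size=(128, 128, 3))
approx = real_svd_compress(rgb, k=20)
print(approx.shape)                                   # (128, 128, 3)
```

Keeping k singular values stores roughly k(h + 3w + 1) numbers instead of 3hw, which is where the compression ratio comes from.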
Murgier, J; Cassard, X
2014-05-01
Cryotherapy is a useful adjunctive analgesic measure in patients with postoperative pain following anterior cruciate ligament (ACL) surgery. Either static permanent compression or dynamic intermittent compression can be added to increase the analgesic effect of cryotherapy. Our objective was to compare the efficacy of these two compression modalities combined with cryotherapy in relieving postoperative pain and restoring range of knee motion after ligament reconstruction surgery. When combined with cryotherapy, a dynamic and intermittent compression is associated with decreased analgesic drug requirements, less postoperative pain, and better range of knee motion compared to static compression. We conducted a case-control study of consecutive patients who underwent anterior cruciate ligament reconstruction at a single institution over a 3-month period. Both groups received the same analgesic drug protocol. One group was managed with cryotherapy and dynamic intermittent compression (Game Ready®) and the other with cryotherapy and static compression (IceBand®). Of 39 patients, 20 received dynamic and 19 static compression. In the post-anaesthesia recovery unit, the mean visual analogue scale (VAS) pain score was 2.4 (range, 0-6) with dynamic compression and 2.7 (0-7) with static compression (P=0.3); corresponding values were 1.85 (0-9) vs. 3 (0-8) (P=0.16) after 6 hours and 0.6 (0-3) vs. 1.14 (0-3) (P=0.12) at discharge. The cumulative mean tramadol dose per patient was 57.5 mg (0-200 mg) with dynamic compression and 128.6 mg (0-250 mg) with static compression (P=0.023); corresponding values for morphine were 0 mg vs. 1.14 mg (0-8 mg) (P<0.05). Mean range of knee flexion at discharge was 90.5° (80°-100°) with dynamic compression and 84.5° (75°-90°) with static compression (P=0.0015). Dynamic intermittent compression combined with cryotherapy decreases analgesic drug requirements after ACL reconstruction and improves the postoperative recovery of range of knee motion. Level III, case-control study. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Bunch length compression method for free electron lasers to avoid parasitic compressions
Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy
2015-05-26
A method of bunch length compression for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.
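For orientation, the first-order relation behind chirp-plus-R56 compression is sketched below in LaTeX; this is standard accelerator-physics bookkeeping rather than material from the patent, and chirp sign conventions vary between communities.

```latex
% Linear (first-order) bunch compression: a correlated energy chirp h along the
% bunch combines with the momentum compaction R56 to change the rms bunch length.
\[
  \sigma_{z,\mathrm{f}} \;\approx\; \bigl|\,1 + h\,R_{56}\,\bigr|\,\sigma_{z,\mathrm{i}},
  \qquad h \equiv \frac{\mathrm{d}\delta}{\mathrm{d}z}.
\]
% With R56 > 0, compression (|1 + h R56| < 1) requires h < 0, i.e. the chirp
% imparted by accelerating on the falling portion of the RF waveform under this
% sign convention.
```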
Compression fractures of the back
Treatment can include surgery such as balloon kyphoplasty, vertebroplasty, or spinal fusion; other surgery may be done to remove bone. Alternative names: vertebral compression fractures; osteoporosis - compression fracture.
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, methods for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
Compressed NMR: Combining compressive sampling and pure shift NMR techniques.
Aguilar, Juan A; Kenwright, Alan M
2017-12-26
Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling as well as the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods will be shown. Copyright © 2017 John Wiley & Sons, Ltd.
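To make the compressive-sampling side concrete, here is a generic sparse-recovery sketch (iterative soft thresholding with NumPy). The sampling operator, regularization weight, and iteration count are illustrative, and real compressed NMR processing reconstructs the spectrum with solvers tuned to the actual acquisition scheme.

```python
import numpy as np

def ista(y, A, lam=0.05, iters=300):
    """Recover a sparse spectrum x from undersampled measurements y = A @ x
    by iterative soft thresholding; a generic CS solver, not the paper's."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L             # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

n, m = 256, 64                                    # spectrum length, number of samples
x_true = np.zeros(n); x_true[[20, 75, 180]] = [1.0, 0.6, 0.8]   # three "singlets"
A = np.random.randn(m, n) / np.sqrt(m)            # random sub-Nyquist sampling operator
x_hat = ista(A @ x_true, A)
print(sorted(np.argsort(np.abs(x_hat))[-3:]))     # largest recovered peaks, ideally [20, 75, 180]
```

The pure shift side of the combination matters because collapsing multiplets into singlets makes the spectrum sparser, which is exactly what such solvers rely on.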
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, based on an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into a previously unexplored region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
Combined Industry, Space and Earth Science Data Compression Workshop
NASA Technical Reports Server (NTRS)
Kiely, Aaron B. (Editor); Renner, Robert L. (Editor)
1996-01-01
The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
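A toy version of a staircase threshold-estimation loop is sketched below (a simple 1-up/1-down rule over a list of candidate compression ratios, averaging the levels at reversal points). The actual staircase algorithm used in the study may differ in its step rule, observer task, and stopping criterion.

```python
def staircase(respond, levels, start=0, reversals_needed=8):
    """1-up/1-down staircase over an ordered list of compression ratios.
    `respond(level)` returns True if the observer detects a difference."""
    i, direction, reversals, track = start, +1, 0, []
    while reversals < reversals_needed:
        seen = respond(levels[i])
        new_dir = -1 if seen else +1           # back off when the difference is visible
        if new_dir != direction:
            reversals += 1
            track.append(levels[i])            # record the level at each reversal
            direction = new_dir
        i = min(max(i + new_dir, 0), len(levels) - 1)
    return sum(track) / len(track)             # threshold estimate

# toy observer that only notices artefacts above 6x compression
print(staircase(lambda r: r > 6, levels=[2, 3, 4, 5, 6, 7, 8, 10]))
```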
Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael
2013-08-01
Participants were sixty-nine certified CPR providers recruited from European and Advanced Paediatric Life Support training courses. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data were recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the target of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistical improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario, this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improve clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Lee, Seong Hwa; Ryu, Ji Ho; Min, Mun Ki; Kim, Yong In; Park, Maeng Real; Yeom, Seok Ran; Han, Sang Kyoon; Park, Seong Wook
2016-08-01
When performing cardiopulmonary resuscitation (CPR), the 2010 American Heart Association guidelines recommend a chest compression rate of at least 100/min, whereas the 2010 European Resuscitation Council guidelines recommend a rate of between 100 and 120/min. The aim of this study was to examine the rate of chest compression that fulfilled various quality indicators, thereby determining the optimal rate of compression. Thirty-two trainee emergency medical technicians and six paramedics were enrolled in this study. All participants had been trained in basic life support. Each participant performed 2 min of continuous compressions on a skill reporter manikin, while listening to a metronome sound at rates of 100, 120, 140, and 160 beats/min, in a random order. Mean compression depth, incomplete chest recoil, and the proportion of correctly performed chest compressions during the 2 min were measured and recorded. The rate of incomplete chest recoil was lower at compression rates of 100 and 120/min compared with that at 160/min (P=0.001). The number of compressions that fulfilled the criteria for high-quality CPR at a rate of 120/min was significantly higher than that at 100/min (P=0.016). The number of high-quality CPR compressions was the highest at a compression rate of 120/min, and increased incomplete recoil occurred with increasing compression rate. However, further studies are needed to confirm the results.
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.
Zheng, Yu; Yang, Yang; Chen, Wu
2017-06-25
In this paper, a novel range compression algorithm for enhancing range resolutions of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, firstly range compression is carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results for suppressing side lobes to obtain a final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
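A simplified NumPy sketch of the two-step range compression described above: correlation of the down-converted reflected IF signal with the synchronized direct base-band code, followed by a simple inverse-filter-style spectrum equalizer. The sampling rate, IF, stand-in code, and the regularized equalizer are all illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np

def range_compress(reflected_if, direct_bb, fs, f_if):
    """Down-convert the reflected IF signal, correlate with the direct code
    (matched filter), then apply a crude spectrum equalizer."""
    n = np.arange(len(reflected_if))
    reflected_bb = reflected_if * np.exp(-2j * np.pi * f_if * n / fs)
    R = np.fft.fft(reflected_bb)
    D = np.fft.fft(direct_bb, n=len(reflected_bb))
    compressed = np.fft.ifft(R * np.conj(D))           # plain cross-correlation
    equalized = np.fft.ifft(R * np.conj(D) / (np.abs(D) ** 2 + 1e-3))  # flatten spectrum
    return compressed, equalized

fs, f_if = 16.368e6, 4.092e6                            # assumed front-end rates
code = np.sign(np.random.randn(4096))                   # stand-in for a PRN base-band code
delay = 37
refl = np.roll(code, delay) * np.exp(2j * np.pi * f_if * np.arange(4096) / fs)
comp, eq = range_compress(refl, code, fs, f_if)
print(int(np.argmax(np.abs(eq))))                       # ~37, the range (delay) bin
```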
Micromechanics of composite laminate compression failure
NASA Technical Reports Server (NTRS)
Guynn, E. Gail; Bradley, Walter L.
1986-01-01
The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
2013-04-01
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
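The ILD argument can be made concrete with a toy static compressor (Python/NumPy); the threshold and compression ratio are illustrative, and real hearing-aid compressors add attack/release dynamics and frequency channels that this sketch ignores.

```python
import numpy as np

def wdrc_gain_db(level_db, threshold=50.0, ratio=3.0):
    """Static wide-dynamic-range compression: above threshold the output level
    grows at 1/ratio of the input rate. Threshold and ratio are illustrative."""
    over = np.maximum(level_db - threshold, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def per_ear_gains(left_db, right_db, linked=True):
    """Linked mode drives both ears from the louder one, preserving the ILD."""
    if linked:
        g = wdrc_gain_db(max(left_db, right_db))
        return g, g
    return wdrc_gain_db(left_db), wdrc_gain_db(right_db)

left, right = 70.0, 60.0                      # a lateral source: 10 dB interaural level difference
for linked in (False, True):
    gl, gr = per_ear_gains(left, right, linked)
    print('linked' if linked else 'independent',
          round((left + gl) - (right + gr), 2))   # independent: ~3.33 dB; linked: 10 dB preserved
```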
Effect of Impact Compression on the Age-Hardening of Rapidly Solidified Al-Zn-Mg Base Alloys
NASA Astrophysics Data System (ADS)
Horikawa, Keitaro; Kobayashi, Hidetoshi
The effect of impact compression on the age-hardening behavior and mechanical properties of Mesoalite aluminum alloy was examined by means of high-velocity plane collisions between a projectile and the Mesoalite target using a single-stage powder gun. When impact compression was applied to the Meso10 and Meso20 alloys in the as-quenched state after solution heat treatment, the subsequent age-hardening at 110 °C was greatly enhanced compared with Mesoalite that had not undergone impact compression. XRD results revealed that high plastic strain was introduced inside the specimens by the impact compression. Compression tests also clarified that Meso10 and Meso20 alloy specimens subjected to impact compressive stresses of more than 5 GPa showed higher yield stresses after peak-aging at 110 °C than the alloys without impact compression. Meso10 and Meso20 specimens that were solution heat treated, impact compressed at high velocity (12 GPa), and then peak-aged exhibited the highest compressive yield stresses, 994 MPa in Meso10 and 1091 MPa in Meso20.
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much known or newly developed methods reduced the storage required by the compressed forms of the original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed by less than 5%, which indicates reliable results.
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10⁻² Å), we can compress a trajectory file with 1-2 fs time steps to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
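A toy version of the linear interframe predictor with quantized residuals might look like the sketch below; the extrapolation order, quantization step, and synthetic test trajectory are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np

def encode_trajectory(frames, step=0.001):
    """Linear interframe predictor: predict frame t as 2*x[t-1] - x[t-2] (using the
    reconstructed frames, as a decoder would) and store the quantized prediction
    error in units of `step`. Step size and predictor order are assumptions."""
    frames = np.asarray(frames, dtype=np.float64)
    coded = [np.round(frames[0] / step).astype(np.int32),
             np.round(frames[1] / step).astype(np.int32)]
    recon = [coded[0] * step, coded[1] * step]
    for t in range(2, len(frames)):
        pred = 2.0 * recon[-1] - recon[-2]        # linear extrapolation
        err = np.round((frames[t] - pred) / step).astype(np.int32)
        coded.append(err)
        recon.append(pred + err * step)           # decoder-side reconstruction
    return coded

rng = np.random.default_rng(2)
pos = np.cumsum(rng.normal(0, 0.01, (100, 50, 3)), axis=0)   # smooth toy trajectory
coded = encode_trajectory(pos)
print("largest residual code:", int(np.abs(coded[-1]).max()))  # small integers compress well
```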
Taoka, Toshiaki; Iwasaki, Satoru; Okamoto, Shingo; Sakamoto, Masahiko; Nakagawa, Hiroyuki; Otake, Shoichiro; Fujioka, Masayuki; Hirohashi, Shinji; Kichikawa, Kimihiko
2006-06-01
The purpose of this study was to evaluate the relationship between pituitary stalk compression by the dorsum sellae and clinical or laboratory findings in short stature children. We retrospectively reviewed magnetic resonance images of the pituitary gland and pituitary stalk for 34 short stature children with growth hormone (GH) deficiency and 24 age-matched control cases. We evaluated the degree of pituitary stalk compression caused by the dorsum sellae. Body height, GH level, pituitary height and onset age of the short stature were statistically compared between cases of pituitary stalk compression with associated stalk deformity and cases without compression. Compression of the pituitary stalk with associated stalk deformity was seen in nine cases within the short stature group. There were no cases observed in the control group. There were no significant differences found for body height, GH level and pituitary height between the cases of pituitary stalk compression with associated stalk deformity and cases without compression. However, a significant difference was seen in the onset age between cases with and without stalk compression. Pituitary stalk compression with stalk deformity caused by the dorsum sellae was significantly correlated with late childhood onset of short stature.
Quan, Xin; Guo, Kai; Wang, Yuqing; Huang, Liangliang; Chen, Beiyu; Ye, Zhengxu; Luo, Zhuojing
2014-01-01
In a primary spinal cord injury, the amount of mechanical compression insult that the neurons experience is one of the most critical factors in determining the extent of the injury. The ultrastructural changes that neurons undergo when subjected to mechanical compression are largely unknown. In the present study, using a compression-driven instrument that can simulate mechanical compression insult, we applied mechanical compression stimulation at 0.3, 0.5, and 0.7 MPa to dorsal root ganglion (DRG) neurons for 10 min. Combined with atomic force microscopy, we investigated nanoscale changes in the membrane-skeleton, cytoskeleton alterations, and apoptosis induced by mechanical compression injury. The results indicated that mechanical compression injury leads to rearrangement of the membrane-skeleton compared with the control group. In addition, mechanical compression stimulation induced apoptosis and necrosis and also changed the distribution of the cytoskeleton in DRG neurons. Thus, the membrane-skeleton may play an important role in the response to mechanical insults in DRG neurons. Moreover, sudden insults caused by high mechanical compression, which is most likely conducted by the membrane-skeleton, may induce necrosis, apoptosis, and cytoskeletal alterations.
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
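The kind of rate-versus-throughput measurement described above can be illustrated with standard-library codecs; the sketch below uses zlib and lzma as stand-ins, since Blosc, FPZIP and ZFP are not assumed to be installed, and the synthetic "particle" array is only a placeholder for real simulation output.

```python
import time, zlib, lzma
import numpy as np

def benchmark(name, compress, raw):
    """Measure compression ratio and single-threaded throughput for one codec."""
    t0 = time.perf_counter()
    comp = compress(raw)
    dt = time.perf_counter() - t0
    print(f"{name:5s} ratio {len(raw) / len(comp):5.2f}  throughput {len(raw) / dt / 1e6:8.1f} MB/s")

# toy particle data: rounded positions so the byte stream is compressible
rng = np.random.default_rng(3)
particles = np.round(rng.random((1_000_000, 3)) * 1000).astype(np.float32)
raw = particles.tobytes()

benchmark("zlib", lambda b: zlib.compress(b, 6), raw)
benchmark("lzma", lambda b: lzma.compress(b, preset=1), raw)
```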
Waninger, Kevin N; Goodbred, Andrew; Vanic, Keith; Hauth, John; Onia, Joshua; Stoltzfus, Jill; Melanson, Scott
2014-07-01
To investigate (1) cardiopulmonary resuscitation (CPR) adequacy during simulated cardiac arrest of equipped football players and (2) whether protective football equipment impedes CPR performance measures. Exploratory crossover study performed on a Laerdal SimMan 3G interactive manikin simulator. Temple University/St Luke's University Health Network Regional Medical School Simulation Laboratory. Thirty BCLS-certified ATCs and 6 ACLS-certified emergency department technicians. Subjects were given standardized rescuer scenarios to perform three 2-minute sequences of compression-only CPR. Baseline CPR sequences were captured on each subject. Experimental conditions included 2-minute sequences of CPR either over protective football shoulder pads or under unlaced pads. Subjects were instructed to adhere to 2010 American Heart Association guidelines (compression-only CPR at a rate of 100/min and a depth of 51 mm). Dependent variables included average compression depth, average compression rate, percentage of time chest wall recoiled, and percentage of hands-on contact during compressions. Differences between subject groups were not found to be statistically significant, so groups were combined (n = 36) for analysis of CPR compression adequacy. Compression depth was deeper under shoulder pads than over (P = 0.02), with mean depths of 36.50 and 31.50 mm, respectively. No significant difference was found with compression rate or chest wall recoil. Chest compression depth is significantly decreased when performed over shoulder pads, while there is no apparent effect on rate or chest wall recoil. Although the clinical outcomes from our observed 15% difference in compression depth are uncertain, chest compression under the pads significantly increases the depth of compressions and more closely approaches American Heart Association guidelines for chest compression depth in cardiac arrest.
Collazo Chao, Eliseo; Luque, María Antonia; González-Ripoll, Carmen
2010-10-01
There is still controversy over the best compression therapy after performing a greater saphenectomy. The purpose of this study is to establish whether the use of a controlled compression stocking has the same level of safety and efficacy as a compression bandage in the immediate post-operative period after a greater saphenectomy. A prospective, randomised, open-labelled study, comparing three groups: a) a conventional compression bandage for one week, b) a conventional compression bandage replaced by a controlled tubular compression stocking 5 h after its placement, c) immediate direct use of the controlled tubular compression stocking, was conducted on fifty-five consecutive outpatients with a greater saphenectomy in one of their legs, and who fulfilled the inclusion criteria. The working hypothesis was that the controlled tubular compression stocking could replace, in terms of efficacy, safety and comfort, the usual controlled compression in the immediate post-operative period after saphenous vein stripping. The analysis variables were pain, control of bleeding, analgesics in the post-operative period, bruising, incapacity during the first week after the operation and comfort level. There were no statistically significant differences between the three types of compression studied as regards safety, efficacy, comfort level, pain and analgesic consumption, but there was a difference in convenience in favour of the stocking. The controlled tubular compression stocking can advantageously replace the compression bandage after greater saphenous vein stripping in outpatients, having the same safety and efficacy. Copyright © 2009 AEC. Published by Elsevier Espana. All rights reserved.
Park, Kyung-Mi; Kim, Suhn-Yeop; Oh, Duck-Won
2010-12-01
The aims of this study were to assess the effect of the pelvic compression belt on the electromyographic (EMG) activities of gluteus medius (GM), quadratus lumborum (QL), and lumbar multifidus (LM) during side-lying hip abduction. Thirty-one subjects (15 men and 16 women) with no history of pathology volunteered for this study. Subjects were instructed to perform hip abduction in a side-lying position with and without the pelvic compression belt. The pelvic compression belt was adjusted just below the anterior superior iliac spines with stabilizing pressure applied using elastic compression bands. Surface EMG data were collected from the GM, QL, and LM of the dominant limb. Significantly decreased EMG activity in the QL (without the pelvic compression belt, 60.19±23.66% maximal voluntary isometric contraction [MVIC]; with the pelvic compression belt, 51.44±23.00% MVIC) and significantly increased EMG activity in the GM (without the pelvic compression belt, 26.71±12.88% MVIC; with the pelvic compression belt, 35.02±18.28% MVIC) and in the LM (without the pelvic compression belt, 30.28±14.60% MVIC; with the pelvic compression belt, 37.47±18.94% MVIC) were found when the pelvic compression belt was applied (p<0.05). However, there were no significant differences in EMG activity between male and female subjects. The findings suggest that the pelvic compression belt may be helpful to prevent unwanted substitution movement during side-lying hip abduction by increasing GM and LM activity and decreasing QL activity. Copyright © 2010 Elsevier Ltd. All rights reserved.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
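The core transform, mapping bases to bits and laying them out as a two-dimensional binary image, can be sketched as below; the particular 2-bit encoding and the row width are assumptions, and the rectangular partition coder itself is omitted.

```python
import numpy as np

BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}  # assumed mapping

def genome_to_bitmap(seq, width=64):
    """Turn a DNA string into a 2-D binary image (rows of `width` bits), the kind
    of representation a rectangular-partition coder would then compress."""
    bits = np.fromiter((b for base in seq for b in BASE_BITS.get(base, (0, 0))),
                       dtype=np.uint8)
    pad = (-len(bits)) % width                    # zero-pad the last row
    bits = np.pad(bits, (0, pad))
    return bits.reshape(-1, width)

bitmap = genome_to_bitmap("ACGTACGTTTTTGGGGACGT" * 10)
print(bitmap.shape, "fraction of ones:", bitmap.mean())
```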
Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon
2008-06-01
The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions gives a marked improvement over these popular methods.
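A minimal quadtree encoder for a binary land/no-land mask, in the spirit of the masked-region coding described above, is sketched below; the leaf/"Q" output format is an illustrative assumption, not the dataset's actual storage format.

```python
import numpy as np

def quadtree(mask, y=0, x=0, size=None):
    """Recursively encode a square binary mask: a uniform block becomes one leaf
    ('0'/'1'), a mixed block becomes 'Q' followed by its four quadrants."""
    if size is None:
        size = mask.shape[0]
    block = mask[y:y + size, x:x + size]
    if block.min() == block.max():                 # uniform block -> single leaf
        return str(int(block[0, 0]))
    h = size // 2
    return "Q" + "".join(quadtree(mask, yy, xx, h)
                         for yy in (y, y + h) for xx in (x, x + h))

mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 8:30] = 1                              # a small "land" region in a sea of zeros
code = quadtree(mask)
print(len(code), "symbols encode", mask.size, "pixels")
```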
Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin
2015-07-01
The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth of greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were remarkably higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.
Schober, P; Krage, R; Lagerburg, V; Van Groeningen, D; Loer, S A; Schwarte, L A
2014-04-01
Current cardiopulmonary resuscitation (CPR) guidelines recommend an increased chest compression depth and rate compared to previous guidelines, and the use of automatic feedback devices is encouraged. However, it is unclear whether this compression depth can be maintained at an increased frequency. Moreover, the underlying surface may influence accuracy of feedback devices. We investigated compression depths over time and evaluated the accuracy of a feedback device on different surfaces. Twenty-four volunteers performed four two-minute blocks of CPR targeting current guideline recommendations on different surfaces (floor, mattress, 2 backboards) on a patient simulator. Participants rested for 2 minutes between blocks. Influences of time and different surfaces on chest compression depth (ANOVA, mean [95% CI]) and accuracy of a feedback device to determine compression depth (Bland-Altman) were assessed. Mean compression depth did not reach recommended depth and decreased over time during all blocks (first block: from 42 mm [39-46 mm] to 39 mm [37-42 mm]). A two-minute resting period was insufficient to restore compression depth to baseline. No differences in compression depth were observed on different surfaces. The feedback device slightly underestimated compression depth on the floor (bias -3.9 mm), but markedly overestimated on the mattress (bias +12.6 mm). This overestimation was eliminated after correcting compression depth by a second sensor between manikin and mattress. Strategies are needed to improve chest compression depth, and more than two providers should alternate with chest compressions. The underlying surface does not necessarily adversely affect CPR performance but influences accuracy of feedback devices. Accuracy is improved by a second, posterior, sensor.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantages of greater complexity and slower computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for data files of different sizes are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are also outlined.
Video bandwidth compression system
NASA Astrophysics Data System (ADS)
Ludington, D.
1980-08-01
The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
An Optimal Seed Based Compression Algorithm for DNA Sequences
Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan
2016-01-01
This paper proposes a seed based lossless compression algorithm to compress a DNA sequence which uses a substitution method that is similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than the existing lossless DNA sequence compression algorithms. PMID:27555868
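The repeat-substitution idea can be sketched as below: exact k-mer repeats are collected into a dictionary and later occurrences are replaced by references. The token format and the choice of k are assumptions, and the mismatch handling described in the abstract is omitted.

```python
def repeat_substitute(seq, k=8):
    """Replace later occurrences of length-k substrings by (offset, length)
    references to their first occurrence, LZ-style. Format and k are illustrative
    assumptions, not the published algorithm's exact rules."""
    first_pos = {}
    out, i = [], 0
    while i < len(seq):
        key = seq[i:i + k]
        if len(key) == k and key in first_pos:
            out.append(("R", first_pos[key], k))   # reference to the earlier copy
            i += k
        else:
            if len(key) == k:
                first_pos[key] = i
            out.append(("L", seq[i]))              # literal base
            i += 1
    return out

sequence = "ACGTACGTGGGTTTACGTACGTGGGTTT"
tokens = repeat_substitute(sequence)
refs = sum(1 for t in tokens if t[0] == "R")
print(f"{refs} reference(s), {len(tokens)} tokens for {len(sequence)} bases")
```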
Lossless compression of otoneurological eye movement signals.
Tossavainen, Timo; Juhola, Martti
2002-12-01
We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression had not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and using an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz and with 13 bit amplitude resolution, could be losslessly compressed with a compression ratio of about 2.7.
NASA Technical Reports Server (NTRS)
Cambon, C.; Coleman, G. N.; Mansour, N. N.
1992-01-01
The effect of rapid mean compression on compressible turbulence at a range of turbulent Mach numbers is investigated. Rapid distortion theory (RDT) and direct numerical simulation results for the case of axial (one-dimensional) compression are used to illustrate the existence of two distinct rapid compression regimes. These regimes are set by the relationships between the timescales of the mean distortion, the turbulence, and the speed of sound. A general RDT formulation is developed and is proposed as a means of improving turbulence models for compressible flows.
Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility
NASA Astrophysics Data System (ADS)
Herzke, Tobias; Hohmann, Volker
2005-12-01
The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility resulting from a gain provided by instantaneous compression is as high as from a gain provided by linear amplification. No negative effects of the distortions introduced by the instantaneous compression scheme in terms of speech recognition are observed.
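A heavily simplified sketch of instantaneous per-band envelope compression follows; Butterworth band-pass filters (via SciPy) stand in for the gammatone filterbank, and a fixed compression ratio replaces the loudness-scaling-derived gain characteristics, so this is only an illustration of the signal flow, not the scheme evaluated in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def instantaneous_compress(x, fs, edges=(100, 500, 2000, 6000), ratio=3.0, ref=0.1):
    """Split the signal into bands, compress each band's instantaneous (Hilbert)
    envelope with a fixed ratio, and resynthesize by summing the bands."""
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band)) + 1e-12        # instantaneous envelope
        gain = (env / ref) ** (1.0 / ratio - 1.0)  # instantaneous compressive gain
        out += band * gain
    return out

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) * (0.1 + 0.9 * (t > 0.5))  # step in level
y = instantaneous_compress(x, fs)
print("input level ratio %.1f, output level ratio %.1f"
      % (x[t > 0.6].std() / x[t < 0.4].std(), y[t > 0.6].std() / y[t < 0.4].std()))
```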
Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.
Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U
2010-06-01
Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min⁻¹ using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth and compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test. Significance was defined qualitatively as p ≤ 0.05. Mean % effective breaths were 90 ± 18.6% in two-thumb and 88.9 ± 21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 in two-thumb and 7.0 ± 1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87 ± 11 in two-thumb and 92 ± 12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer time to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR had 4 fewer delivered compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Partsch, H; Stout, N; Forner-Cordero, I; Flour, M; Moffatt, C; Szuba, A; Milic, D; Szolnoky, G; Brorson, H; Abel, M; Schuren, J; Schingale, F; Vignes, S; Piller, N; Döller, W
2010-10-01
A mainstay of lymphedema management involves the use of compression therapy. Compression therapy application is variable at different levels of disease severity. Evidence is scant to direct clinicians in best practice regarding compression therapy use. Further, compression clinical trials are fragmented and poorly extrapolable to the greater population. An ideal construct for conducting clinical trials regarding compression therapy will promote parallel global initiatives based on a standard research agenda. The purpose of this article is to review current evidence in practice regarding compression therapy for breast cancer related lymphedema (BCRL) management and, based on this evidence, offer an expert consensus recommendation for a research agenda and prescriptive trials. Recommendations herein focus solely on compression interventions. This document represents the proceedings of a session organized by the International Compression Club (ICC) in June 2009 in Ponzano (Veneto, Italy). The purpose of the meeting was to enable a group of experts to discuss the existing evidence for compression treatment in BCRL, concentrating on areas where randomized controlled trials (RCTs) are lacking. The current body of research suggests efficacy of compression interventions in the treatment and management of lymphedema. However, studies to date have failed to adequately address various forms of compression therapy and their optimal application in BCRL. We offer recommendations for standardized compression research trials for prophylaxis of arm lymphedema and for the management of chronic BCRL. Suggestions are also made regarding inclusion and exclusion criteria, measurement methodology, and additional variables of interest for researchers to capture. This document should inform future research trials in compression therapy and serve as a guide to clinical researchers, industry researchers and lymphologists regarding the strengths, weaknesses and shortcomings of the current literature. By providing this construct for research trials, the authors aim to support evidence-based therapy interventions, promote a cohesive, standardized and informative body of literature to enhance clinical outcomes, improve the quality of future research trials, inform industry innovation and guide policy related to BCRL.
Compression for the management of venous leg ulcers: which material do we have?
Partsch, Hugo
2014-05-01
Compression therapy is the most important basic treatment modality in venous leg ulcers. The review focusses on the materials which are used: 1. Compression bandages, 2. Compression stockings, 3. Self-adjustable Velcro-devices, 4. Compression pumps, 5. Hybrid devices. Compression bandages, usually applied by trained staff, provide a wide spectrum of materials with different elastic properties. To make bandaging easier, safer and more effective, most modern bandages combine different material components. Self-management of venous ulcers has become feasible by introducing double compression stockings ("ulcer kits") and self-adjustable Velcro devices. Compression pumps can be used as adjunctive measures, especially for patients with restricted mobility. The combination of sustained and intermittent compression ("hybrid device") is a promising new tool. The interface pressure corresponding to the dosage of compression therapy determines the hemodynamic efficacy of each device. In order to reduce ambulatory venous hypertension compression pressures of more than 50 mm Hg in the upright position are desirable. At the same time pressure should be lower in the resting position in order to be tolerated. This prerequisite may be fulfilled by using inelastic, short stretch material including multicomponent bandages and cohesive surfaces, all characterized by high stiffness. Such materials do not give way when calf muscles contract during walking which leads to high peaks of interface pressure ("massaging effect"). © The Author(s) 2014.
Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht
2018-03-21
Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
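A one-pass, locally adaptive vector quantizer in this spirit can be sketched as follows; the distance threshold, learning rate, and escape mechanism are illustrative assumptions rather than the cited algorithm's actual rules, and only the encoder side is shown.

```python
import numpy as np

def lavq_encode(vectors, threshold=0.5, lr=0.1):
    """One-pass adaptive VQ: each input is coded either as a new codeword
    (escape) or as the index of its nearest codeword, which is then nudged
    toward the input so the codebook adapts to the local source statistics."""
    codebook, stream = [], []
    for v in vectors:
        if codebook:
            dists = np.linalg.norm(np.asarray(codebook) - v, axis=1)
            j = int(np.argmin(dists))
        if not codebook or dists[j] > threshold:
            codebook.append(v.copy())
            stream.append(("new", v))              # escape: send the vector itself
        else:
            codebook[j] += lr * (v - codebook[j])  # local adaptation
            stream.append(("idx", j))
    return stream, codebook

rng = np.random.default_rng(4)
centers = rng.random((8, 4))
data = centers[rng.integers(0, 8, 2000)] + rng.normal(0, 0.02, (2000, 4))
stream, cb = lavq_encode(data)
print(len(cb), "codewords;", sum(t == "idx" for t, _ in stream), "of", len(stream), "inputs coded by index")
```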
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Benton, Nathanael; Burns, Patrick
Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: a high-efficiency/variable speed drive (VSD) compressor replacing a modulating, load/unload, or constant-speed compressor; and a compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
Mužíková, Jitka; Kubíčková, Alena
2016-09-01
The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of the tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.
Squish: Near-Optimal Compression for Archival of Relational Datasets
Gao, Yihan; Parameswaran, Aditya
2017-01-01
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
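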
Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara
2018-01-01
This randomized controlled trial aimed to investigate differences in outcome after treatment with hot herbal compress, hot compress, and topical diclofenac. Participants were divided equally into three groups receiving hot herbal compress, hot compress, or topical diclofenac, the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to assess pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were also examined to identify effects on motion. All treatments significantly decreased pain intensity and increased cervical range of motion, while the intervention groups showed superior results to the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.
Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Mamta; Gupta, D. N., E-mail: dngupta@physics.du.ac.in
We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. Plasma equilibrium density is modified due to the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with plasmas. First, within one-dimensional analysis, the longitudinal self-compression mechanism is discussed. Three-dimensional analysis (spatiotemporal) of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can improve significantly the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting the parameters such as collision frequency, ion temperature, and laser intensity.
Maffiodo, Daniela; De Nisco, Giuseppe; Gallo, Diego; Audenino, Alberto; Morbiducci, Umberto; Ferraresi, Carlo
2016-04-01
This work investigates the effect that the application of intermittent pneumatic compression to lower limbs has on the cardiovascular system. Intermittent pneumatic compression can be applied to subjects with reduced or null mobility and can be useful for therapeutic purposes in sports recovery, deep vein thrombosis prevention and lymphedema drainage. However, intermittent pneumatic compression performance and the effectiveness are often difficult to predict. This study presents a reduced-order numerical model of the interaction between the cardiovascular system and the intermittent pneumatic compression device. The effect that different intermittent pneumatic compression operating conditions have on the overall circulation is investigated. Our findings confirm (1) that an overall positive effect on hemodynamics can be obtained by properly applying the intermittent pneumatic compression device and (2) that using intermittent pneumatic compression for cardiocirculatory recovery is feasible in subjects affected by lower limb disease. © IMechE 2016.
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, and this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
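The preprocessing step of such an experiment, re-encoding an image at several JPEG quality factors and measuring the pixel error introduced, can be sketched with Pillow as below; the microaneurysm detector and ROC analysis are not reproduced, Pillow itself is an assumed dependency, and the input file name is hypothetical.

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip_error(img, qualities=(90, 70, 50, 30)):
    """Re-encode an image at several JPEG quality factors and report the RMS
    pixel error introduced, a crude proxy for the severity of block artefacts."""
    img = img.convert("RGB")                       # JPEG cannot store alpha
    ref = np.asarray(img.convert("L"), dtype=np.float64)
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        dec = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
        print(f"quality {q:3d}: RMS error {np.sqrt(np.mean((ref - dec) ** 2)):.2f}")

# 'retina.png' is a hypothetical high-quality source image
jpeg_roundtrip_error(Image.open("retina.png"))
```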
Compression in Working Memory and Its Relationship With Fluid Intelligence.
Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien
2018-06-01
Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.
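As a rough illustration of the idea, and only as a stand-in for the algorithmic complexity metric actually used by the authors, compressed length under a general-purpose codec can serve as a crude compressibility score for memoranda:

```python
import random
import zlib

def compressibility(sequence):
    """Crude compressibility score: compressed length / raw length (lower means
    more structure). A stand-in for the paper's algorithmic-complexity metric."""
    raw = "".join(map(str, sequence)).encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
structured = [1, 2, 3, 4] * 50                       # regular, easily chunked memoranda
unstructured = [random.randint(1, 4) for _ in range(200)]
print("structured: %.2f  unstructured: %.2f"
      % (compressibility(structured), compressibility(unstructured)))
```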
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
Existing approaches to encrypting images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into four blocks to compress and encrypt, and the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm as well as its acceptable compression performance.
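The measurement-matrix construction step can be sketched as below: a logistic map seeded by the key generates the first row of a circulant matrix, of which only m rows are kept. The map parameter, normalization, and sizes are assumptions, and the block exchange and encryption steps of the proposed algorithm are omitted.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Iterate the logistic map x_{k+1} = r*x_k*(1-x_k); x0 acts as the key."""
    x, out = x0, np.empty(n)
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = x
    return out

def circulant_measurement_matrix(x0, m, n):
    """Build an m-by-n partial circulant matrix whose generating row is driven by
    the logistic map (keyed by x0). The 1/sqrt(m) normalization is an assumption."""
    row = 2.0 * logistic_sequence(x0, n) - 1.0          # map to [-1, 1]
    full = np.empty((n, n))
    for i in range(n):
        full[i] = np.roll(row, i)                        # circulant structure
    return full[:m] / np.sqrt(m)                         # keep m rows for compression

phi = circulant_measurement_matrix(x0=0.3456, m=64, n=256)
x = np.zeros(256); x[[10, 100, 200]] = (1.0, -2.0, 0.5)  # a sparse test signal
y = phi @ x                                              # compressed measurements
print(phi.shape, "->", y.shape)
```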
Bitshuffle: Filter for improving compression of typed binary data
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi
2017-12-01
Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
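A minimal NumPy rendering of the bit-level transposition itself is shown below; the blocking, SIMD kernels, and HDF5 filter of the real package are not reproduced, and zlib merely stands in for whatever compressor follows the shuffle.

```python
import zlib
import numpy as np

def bitshuffle(arr):
    """Bit-level transposition: view each element as a row of bits, then regroup
    so that all bit 0s come first, then all bit 1s, and so on."""
    bytes_view = arr.view(np.uint8).reshape(arr.size, arr.dtype.itemsize)
    bits = np.unpackbits(bytes_view, axis=1)       # one row of bits per element
    return np.packbits(bits.T)                     # bit-plane-major byte stream

data = np.arange(100_000, dtype=np.uint32) // 7    # slowly varying counters
plain = zlib.compress(data.tobytes(), 6)
shuffled = zlib.compress(bitshuffle(data).tobytes(), 6)
print("zlib alone: %d bytes   bitshuffle + zlib: %d bytes" % (len(plain), len(shuffled)))
```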
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto
2015-06-01
To describe the quality of chest compressions performed by secondary-school students trained with a realtime audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.
Parallel compression of data chunks of a shared data object using a log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-10-25
Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
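The client-side idea, compressing each chunk in parallel before handing it to storage, can be sketched with the standard library as below; zlib and a process pool stand in for the actual compressor and for the burst-buffer/Log-Structured File System machinery, which are not modeled.

```python
import zlib
from multiprocessing import Pool

def compress_chunk(chunk):
    """Compress one data chunk; runs in a worker process."""
    return zlib.compress(chunk, 6)

def write_shared_object(chunks, workers=4):
    """Compress all chunks of a shared object in parallel and return the
    compressed pieces (a stand-in for shipping them to a storage node)."""
    with Pool(workers) as pool:
        return pool.map(compress_chunk, chunks)

if __name__ == "__main__":
    # toy "simulation output": 16 chunks of 1 MB each
    chunks = [bytes([i % 251]) * 1_000_000 for i in range(16)]
    compressed = write_shared_object(chunks)
    total_in = sum(len(c) for c in chunks)
    total_out = sum(len(c) for c in compressed)
    print(f"compressed {total_in} -> {total_out} bytes across {len(compressed)} chunks")
```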
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
2014-03-31
dissimilar materials (steel end fixtures and RMS). 2.6.4 Compression Tests To prevent the ends of the specimens from mushrooming during compression ...RMS cylinder. The compression test was modeled in ANSYS by applying a fixed displacement in the axial direction. The first ply to exceed the...four phases of loading: 1) a compressive acceleration during gun launch, 2) a tensile unloading on exit from the barrel, 3) a compressive decelera
Okodo, Mitsuaki; Okayama, Kaori; Fukui, Tadasi; Shiina, Natsuko; Caniz, Timothy; Yabusaki, Hiromi; Fujii, Masahiko
2017-01-01
Purpose: Binucleation is a reactive cellular change (RCC) in Pap smears due to Candida infection. However, the origin of these binucleated cells as RCCs remains unclear. The purpose of this study was to examine binucleation in patients negative for intraepithelial lesion or malignancy (NILM) and infected with Candida and those infected with high-risk human papillomavirus (hr-HPV) and to clarify the origin of the binucleated cells. Methods: A total of 115 endocervical swab specimens with a combined diagnosis of NILM, Candida infection, and RCCs were used for this study. Pap smears were used to identify binucleated cells and then separate them into two groups, compression-positive and compression-negative. In addition, hr-HPV was detected using polymerase chain reaction (PCR) with a specific primer on the DNA extracted from the remaining residual cytology specimens. To make the hr-HPV-infected binucleated cells visible, an in situ PCR assay was performed on the Pap smear. Results: Of the 115 specimens, 69.6% contained binucleated cells, 26 (32.5%) showed only the compressed form, 35 (43.8%) showed only the non-compressed form, and 19 showed both the compressed and non-compressed forms of binucleated cells. Also, 34 specimens (29.6%) were positive for hr-HPV. The sensitivity and specificity of compression-positive binucleated cells were 91.2% and 82.7% (p < 0.001), but they were not significant in the compression-negative group (p = 0.156). Also, 34 cases with hr-HPV contained 99 compression-positive and 24 compression-negative cells. The hr-HPV-positive cells accounted for 68 (68.7%) of the 99 compression-positive and 2 (8.3%) of the 24 compression-negative binucleated cells as determined by an in situ PCR assay for hr-HPV. The relationship between compression and hr-HPV was statistically significant (p < 0.001). Conclusion: Compression-positive binucleated cells may be present as a result of hr-HPV infection and not of RCC, which is caused by inflammation in NILM cases infected with Candida. PMID:28952287
Davies, C E; Woolfrey, G; Hogg, N; Dyer, J; Cooper, A; Waldron, J; Bulbulia, R; Whyman, M R; Poskitt, K R
2015-12-01
Slough in chronic venous leg ulcers may be associated with delayed healing. The purpose of this study was to assess larval debridement in chronic venous leg ulcers and its subsequent effect on healing. All patients with chronic leg ulcers presenting to the leg ulcer service were evaluated for the study. Exclusion criteria were: ankle brachial pressure indices <0.85 or >1.25, no venous reflux on duplex, and <20% of the ulcer surface covered with slough. Participants were randomly allocated to either 4-layer compression bandaging alone or 4-layer compression bandaging + larvae. Surface areas of ulcer and slough were assessed on day 4; 4-layer compression bandaging was then continued and ulcer size was measured every 2 weeks for up to 12 weeks. A total of 601 patients with chronic leg ulcers were screened between November 2008 and July 2012. Of these, 20 were randomised to 4-layer compression bandaging and 20 to 4-layer compression bandaging + larvae. Median (range) ulcer size was 10.8 (3-21.3) cm² and 8.1 (4.3-13.5) cm² in the 4-layer compression bandaging and 4-layer compression bandaging + larvae groups, respectively (Mann-Whitney U test, P = 0.184). On day 4, the median reduction in slough area was 3.7 cm² in the 4-layer compression bandaging group (P < 0.05) and 4.2 cm² (P < 0.001) in the 4-layer compression bandaging + larvae group. The median percentage area reduction of slough was 50% in the 4-layer compression bandaging group and 84% in the 4-layer compression bandaging + larvae group (Mann-Whitney U test, P < 0.05). The 12-week healing rate was 73% and 68% in the 4-layer compression bandaging and 4-layer compression bandaging + larvae groups, respectively (Kaplan-Meier analysis, P = 0.664). Larval debridement therapy improves wound debridement in chronic venous leg ulcers treated with multilayer compression bandages. However, no subsequent improvement in ulcer healing was demonstrated. © The Author(s) 2014.
Deschilder, Koen; De Vos, Rien; Stockman, Willem
2007-07-01
Recent cardiopulmonary resuscitation (CPR) guidelines changed the compression:ventilation ratio to 30:2. The aim was to compare the quality of chest compressions and exhaustion using the ratio 30:2 versus 15:2. A prospective, randomised crossover design was used. Subjects were recruited from the H.-Hart hospital personnel and the University College Katho for nurses and bio-engineering. Each participant performed 5 min of CPR using either the ratio 30:2 or 15:2, then after a 15-min rest switched to the other ratio. The data were collected using a questionnaire and an adult resuscitation manikin. The outcomes included exhaustion as measured by a visual analogue scale (VAS) score, depth of chest compressions, rate of chest compressions, total number of chest compressions, number of correct chest compressions, and incomplete release. Data were compared using the Wilcoxon signed-rank test. The results are presented as medians and interquartile ranges (IQR). One hundred and thirty subjects completed the study. The exhaustion score on the VAS was 5.9 (IQR 2.25) for the ratio 30:2 and 4.5 (IQR 2.88) for the ratio 15:2 (P<0.001). The compression depth was 40.5 mm (IQR 15.75) for 30:2 and 41 mm (IQR 15.5) for 15:2 (P=0.5). The compression rate was 118 beats/min (IQR 29) for 30:2 and 115 beats/min (IQR 32) for 15:2 (P=0.02). The total number of compressions per 5 min was 347 (IQR 79) for 30:2 and 244 (IQR 72.5) for 15:2 (P<0.001). The number of correct compressions per 5 min was 61.5 (IQR 211.75) for 30:2 and 55.5 (IQR 142.75) for 15:2 (P=0.001). The relative risk (RR) of incomplete release in 30:2 versus 15:2 was 1.087 (95% CI=0.633-1.867). Although the 30:2 ratio is rated to be more exhausting, the 30:2 technique delivers more chest compressions and the quality of chest compressions remains unchanged.
Lin, Yiqun; Wan, Brandi; Belanger, Claudia; Hecker, Kent; Gilfoyle, Elaine; Davidson, Jennifer; Cheng, Adam
2017-01-01
The depth of chest compression (CC) during cardiac arrest is associated with patient survival and good neurological outcomes. Previous studies showed that mattress compression can alter the amount of CCs given with adequate depth. We aimed to quantify mattress compressibility on two types of ICU mattress, and to explore the effects of memory foam mattress use and a backboard on mattress compression depth and the effect of feedback source on effective compression depth. The study utilized a cross-sectional self-control design. Participants working in the pediatric intensive care unit (PICU) performed 1 min of CC on a manikin in each of the following four conditions: (i) typical ICU mattress; (ii) typical ICU mattress with a CPR backboard; (iii) memory foam ICU mattress; and (iv) memory foam ICU mattress with a CPR backboard, using two different sources of real-time feedback: (a) an external accelerometer sensor device measuring total compression depth and (b) an internal light sensor measuring effective compression depth only. CPR quality was concurrently measured by these two devices. The differences between the two measures (mattress compression depth) were summarized and compared using multilevel linear regression models. Effective compression depths with different sources of feedback were compared with a multilevel linear regression model. The mean mattress compression depth varied from 24.6 to 47.7 mm, with percentage of depletion from 31.2 to 47.5%. Both use of the memory foam mattress (mean difference, MD 11.7 mm, 95% CI 4.8-18.5 mm) and use of the backboard (MD 11.6 mm, 95% CI 9.0-14.3 mm) significantly minimized mattress compressibility. Use of the internal light sensor as the source of feedback improved effective CC depth by 7-14 mm, compared with the external accelerometer sensor. Use of a memory foam mattress and CPR backboard minimizes mattress compressibility, but depletion of compression depth is still substantial. A feedback device measuring sternum-to-spine displacement can significantly improve effective compression depth on a mattress. Not applicable; this is mannequin-based simulation research.
Cortegiani, Andrea; Russotto, Vincenzo; Montalto, Francesca; Iozzo, Pasquale; Meschis, Roberta; Pugliesi, Marinella; Mariano, Dario; Benenati, Vincenzo; Raineri, Santi Maurizio; Gregoretti, Cesare; Giarratano, Antonino
2017-01-01
High-quality chest compressions are pivotal to improve survival from cardiac arrest. Basic life support training of school students is an international priority. The aim of this trial was to assess the effectiveness of a real-time training software (Laerdal QCPR®) compared to a standard instructor-based feedback for chest compressions acquisition in secondary school students. After an interactive frontal lesson about basic life support and high quality chest compressions, 144 students were randomized to two types of chest compressions training: 1) using Laerdal QCPR® (QCPR group- 72 students) for real-time feedback during chest compressions with the guide of an instructor who considered software data for students' correction 2) based on standard instructor-based feedback (SF group- 72 students). Both groups had a minimum of a 2-minute chest compressions training session. Students were required to reach a minimum technical skill level before the evaluation. We evaluated all students at 7 days from the training with a 2-minute chest compressions session. The primary outcome was the compression score, which is an overall measure of chest compressions quality calculated by the software expressed as percentage. 125 students were present at the evaluation session (60 from QCPR group and 65 from SF group). Students in QCPR group had a significantly higher compression score (median 90%, IQR 81.9-96.0) compared to SF group (median 67%, IQR 27.7-87.5), p = 0.0003. Students in QCPR group performed significantly higher percentage of fully released chest compressions (71% [IQR 24.5-99.0] vs 24% [IQR 2.5-88.2]; p = 0.005) and better chest compression rate (117.5/min [IQR 106-123.5] vs 125/min [115-135.2]; p = 0.001). In secondary school students, a training for chest compressions based on a real-time feedback software (Laerdal QCPR®) guided by an instructor is superior to instructor-based feedback training in terms of chest compression technical skill acquisition. Australian New Zealand Clinical Trials Registry ACTRN12616000383460.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated for the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time needed to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
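As a rough illustration of the compression-based matching idea summarized above, the sketch below substitutes zlib for JPEG and uses a simple composite score (how much better a probe and a gallery image compress together than separately); the exact CCR formula and the ASUMSS data are not reproduced, and the byte strings stand in for real face images.

```python
# Minimal sketch of compression-based (CPB) matching, assuming zlib in place
# of JPEG and a hypothetical composite score; not the paper's exact CCR.
import zlib

def compressed_size(data: bytes) -> int:
    """Size of the zlib-compressed byte string (stand-in for a JPEG codec)."""
    return len(zlib.compress(data, 9))

def composite_ratio(probe: bytes, gallery_img: bytes) -> float:
    """How much better probe and gallery compress together than separately;
    larger values indicate more shared structure (a candidate match)."""
    return (compressed_size(probe) + compressed_size(gallery_img)) / compressed_size(probe + gallery_img)

def match_face(probe: bytes, gallery: dict) -> str:
    """Return the gallery identity whose image yields the largest score."""
    return max(gallery, key=lambda name: composite_ratio(probe, gallery[name]))

if __name__ == "__main__":
    gallery = {"subject_A": b"abcabcabc" * 50, "subject_B": b"xyzxyzxyz" * 50}
    probe = b"abcabcabd" * 50            # resembles subject_A
    print(match_face(probe, gallery))    # expected: subject_A
```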
The prevalence of chest compression leaning during in-hospital cardiopulmonary resuscitation
Fried, David A.; Leary, Marion; Smith, Douglas A.; Sutton, Robert M.; Niles, Dana; Herzberg, Daniel L.; Becker, Lance B.; Abella, Benjamin S.
2011-01-01
Objective: Successful resuscitation from cardiac arrest requires the delivery of high-quality chest compressions, encompassing parameters such as adequate rate, depth, and full recoil between compressions. The lack of compression recoil (“leaning” or “incomplete recoil”) has been shown to adversely affect hemodynamics in experimental arrest models, but the prevalence of leaning during actual resuscitation is poorly understood. We hypothesized that leaning varies across resuscitation events, possibly due to rescuer and/or patient characteristics, and may worsen over time from rescuer fatigue during continuous chest compressions. Methods: This was an observational clinical cohort study at one academic medical center. Data were collected from adult in-hospital and Emergency Department arrest events using monitor/defibrillators that record chest compression characteristics and provide real-time feedback. Results: We analyzed 112,569 chest compressions from 108 arrest episodes from 5/2007 to 2/2009. Leaning was present in 98/108 (91%) cases; 12% of all compressions exhibited leaning. Leaning varied widely across cases: 41/108 (38%) of arrest episodes exhibited <5% leaning, yet 20/108 (19%) demonstrated >20% compression leaning. When evaluating blocks of continuous compressions (>120 sec), only 4/33 (12%) had an increase in leaning over time and 29/33 (88%) showed a decrease (p<0.001). Conclusions: Chest compression leaning was common during resuscitation care and exhibited a wide distribution, with most leaning within a subset of resuscitations. Leaning decreased over time during continuous chest compression blocks, suggesting either that leaning may not be a function of rescuer fatigue, or that it may have been mitigated by automated feedback provided during resuscitation episodes. PMID:21482010
Contribution of collagen fibers to the compressive stiffness of cartilaginous tissues.
Römgens, Anne M; van Donkelaar, Corrinus C; Ito, Keita
2013-11-01
Cartilaginous tissues such as the intervertebral disk are predominantly loaded under compression. Yet, they contain abundant collagen fibers, which are generally assumed to contribute to tensile loading only. Fiber tension is thought to originate from swelling of the proteoglycan-rich nucleus. However, in aged or degenerate disks, proteoglycans are depleted, whereas collagen content changes little. The question then arises to what extent the collagen may contribute to the compressive stiffness of the tissue. We hypothesized that this contribution is significant at high strain magnitudes and that the effect depends on fiber orientation. In addition, we aimed to determine the compression of the matrix. Bovine inner and outer annulus fibrosus specimens were subjected to incremental confined compression tests up to 60% strain in the radial and circumferential directions. The compressive aggregate modulus was determined per 10% strain increment. The biochemical composition of the compressed specimens and uncompressed adjacent tissue was determined to compute solid matrix compression. The stiffness of all specimens increased nonlinearly with strain. The collagen-rich outer annulus was significantly stiffer than the inner annulus above 20% compressive strain. Orientation influenced the modulus in the collagen-rich outer annulus. Finally, it was shown that the solid matrix was significantly compressed above 30% strain. Therefore, we concluded that collagen fibers significantly contribute to the compressive stiffness of the intervertebral disk at high strains. This is valuable for understanding the compressive behavior of collagen-reinforced tissues in general, and may be particularly relevant for aging or degenerate disks, which become more fibrous and less hydrated.
Zhang, Hehua; Yang, Zhengfei; Huang, Zitong; Chen, Bihua; Zhang, Lei; Li, Heng; Wu, Baoming; Yu, Tao; Li, Yongqin
2012-10-01
The quality of cardiopulmonary resuscitation (CPR), especially adequate compression depth, is associated with return of spontaneous circulation (ROSC) and is therefore recommended to be measured routinely. In the current study, we investigated the relationship between changes of transthoracic impedance (TTI) measured through the defibrillation electrodes, chest compression depth and coronary perfusion pressure (CPP) in a porcine model of cardiac arrest. In 14 male pigs weighing between 28 and 34 kg, ventricular fibrillation (VF) was electrically induced and untreated for 6 min. Animals were randomized to either optimal or suboptimal chest compression group. Optimal depth of manual compression in 7 pigs was defined as a decrease of 25% (50 mm) in anterior posterior diameter of the chest, while suboptimal compression was defined as 70% of the optimal depth (35 mm). After 2 min of chest compression, defibrillation was attempted with a 120-J rectilinear biphasic shock. There were no differences in baseline measurements between groups. All animals had ROSC after optimal compressions; this contrasted with suboptimal compressions, after which only 2 of the animals had ROSC (100% vs. 28.57%, p=0.021). The correlation coefficient was 0.89 between TTI amplitude and compression depth (p<0.001), 0.83 between TTI amplitude and CPP (p<0.001). Amplitude change of TTI was correlated with compression depth and CPP in this porcine model of cardiac arrest. The TTI measured from defibrillator electrodes, therefore has the potential to serve as an indicator to monitor the quality of chest compression and estimate CPP during CPR. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
Mirza, Muzna; Brown, Todd B; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S
2008-10-01
Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first-shock success in cardiac arrest. We evaluated the effect of simplifying chest compression instructions on compression depth in a dispatcher-assisted CPR protocol. Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either a modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to "push as hard as you can" in the simplified protocol, compared to "push down firmly 2 in. (5 cm)" in MPDS. Data were recorded via a Laerdal ResusciAnne SkillReporter manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth, and with total release. Instructions to "push as hard as you can", compared to "push down firmly 2 in. (5 cm)", resulted in improved chest compression depth (36.4 mm vs. 29.7 mm, p<0.0001) and improved median proportion of chest compressions done to the correct depth (32% vs. <1%, p<0.0001). No significant difference in median proportion of compressions with total release (100% for both) and average compression rate (99.7 min⁻¹ vs. 97.5 min⁻¹, p<0.56) was found. Modifying dispatcher-assisted CPR instructions by changing "push down firmly 2 in. (5 cm)" to "push as hard as you can" achieved improvement in chest compression depth at no cost to total release or average chest compression rate.
Classifying elementary cellular automata using compressibility, diversity and sensitivity measures
NASA Astrophysics Data System (ADS)
Ninagawa, Shigeru; Adamatzky, Andrew
2014-10-01
An elementary cellular automaton (ECA) is a one-dimensional, synchronous, binary automaton, where each cell update depends on its own state and states of its two closest neighbors. We attempt to uncover correlations between the following measures of ECA behavior: compressibility, sensitivity and diversity. The compressibility of ECA configurations is calculated using the Lempel-Ziv (LZ) compression algorithm LZ78. The sensitivity of ECA rules to initial conditions and perturbations is evaluated using Derrida coefficients. The generative morphological diversity shows how many different neighborhood states are produced from a single nonquiescent cell. We found no significant correlation between sensitivity and compressibility. There is a substantial correlation between generative diversity and compressibility. Using sensitivity, compressibility and diversity, we uncover and characterize novel groupings of rules.
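To make the compressibility measure concrete, the sketch below evolves an elementary cellular automaton from a single nonquiescent cell and scores the final configuration by its LZ78 phrase count (a rough proxy for LZ-compressibility); the rules, lattice width, and step count are arbitrary illustrative choices, not the paper's experimental setup.

```python
# Sketch: evolve an ECA and score the compressibility of its final configuration
# by the number of LZ78 phrases (fewer phrases = more compressible).
def eca_step(cells, rule):
    """One synchronous update of an ECA with periodic boundaries."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def lz78_phrase_count(bits):
    """Greedy LZ78 parsing of a 0/1 sequence; return the phrase count."""
    dictionary = {""}
    phrase, count = "", 0
    for b in bits:
        phrase += str(b)
        if phrase not in dictionary:
            dictionary.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

if __name__ == "__main__":
    width, steps = 128, 128
    start = [0] * width
    start[width // 2] = 1                 # single nonquiescent cell
    for rule in (30, 110, 250):           # illustrative rules
        row = start[:]
        for _ in range(steps):
            row = eca_step(row, rule)
        print("rule", rule, "LZ78 phrases:", lz78_phrase_count(row))
```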
NASA Astrophysics Data System (ADS)
Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.
2018-01-01
To determine the relationship between the compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared, and their compressive and flexural strengths were determined by mechanical property tests. After excluding abnormal values using boxplots, the results show that the compressive strength results were normal, but there were two mild outliers in the 7-day flexural strength test. The compressive strength and flexural strength were fitted in SPSS, and six regression models were obtained. The relationship between compressive strength and flexural strength is best expressed by the cubic curve model, with a correlation coefficient of 0.842.
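The curve-fitting step described above can be reproduced in outline with any least-squares tool; the sketch below fits a cubic model with NumPy on made-up strength values (the paper's 20 measured groups are not reproduced) and reports the correlation between fitted and measured flexural strengths.

```python
# Sketch: cubic fit of flexural strength (y) against compressive strength (x).
# The data are hypothetical placeholders, not the study's measurements.
import numpy as np

x = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0])   # compressive strength, MPa (hypothetical)
y = np.array([4.1, 4.8, 5.3, 5.9, 6.2, 6.6, 6.9])          # flexural strength, MPa (hypothetical)

coeffs = np.polyfit(x, y, deg=3)          # cubic model: y = a*x^3 + b*x^2 + c*x + d
y_hat = np.polyval(coeffs, x)

r = np.corrcoef(y, y_hat)[0, 1]           # correlation between measured and fitted values
print("cubic coefficients:", coeffs)
print("R =", round(r, 3))
```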
Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidovits, Seth; Fisch, Nathaniel J.
2016-11-14
Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.
A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs
Zheng, Yu; Yang, Yang; Chen, Wu
2017-01-01
In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, range compression is first carried out within each azimuth bin by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm. PMID:28672830
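The sketch below illustrates the general correlate-then-equalize pattern described above with synthetic signals: circular cross-correlation of a noisy, delayed copy against a pseudo-random reference via FFTs, followed by a Hamming taper in the frequency domain as a crude stand-in for spectrum equalization. It is not the paper's algorithm and uses no real GNSS ranging codes.

```python
# Sketch of correlation-based range compression with frequency-domain tapering.
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 1024, 200
direct = np.sign(rng.standard_normal(n))                                 # pseudo-random reference code
reflected = np.roll(direct, true_delay) + 0.5 * rng.standard_normal(n)   # delayed, noisy echo

# Range compression: circular cross-correlation computed via FFTs.
spectrum = np.fft.fft(reflected) * np.conj(np.fft.fft(direct))
spectrum *= np.fft.ifftshift(np.hamming(n))                              # taper centred on DC to suppress side lobes
compressed = np.fft.ifft(spectrum).real

print("estimated delay bin:", int(np.argmax(np.abs(compressed))))        # expected ~200
```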
A zero-error operational video data compression system
NASA Technical Reports Server (NTRS)
Kutz, R. L.
1973-01-01
A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
Update on mechanical cardiopulmonary resuscitation devices.
Rubertsson, Sten
2016-06-01
The aim of this review is to update and discuss the use of mechanical chest compression devices in treatment of cardiac arrest. Three recently published large multicenter randomized trials have not been able to show any improved outcome in adult out-of-hospital cardiac arrest patients when compared with manual chest compressions. Mechanical chest compression devices have been developed to better deliver uninterrupted chest compressions of good quality. Prospective large randomized studies have not been able to prove a better outcome compared to manual chest compressions; however, latest guidelines support their use when high-quality manual chest compressions cannot be delivered. Mechanical chest compressions can also be preferred during transportation, in the cath-lab and as a bridge to more invasive support like extracorporeal membrane oxygenation.
Internal combustion engine with compressed air collection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.W.
1988-08-23
This patent describes an internal combustion engine comprising cylinders respectively including a pressure port, pistons respectively movable in the cylinders through respective compression strokes, fuel injectors respectively connected to the cylinders and operative to supply, from a fuel source to the respective cylinders, a metered quantity of fuel conveyed by compressed gas in response to fuel injector operation during the compression strokes of the respective cylinders, a storage tank for accumulating and storing compressed gas, means for selectively connecting the pressure ports to the storage tank only during the compression strokes of the respective cylinders, and duct means connecting the storage tank to the fuel injectors for supplying the fuel injectors with compressed gas in response to fuel injector operation.
Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa
NASA Astrophysics Data System (ADS)
Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.
2017-06-01
Multiple reverberation compression can achieve higher pressure and higher temperature, but lower entropy. It can provide an important validation for elaborate, wider-ranging planetary models and can simulate the inertial confinement fusion capsule implosion process. In this work, we have investigated the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initial dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium was measured in the pressure-density-temperature (P-ρ-T) range of 1-150 GPa, 0.1-1.1 g cm⁻³, and 4600-24,000 K. The optical radiation emanating from the WDM helium was recorded, and the particle velocity profiles detected at the sample/window interface were obtained successfully up to the 10th compression. The optical radiation results imply that dense He becomes rather opaque after the 2nd compression, with a density of about 0.3 g cm⁻³ and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed by the particle velocity measurements. The multiple-compression technique efficiently enhanced the density and the compressibility, and our multiple compression ratios (η_i = ρ_i/ρ_0, i = 1-10) of helium are greatly improved, from 3.5 to 43, relative to the initial precompressed density (ρ_0). The relative compression ratio (η_i' = ρ_i/ρ_{i-1}) increases with pressure in the lower-density regime and decreases in the higher-density regime, with a turning point at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom increases the compressibility, while the repulsive interactions between the particles decrease it at the onset of electron excitation and ionization. In the P-ρ-T contour combining the experiments and the calculations, our multiple-compression states from insulating to semiconducting fluid (from transparent to opaque fluid) are illustrated. Our results give an elaborate validation of EOS models and have applications for planetary and stellar opaque atmospheres.
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
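The core referential idea, encoding an input genome as matches against a reference plus literal bases for the differences, can be sketched as below; this greedy k-mer anchoring on toy sequences is only an illustration, not the paper's parallel, memory-tunable implementation.

```python
# Minimal sketch of referential compression: encode the target as
# ("match", position, length) pairs against a reference plus literal bases.
K = 8  # anchor k-mer length (arbitrary)

def build_index(reference: str) -> dict:
    index = {}
    for i in range(len(reference) - K + 1):
        index.setdefault(reference[i:i + K], i)
    return index

def ref_compress(target: str, reference: str):
    index = build_index(reference)
    ops, i = [], 0
    while i < len(target):
        pos = index.get(target[i:i + K])
        if pos is None:
            ops.append(("lit", target[i]))          # mismatching base stored literally
            i += 1
        else:
            length = K                               # extend the anchor greedily
            while (i + length < len(target) and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            ops.append(("match", pos, length))
            i += length
    return ops

def ref_decompress(ops, reference: str) -> str:
    return "".join(op[1] if op[0] == "lit" else reference[op[1]:op[1] + op[2]] for op in ops)

if __name__ == "__main__":
    ref = "ACGT" * 64
    tgt = ref[:100] + "TT" + ref[100:200]            # small insertion relative to the reference
    ops = ref_compress(tgt, ref)
    assert ref_decompress(ops, ref) == tgt
    print(len(ops), "operations encode", len(tgt), "bases")
```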
Chattoraj, Sayantan; Sun, Changquan Calvin
2018-04-01
Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Compression in wearable sensor nodes: impacts of node topology.
Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther
2014-04-01
Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
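As a sketch of what the on-node encoding step amounts to, the snippet below projects a signal block onto a small number of random ±1 rows so that only the measurements need to be stored or transmitted; the sizes and the Bernoulli sensing matrix are illustrative assumptions, and reconstruction (performed off-node) is omitted.

```python
# Sketch of the sensor-node side of compressive sensing: y = Phi @ x, with
# m << n measurements retained. Sizes and matrix are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n, m = 512, 128                                  # block length, measurements (4x reduction)
phi = rng.choice([-1.0, 1.0], size=(m, n))       # Bernoulli sensing matrix (shared with the receiver)

t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)  # sparse in frequency

y = phi @ x                                      # compressive measurements to store/transmit
print("samples in:", x.size, "measurements out:", y.size)
```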
Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers.
Lee, Seong-Cheol; Oh, Joung-Hwan; Cho, Jae-Yeol
2015-03-27
In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress-strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures.
Compression of regions in the global advanced very high resolution radiometer 1-km data set
NASA Technical Reports Server (NTRS)
Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.
1994-01-01
The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.
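A toy version of the region-separation idea reads as follows: the non-land mask is compressed once, and each band stores only its land pixels so that fill values never enter the per-band compressor; zlib and random placeholder data stand in for the actual coder and AVHRR bands, so the printed sizes only illustrate the mechanism, not the ratios reported above.

```python
# Toy illustration of mask-once, land-only multiband compression.
import zlib
import numpy as np

h, w, bands = 256, 256, 10
rng = np.random.default_rng(1)

mask = np.zeros((h, w), dtype=bool)
mask[64:192, 32:160] = True                      # crude rectangular "continent" (~25% land)
fill = 255

cube = np.full((bands, h, w), fill, dtype=np.uint8)
cube[:, mask] = rng.integers(0, 200, size=(bands, int(mask.sum())), dtype=np.uint8)

mask_bytes = len(zlib.compress(np.packbits(mask).tobytes(), 9))
land_bytes = sum(len(zlib.compress(cube[b][mask].tobytes(), 9)) for b in range(bands))
naive_bytes = sum(len(zlib.compress(cube[b].tobytes(), 9)) for b in range(bands))

print("mask once + land-only bands:", mask_bytes + land_bytes, "bytes")
print("whole bands incl. fill:     ", naive_bytes, "bytes")
```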
Chaos-Based Simultaneous Compression and Encryption for Hadoop.
Usama, Muhammad; Zakaria, Nordin
2017-01-01
Data compression and encryption are key components of commonly deployed platforms such as Hadoop. Numerous data compression and encryption tools are presently available on such platforms and the tools are characteristically applied in sequence, i.e., compression followed by encryption or encryption followed by compression. This paper focuses on the open-source Hadoop framework and proposes a data storage method that efficiently couples data compression with encryption. A simultaneous compression and encryption scheme is introduced that addresses an important implementation issue of source coding based on Tent Map and Piece-wise Linear Chaotic Map (PWLM), which is the infinite precision of real numbers that result from their long products. The approach proposed here solves the implementation issue by removing fractional components that are generated by the long products of real numbers. Moreover, it incorporates a stealth key that performs a cyclic shift in PWLM without compromising compression capabilities. In addition, the proposed approach implements a masking pseudorandom keystream that enhances encryption quality. The proposed algorithm demonstrated a congruent fit within the Hadoop framework, providing robust encryption security and compression.
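The snippet below is a generic sketch of compress-then-mask with a keystream drawn from the standard piece-wise linear chaotic map; the paper's stealth-key cyclic shift, its fixed-precision fix for long products, and the Hadoop integration are not reproduced, and the seed and parameter values are arbitrary.

```python
# Generic sketch: zlib compression followed by XOR masking with a keystream
# generated by the piece-wise linear chaotic map (PWLM). Not the paper's scheme.
import zlib

def pwlcm(x: float, p: float) -> float:
    """PWLM on [0, 1] with control parameter 0 < p < 0.5 (symmetric about 0.5);
    iterates are assumed to avoid the trivial fixed points."""
    if x > 0.5:
        x = 1.0 - x
    return x / p if x < p else (x - p) / (0.5 - p)

def keystream(n: int, x0: float = 0.37, p: float = 0.23) -> bytes:
    x, out = x0, bytearray()
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return bytes(out)

def compress_then_mask(data: bytes) -> bytes:
    compressed = zlib.compress(data, 9)
    return bytes(c ^ k for c, k in zip(compressed, keystream(len(compressed))))

def unmask_then_decompress(blob: bytes) -> bytes:
    return zlib.decompress(bytes(c ^ k for c, k in zip(blob, keystream(len(blob)))))

if __name__ == "__main__":
    msg = b"simultaneous compression and encryption " * 20
    assert unmask_then_decompress(compress_then_mask(msg)) == msg
    print("round trip ok")
```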
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of the discrete fractional random transform and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
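Only the compression half of the scheme is sketched below: a 2-D DCT followed by keeping a low-frequency corner of the spectrum (the block that would then be encrypted); the chaos-controlled discrete fractional random transform is not reproduced, and the smooth test image is synthetic.

```python
# Sketch of DCT "spectrum cutting": keep a low-frequency corner and reconstruct.
import numpy as np
from scipy.fft import dctn, idctn

x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
image = 128 + 100 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # smooth synthetic image

spectrum = dctn(image, norm="ortho")
keep = 32                                   # retain a 32x32 low-frequency corner (16:1 cut)
cut = spectrum[:keep, :keep]                # this block is what would be encrypted/stored

padded = np.zeros_like(spectrum)
padded[:keep, :keep] = cut
recon = idctn(padded, norm="ortho")
print("max reconstruction error:", float(np.max(np.abs(recon - image))))
```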
MP3 compression of Doppler ultrasound signals.
Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W
2003-01-01
The effect of lossy, MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of 11, 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression on digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
A New Compression Method for FITS Tables
NASA Technical Reports Server (NTRS)
Pence, William; Seaman, Rob; White, Richard L.
2010-01-01
As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
Monitoring compaction and compressibility changes in offshore chalk reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, G.; Hardy, R.; Eltvik, P.
1994-03-01
Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. Installing accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility -- measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable for any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence was expected.
Scan-Line Methods in Spatial Data Systems
1990-09-04
algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding [13], in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
Data Compression Using the Dictionary Approach Algorithm
1990-12-01
Compression Technique: The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified ... June 1984. 12. Witten H. I., Neal M. R. and Cleary G. J., Arithmetic Coding for Data Compression, Communications of the ACM, June 1987. 13. Ziv I. and Lempel A. ...
Adult-like processing of time-compressed speech by newborns: A NIRS study.
Issard, Cécile; Gervain, Judit
2017-06-01
Humans can adapt to a wide range of variations in the speech signal, maintaining an invariant representation of the linguistic information it contains. Among them, adaptation to rapid or time-compressed speech has been well studied in adults, but the developmental origin of this capacity remains unknown. Does this ability depend on experience with speech (if yes, as heard in utero or as heard postnatally), with sounds in general or is it experience-independent? Using near-infrared spectroscopy, we show that the newborn brain can discriminate between three different compression rates: normal, i.e. 100% of the original duration, moderately compressed, i.e. 60% of original duration and highly compressed, i.e. 30% of original duration. Even more interestingly, responses to normal and moderately compressed speech are similar, showing a canonical hemodynamic response in the left temporoparietal, right frontal and right temporal cortex, while responses to highly compressed speech are inverted, showing a decrease in oxyhemoglobin concentration. These results mirror those found in adults, who readily adapt to moderately compressed, but not to highly compressed speech, showing that adaptation to time-compressed speech requires little or no experience with speech, and happens at an auditory, and not at a more abstract linguistic level. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
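A greatly simplified sketch of the fixed-rate idea follows: each block of four values shares one exponent and stores a fixed number of quantized bits per value, so any block can be decoded independently; the lifted orthogonal block transform, embedded coding, and bit packing of the actual scheme are omitted.

```python
# Simplified fixed-rate block compression of floats: one shared exponent plus
# BITS quantized bits per value in each block of BLOCK values.
import math

BLOCK, BITS = 4, 12                        # illustrative block size and bit budget

def encode_block(values):
    e = max((math.frexp(v)[1] for v in values if v != 0.0), default=0)
    scale = 2.0 ** (BITS - 1 - e)
    q = [max(-(1 << (BITS - 1)), min((1 << (BITS - 1)) - 1, int(round(v * scale))))
         for v in values]
    return e, q                             # fixed size: one exponent + BLOCK x BITS bits

def decode_block(block):
    e, q = block
    scale = 2.0 ** (BITS - 1 - e)
    return [v / scale for v in q]

if __name__ == "__main__":
    data = [0.001 * i * i - 0.5 for i in range(16)]
    blocks = [encode_block(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
    decoded = [v for b in blocks for v in decode_block(b)]
    print("max absolute error:", max(abs(a - b) for a, b in zip(data, decoded)))
```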
Peng, Shoujian; Fang, Zhiming; Shen, Jian; Xu, Jiang; Wang, Geoff
2017-10-30
The cleat compressibility of coal is a key parameter that is extensively used in modeling coal reservoir permeability for Coal Bed Methane (CBM) recovery. Cleat compressibility is often determined from permeability measurements made at different confining pressures but with a constant pore pressure. Hence, this approach ignores the sorption strain effects on the cleat compressibility. Using the transient pulse decay (TPD) technique, this study presents the results of a laboratory characterization program using coal cores drilled in different bedding directions to estimate gas permeability and coal cleat compressibility under different pore pressures while maintaining constant effective stress. Cleat compressibility was determined from permeability and sorption strain measurements made at different pore pressures at constant effective stress. Results show that the cleat compressibility of coal increases slightly with increasing pore pressure. Moreover, the cleat compressibility of Sample P (representing the face cleats in coal) is larger than that of Sample C (representing the butt cleats in coal). This result suggests that cleat compressibility should not be regarded as constant in the modeling of CBM recovery. Furthermore, the compressibility of the face cleats is considerably sensitive to sorption-induced swelling/shrinkage and has a significant effect on coal permeability.
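For contrast with the constant-effective-stress approach above, the conventional stress-based estimate backs cleat compressibility out of two permeability measurements via the commonly used matchstick-model relation k = k0 * exp(-3 * c_f * (sigma - sigma0)); the sketch below applies that relation to hypothetical numbers.

```python
# Stress-based estimate of cleat compressibility from two permeability points,
# using the matchstick relation k = k0 * exp(-3 * c_f * (sigma - sigma0)).
# Numbers are hypothetical; this is the approach the study contrasts with.
import math

def cleat_compressibility(k0, k1, sigma0, sigma1):
    """c_f (1/MPa) from permeabilities (any consistent unit) at two effective stresses (MPa)."""
    return math.log(k0 / k1) / (3.0 * (sigma1 - sigma0))

# Hypothetical data: permeability drops from 1.5 mD to 0.6 mD as effective
# stress rises from 2 MPa to 6 MPa.
cf = cleat_compressibility(1.5, 0.6, 2.0, 6.0)
print(f"cleat compressibility ~ {cf:.3f} 1/MPa")
```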
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm namely LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to fast-map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
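The stream-splitting step can be sketched as below: records are parsed into metadata, read, and quality streams that are compressed separately (here simply with LZMA); LW-FQZip's reference mapping and its incremental and run-length-limited codecs are not reproduced, and the records are made up.

```python
# Sketch of FASTQ stream splitting followed by per-stream LZMA compression.
import lzma

fastq = (
    "@read1 lane=1\nACGTACGTACGT\n+\nIIIIIIIIIHHH\n"
    "@read2 lane=1\nACGTACGTTCGT\n+\nIIIIIIIIIGGG\n"
) * 200                                   # toy records, not real sequencing data

def split_streams(text: str):
    """Return (metadata, reads, qualities) streams from 4-line FASTQ records."""
    meta, reads, quals = [], [], []
    lines = text.strip().split("\n")
    for i in range(0, len(lines), 4):
        meta.append(lines[i])
        reads.append(lines[i + 1])
        quals.append(lines[i + 3])
    return "\n".join(meta), "\n".join(reads), "\n".join(quals)

meta, reads, quals = split_streams(fastq)
for name, stream in (("metadata", meta), ("reads", reads), ("qualities", quals)):
    print(name, len(stream), "->", len(lzma.compress(stream.encode())), "bytes")
```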
2014-01-01
Background: According to the guidelines for cardiopulmonary resuscitation (CPR), the rotation time for chest compression should be about 2 min. The quality of chest compressions is related to the physical fitness of the rescuer, but this was not considered when determining rotation time. The present study aimed to clarify associations between body weight and the quality of chest compression and physical fatigue during CPR performed by 18 registered nurses (10 male and 8 female) assigned to light and heavy groups according to the average weight for each sex in Japan. Methods: Five-minute chest compressions were then performed on a manikin that was placed on the floor. Measurement parameters were compression depth, heart rate, oxygen uptake, integrated electromyography signals, and rating of perceived exertion. Compression depth was evaluated according to the ratio (%) of adequate compressions (at least 5 cm deep). Results: The ratio of adequate compressions decreased significantly over time in the light group. Values for heart rate, oxygen uptake, muscle activity defined as integrated electromyography signals, and rating of perceived exertion were significantly higher for the light group than for the heavy group. Conclusion: Chest compression caused increased fatigue among the light group, which consequently resulted in a gradual fall in the quality of chest compression. These results suggested that individuals with a lower body weight should rotate at 1-min intervals to maintain high quality CPR and thus improve the survival rates and neurological outcomes of victims of cardiac arrest. PMID:24957919
Kelly, Terri-Ann N; Roach, Brendan L; Weidner, Zachary D; Mackenzie-Smith, Charles R; O'Connell, Grace D; Lima, Eric G; Stoker, Aaron M; Cook, James L; Ateshian, Gerard A; Hung, Clark T
2013-07-26
The tensile modulus of articular cartilage is much larger than its compressive modulus. This tension-compression nonlinearity enhances interstitial fluid pressurization and decreases the frictional coefficient. The current set of studies examines the tensile and compressive properties of cylindrical chondrocyte-seeded agarose constructs over different developmental stages through a novel method that combines osmotic loading, video microscopy, and uniaxial unconfined compression testing. This method was previously used to examine tension-compression nonlinearity in native cartilage. Engineered cartilage, cultured under free-swelling (FS) or dynamically loaded (DL) conditions, was tested in unconfined compression in hypertonic and hypotonic salt solutions. The apparent equilibrium modulus decreased with increasing salt concentration, indicating that increasing the bath solution osmolarity shielded the fixed charges within the tissue, shifting the measured moduli along the tension-compression curve and revealing the intrinsic properties of the tissue. With this method, we were able to measure the tensile (401±83kPa for FS and 678±473kPa for DL) and compressive (161±33kPa for FS and 348±203kPa for DL) moduli of the same engineered cartilage specimens. These moduli are comparable to values obtained from traditional methods, validating this technique for measuring the tensile and compressive properties of hydrogel-based constructs. This study shows that engineered cartilage exhibits tension-compression nonlinearity reminiscent of the native tissue, and that dynamic deformational loading can yield significantly higher tensile properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
Tomographic Image Compression Using Multidimensional Transforms.
ERIC Educational Resources Information Center
Villasenor, John D.
1994-01-01
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
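The core idea, applying a transform across all available dimensions of the data rather than slice by slice, can be illustrated with a separable 3D DCT and simple coefficient thresholding; this is an illustrative sketch, not the coder described in the article, and the synthetic volume stands in for real PET/MR data.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_volume(volume, keep=0.05):
    """Keep only the largest `keep` fraction of 3D DCT coefficients."""
    coeffs = dctn(volume, norm="ortho")            # transform across all dimensions at once
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def reconstruct_volume(sparse_coeffs):
    return idctn(sparse_coeffs, norm="ortho")

# Toy usage on a smooth synthetic volume.
vol = np.fromfunction(lambda z, y, x: np.sin(x / 8.0) + np.cos(y / 6.0) + z / 32.0,
                      (32, 64, 64))
rec = reconstruct_volume(compress_volume(vol, keep=0.05))
print("RMS error:", np.sqrt(np.mean((vol - rec) ** 2)))
```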
Production and Assessment of Damaged High Energy Propellant Samples,
1980-05-08
[Excerpt from the report's list of figures] Figure 3: Longitudinal velocity one hour after compressing versus applied engineering compressive strain for propellant samples (nominal 40 mm diameter x 13 mm high).
Gehrig, Nicolas; Dragotti, Pier Luigi
2009-03-01
In this paper, we study the sampling and the distributed compression of the data acquired by a camera sensor network. The effective design of these sampling and compression schemes requires, however, the understanding of the structure of the acquired data. To this end, we show that the a priori knowledge of the configuration of the camera sensor network can lead to an effective estimation of such structure and to the design of effective distributed compression algorithms. For idealized scenarios, we derive the fundamental performance bounds of a camera sensor network and clarify the connection between sampling and distributed compression. We then present a distributed compression algorithm that takes advantage of the structure of the data and that outperforms independent compression algorithms on real multiview images.
Park, Sang O; Hong, Chong Kun; Shin, Dong Hyuk; Lee, Jun Ho; Hwang, Seong Youn
2013-08-01
Untrained laypersons should perform compression-only cardiopulmonary resuscitation (COCPR) under a dispatcher's guidance, but the quality of the chest compressions may be suboptimal. We hypothesised that providing metronome sounds via a phone speaker may improve the quality of chest compressions during dispatcher-assisted COCPR (DA-COCPR). Untrained laypersons were allocated to either the metronome sound-guided group (MG), who performed DA-COCPR with metronome sounds (110 ticks/min), or the control group (CG), who performed conventional DA-COCPR. The participants of each group performed DA-COCPR for 4 min using a manikin with Skill-Reporter, and the data regarding chest compression quality were collected. The data from 33 cases of DA-COCPR in the MG and 34 cases in the CG were compared. The MG showed a faster compression rate than the CG (111.9 vs 96.7/min; p=0.018). A significantly higher proportion of subjects in the MG performed the DA-COCPR with an accurate chest compression rate (100-120/min) compared with the subjects in the CG (32/33 (97.0%) vs 5/34 (14.7%); p<0.0001). The mean compression depth was not different between the MG and the CG (45.9 vs 46.8 mm; p=0.692). However, a higher proportion of subjects in the MG performed shallow compressions (compression depth <38 mm) compared with subjects in the CG (median % was 69.2 vs 15.7; p=0.035). Metronome sound guidance during DA-COCPR for the untrained bystanders improved the chest compression rates, but was associated more with shallow compressions than the conventional DA-COCPR in a manikin model.
HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads
Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila
2014-01-01
Background and objective Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexactly mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme that gradually shortens the read length. For the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. Lossless compression can be produced by setting k (the number of clusters) to the number of different quality values. Results The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings based on experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ under a General Public License (GPL). Limitation Our method requires different reference genomes and prolongs the execution time because of the additional alignments. Conclusions The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
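A minimal sketch of the lossy quality-value scheme described above: cluster the quality scores with k-means and replace each score by its cluster centroid, so that setting k to the number of distinct values recovers lossless behaviour. This is illustrative only and is not the HUGO implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_qualities(qualities, k):
    """Replace each Phred quality value by its k-means cluster centroid.

    qualities : 1-D array of integer quality values (e.g. 0-41).
    k         : number of clusters; k == number of distinct values is lossless.
    """
    values = np.asarray(qualities, dtype=float).reshape(-1, 1)
    k = min(k, len(np.unique(values)))          # cannot have more clusters than values
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(values)
    centroids = model.cluster_centers_.ravel()
    return np.rint(centroids[model.labels_]).astype(int)

# Example: map a wide range of quality levels onto 4 representatives.
quals = np.random.randint(2, 40, size=10_000)
lossy = quantize_qualities(quals, k=4)
print("max absolute error:", np.max(np.abs(lossy - quals)))
```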
Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina
2010-01-01
Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings and with original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, with Profore four-layer compression, or with class II compression stockings. In the case of multi-layer compression, a compression ensuring 40 mmHg pressure at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes in ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions Systematic compression therapy, applied with an initial pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941
Corneal Staining and Hot Black Tea Compresses.
Achiron, Asaf; Birger, Yael; Karmona, Lily; Avizemer, Haggay; Bartov, Elisha; Rahamim, Yocheved; Burgansky-Eliash, Zvia
2017-03-01
Warm compresses are widely touted as an effective treatment for ocular surface disorders. Black tea compresses are a common household remedy, although there is no evidence in the medical literature proving their effect, and their use may lead to harmful side effects. To describe a case in which the application of black tea to an eye with a corneal epithelial defect led to anterior stromal discoloration; evaluate the prevalence of hot tea compress use; and analyze, in vitro, the discoloring effect of tea compresses on a model of a porcine eye. We assessed the prevalence of hot tea compresses in our community and explored the effect of warm tea compresses on the cornea when the corneal epithelium's integrity is disrupted. An in vitro experiment in which warm compresses were applied to 18 fresh porcine eyes was performed. In half the eyes a corneal epithelial defect was created and in the other half the epithelium was intact. Both groups were divided into subgroups of three eyes each and treated experimentally with warm black tea compresses, pure water, or chamomile tea compresses. We also performed a study in patients with a history of tea compress use. Brown discoloration of the anterior stroma appeared only in the porcine corneas that had an epithelial defect and were treated with black tea compresses. No other eyes from any group showed discoloration. Of the patients included in our survey, approximately 50% had applied some sort of tea ingredient as a solid compress or as the hot liquid. An intact corneal epithelium serves as an effective barrier against tea-stain discoloration. Only when this layer is disrupted does the damage occur. Therefore, direct application of black tea (Camellia sinensis) to a cornea with an epithelial defect should be avoided.
Efficacy of compression of different capacitance beds in the amelioration of orthostatic hypotension
NASA Technical Reports Server (NTRS)
Denq, J. C.; Opfer-Gehrking, T. L.; Giuliani, M.; Felten, J.; Convertino, V. A.; Low, P. A.
1997-01-01
Orthostatic hypotension (OH) is the most disabling and serious manifestation of adrenergic failure, occurring in the autonomic neuropathies, pure autonomic failure (PAF) and multiple system atrophy (MSA). No specific treatment is currently available for most etiologies of OH. A reduction in venous capacity, secondary to some physical counter maneuvers (e.g., squatting or leg crossing), or the use of compressive garments, can ameliorate OH. However, there is little information on the differential efficacy, or the mechanisms of improvement, engendered by compression of specific capacitance beds. We therefore evaluated the efficacy of compression of specific compartments (calves, thighs, low abdomen, calves and thighs, and all compartments combined), using a modified antigravity suit, on the end-points of orthostatic blood pressure, and symptoms of orthostatic intolerance. Fourteen patients (PAF, n = 9; MSA, n = 3; diabetic autonomic neuropathy, n = 2; five males and nine females) with clinical OH were studied. The mean age was 62 years (range 31-78). The mean +/- SEM orthostatic systolic blood pressure when all compartments were compressed was 115.9 +/- 7.4 mmHg, significantly improved (p < 0.001) over the head-up tilt value without compression of 89.6 +/- 7.0 mmHg. The abdomen was the only single compartment whose compression significantly reduced OH (p < 0.005). There was a significant increase of peripheral resistance index (PRI) with compression of abdomen (p < 0.001) or all compartments (p < 0.001); end-diastolic index and cardiac index did not change. We conclude that denervation increases vascular capacity, and that venous compression improves OH by reducing this capacity and increasing PRI. Compression of all compartments is the most efficacious, followed by abdominal compression, whereas leg compression alone was less effective, presumably reflecting the large capacity of the abdomen relative to the legs.
Kılınçer, Abidin; Akpınar, Erhan; Erbil, Bülent; Ünal, Emre; Karaosmanoğlu, Ali Devrim; Kaynaroğlu, Volkan; Akata, Deniz; Özmen, Mustafa
2017-08-01
To determine the diagnostic accuracy of abdominal CT with compression to the right lower quadrant (RLQ) in adults with acute appendicitis. 168 patients (age range, 18-78 years) were included who underwent contrast-enhanced CT for suspected appendicitis performed either using compression to the RLQ (n = 71) or a standard protocol (n = 97). Outer diameter of the appendix, appendiceal wall thickening, luminal content and associated findings were evaluated in each patient. Kruskal-Wallis, Fisher's and Pearson's chi-squared tests were used for statistical analysis. There was no significant difference in the mean outer diameter (MOD) between compression CT scans (10.6 ± 1.9 mm) and standard protocol (11.2 ± 2.3 mm) in patients with acute appendicitis (P = 1). MOD was significantly lower in the compression group (5.2 ± 0.8 mm) compared to the standard protocol (6.5 ± 1.1 mm) (P < 0.01) in patients without appendicitis. A cut-off value of 6.75 mm for the outer diameter of the appendix was found to be 100% sensitive in the diagnosis of acute appendicitis for both groups. The specificity was higher for compression CT technique (67.7 vs. 94.9%). Normal appendix diameter was significantly smaller in the compression-CT group compared to standard-CT group, increasing diagnostic accuracy of abdominal compression CT. • Normal appendix diameter is significantly smaller in compression CT. • Compression could force contrast material to flow through the appendiceal lumen. • Compression CT may be a CT counterpart of graded compression US.
Mirza, Muzna; Brown, Todd B.; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S.
2008-01-01
Background and Objective Cardiopulmonary Resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in dispatcher-assisted CPR protocol. Methods Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to “push as hard as you can” in the simplified protocol, compared to “push down firmly 2 inches (5cm)” in MPDS. Data were recorded via a Laerdal® ResusciAnne® SkillReporter™ manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth and with total release. Results Instructions to “push as hard as you can”, compared to “push down firmly 2 inches (5cm)”, resulted in improved chest compression depth (36.4 vs 29.7 mm, p<0.0001), and improved median proportion of chest compressions done to the correct depth (32% vs <1%, p<0.0001). No significant difference in median proportion of compressions with total release (100% for both) and average compression rate (99.7 vs 97.5 per min, p<0.56) was found. Conclusions Modifying dispatcher-assisted CPR instructions by changing “push down firmly 2 inches (5cm)” to “push as hard as you can” achieved improvement in chest compression depth at no cost to total release or average chest compression rate. PMID:18635306
Compressibility of the protein-water interface
NASA Astrophysics Data System (ADS)
Persson, Filip; Halle, Bertil
2018-06-01
The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in bulk water, whereas its self part is 15%-20% lower. These large reductions are caused mainly by the proximity to the more rigid protein and are not a consequence of the perturbed water structure.
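As background for the fluctuation analysis above, the textbook relation between the isothermal compressibility and equilibrium volume fluctuations, and its split into protein self, water self and protein-water cross terms when the system volume is partitioned as V = V_P + V_W, reads as follows (standard statistical mechanics, not a formula quoted from the paper):

```latex
\kappa_T \;=\; \frac{\langle \delta V^2 \rangle}{k_B T \,\langle V \rangle},
\qquad
\langle \delta V^2 \rangle \;=\;
\underbrace{\langle \delta V_P^2 \rangle}_{\text{protein self}}
\;+\; 2\,\underbrace{\langle \delta V_P\,\delta V_W \rangle}_{\text{protein--water cross}}
\;+\; \underbrace{\langle \delta V_W^2 \rangle}_{\text{water self}},
\qquad
\delta V_i = V_i - \langle V_i \rangle .
```

The "positively cross-correlated protein-water volume fluctuations" of the abstract correspond to the cross term, while the small protein-volume fluctuations correspond to the protein self term.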
Yeung, Joyce; Davies, Robin; Gao, Fang; Perkins, Gavin D
2014-04-01
This study aims to compare the effect of three CPR prompt and feedback devices on quality of chest compressions amongst healthcare providers. A single blinded, randomised controlled trial compared a pressure sensor/metronome device (CPREzy), an accelerometer device (Phillips Q-CPR) and simple metronome on the quality of chest compressions on a manikin by trained rescuers. The primary outcome was compression depth. Secondary outcomes were compression rate, proportion of chest compressions with inadequate depth, incomplete release and user satisfaction. The pressure sensor device improved compression depth (37.24-43.64 mm, p=0.02), the accelerometer device decreased chest compression depth (37.38-33.19 mm, p=0.04) whilst the metronome had no effect (39.88 mm vs. 40.64 mm, p=0.802). Compression rate fell with all devices (pressure sensor device 114.68-98.84 min(-1), p=0.001, accelerometer 112.04-102.92 min(-1), p=0.072 and metronome 108.24 min(-1) vs. 99.36 min(-1), p=0.009). The pressure sensor feedback device reduced the proportion of compressions with inadequate depth (0.52 vs. 0.24, p=0.013) whilst the accelerometer device and metronome did not have a statistically significant effect. Incomplete release of compressions was common, but unaffected by the CPR feedback devices. Users preferred the accelerometer and metronome devices over the pressure sensor device. A post hoc study showed that de-activating the voice prompt on the accelerometer device prevented the deterioration in compression quality seen in the main study. CPR feedback devices vary in their ability to improve performance. In this study the pressure sensor device improved compression depth, whilst the accelerometer device reduced it and metronome had no effect. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Nishiyama, Chika; Iwami, Taku; Kitamura, Tetsuhisa; Ando, Masahiko; Sakamoto, Tetsuya; Marukawa, Seishiro; Kawamura, Takashi
2014-01-01
It is unclear how much the length of a cardiopulmonary resuscitation (CPR) training program can be reduced without ruining its effectiveness. The authors aimed to compare CPR skills 6 months and 1 year after training between shortened chest compression-only CPR training and conventional CPR training. Participants were randomly assigned to either the compression-only CPR group, which underwent a 45-minute training program consisting of chest compressions and automated external defibrillator (AED) use with personal training manikins, or the conventional CPR group, which underwent a 180-minute training program with chest compressions, rescue breathing, and AED use. Participants' resuscitation skills were evaluated 6 months and 1 year after the training. The primary outcome measure was the proportion of appropriate chest compressions 1 year after the training. A total of 146 persons were enrolled, and 63 (87.5%) in the compression-only CPR group and 56 (75.7%) in the conventional CPR group completed the 1-year evaluation. The compression-only CPR group was superior to the conventional CPR group regarding the proportion of appropriate chest compression (mean ± SD = 59.8% ± 40.0% vs. 46.3% ± 28.6%; p = 0.036) and the number of appropriate chest compressions (mean ± SD = 119.5 ± 80.0 vs. 77.2 ± 47.8; p = 0.001). Time without chest compression in the compression-only CPR group was significantly shorter than that in the conventional CPR group (mean ± SD = 11.8 ± 21.1 seconds vs. 52.9 ± 14.9 seconds; p < 0.001). The shortened compression-only CPR training program appears to help the general public retain CPR skills better than the conventional CPR training program. UMIN-CTR UMIN000001675. © 2013 by the Society for Academic Emergency Medicine.
Protective effect of caspase inhibition on compression-induced muscle damage
Teng, Bee T; Tam, Eric W; Benzie, Iris F; Siu, Parco M
2011-01-01
Abstract There are currently no effective therapies for treating pressure-induced deep tissue injury. This study tested the efficacy of pharmacological inhibition of caspase in preventing muscle damage following sustained moderate compression. Adult Sprague–Dawley rats were subjected to prolonged moderate compression. Static pressure of 100 mmHg compression was applied to an area of 1.5 cm2 in the tibialis region of the right limb of the rats for 6 h each day for two consecutive days. The left uncompressed limb served as intra-animal control. Rats were randomized to receive either vehicle (DMSO) as control treatment (n = 8) or 6 mg kg−1 of caspase inhibitor (z-VAD-fmk; n = 8) prior to the 6 h compression on the two consecutive days. Muscle tissues directly underneath the compression region of the compressed limb and the same region of control limb were harvested after the compression procedure. Histological examination and biochemical/molecular measurement of apoptosis and autophagy were performed. Caspase inhibition was effective in alleviating the compression-induced pathohistology of muscle. The increases in caspase-3 protease activity, TUNEL index, apoptotic DNA fragmentation and pro-apoptotic factors (Bax, p53 and EndoG) and the decreases in anti-apoptotic factors (XIAP and HSP70) observed in compressed muscle of DMSO-treated animals were not found in animals treated with caspase inhibitor. The mRNA content of autophagic factors (Beclin-1, Atg5 and Atg12) and the protein content of LC3, FoxO3 and phospho-FoxO3 that were down-regulated in compressed muscle of DMSO-treated animals were all maintained at their basal level in the caspase inhibitor treated animals. Our data provide evidence that caspase inhibition attenuates compression-induced muscle apoptosis and maintains the basal autophagy level. These findings demonstrate that pharmacological inhibition of caspase/apoptosis is effective in alleviating muscle damage as induced by prolonged compression. PMID:21540338
Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying
2010-01-01
This study presents a finite element based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773
Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying
2010-07-21
This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.
A compressibility correction of the pressure strain correlation model in turbulent flow
NASA Astrophysics Data System (ADS)
Klifi, Hechmi; Lili, Taieb
2013-07-01
This paper is devoted to second-order closure for compressible turbulent flows, with special attention paid to modeling the pressure-strain correlation appearing in the Reynolds stress equation. This term appears to be the main one responsible for the changes in turbulence structure that arise from structural compressibility effects. Following the analysis and DNS results of Simone et al. and Sarkar, the compressibility effects on homogeneous turbulent shear flow are parameterized by the gradient Mach number. Several experimental and DNS results suggest that the convective Mach number is appropriate for studying compressibility effects on mixing layers. The extension of the LRR model recently proposed by Marzougui, Khlifi and Lili for the pressure-strain correlation gives results that disagree with the DNS results of Sarkar for high-speed shear flows. This extension is revised here to derive a turbulence model for the pressure-strain correlation in which compressibility enters through the turbulent Mach number, the gradient Mach number and the convective Mach number. The behavior of the proposed model is compared to the compressible model of Adumitroiae et al. for the pressure-strain correlation in two turbulent compressible flows: homogeneous shear flow and mixing layers. In compressible homogeneous shear flows, the predicted results are compared with the DNS data of Simone et al. and those of Sarkar. For low compressibility the two compressible models are similar, but they become substantially different at high compressibility. The proposed model shows good agreement with all cases of the DNS results, whereas that of Adumitroiae et al. does not reflect any effect of a change in the initial value of the gradient Mach number on the Reynolds stress anisotropy. The models are also used to simulate compressible mixing layers. Comparison of our predictions with those of Adumitroiae et al. and with the experimental results of Goebel et al. shows good qualitative agreement.
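For orientation, commonly used definitions of the three compressibility parameters named above are (standard forms from the compressible-turbulence literature; the normalizations used in the paper may differ):

```latex
M_t = \frac{\sqrt{2k}}{a}, \qquad
M_g = \frac{S\,\ell}{a}, \qquad
M_c = \frac{U_1 - U_2}{a_1 + a_2},
```

where k is the turbulent kinetic energy, a the speed of sound, S the mean shear rate, ℓ a characteristic (integral) length scale of the turbulence, and U_1, U_2, a_1, a_2 the free-stream velocities and sound speeds of the two streams of a mixing layer.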
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
DEPARTMENT OF ENERGY. Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops. AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2016-12-15
Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format, and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
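The design goal of compressing data in independently decodable units, so that a region of interest can be fetched without decompressing the whole file, can be illustrated with a generic block scheme (unrelated to the actual CSAM structures; the block size and zlib backend are arbitrary choices):

```python
import zlib

BLOCK_RECORDS = 10_000   # records per independently compressed block (illustrative)

def compress_blocks(records):
    """Compress a list of text records into independently decodable zlib blocks."""
    blocks, index = [], []
    for start in range(0, len(records), BLOCK_RECORDS):
        chunk = "\n".join(records[start:start + BLOCK_RECORDS]).encode()
        blocks.append(zlib.compress(chunk, 9))
        index.append(start)           # first record number held in this block
    return blocks, index

def fetch_record(blocks, index, record_no):
    """Decompress only the single block containing `record_no`."""
    block_id = record_no // BLOCK_RECORDS
    chunk = zlib.decompress(blocks[block_id]).decode().split("\n")
    return chunk[record_no - index[block_id]]
```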
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; ...
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
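The workflow of compressing state after each time step and judging the result with a physics-motivated metric rather than a signal-processing one can be mimicked as follows; the uniform quantizer, the toy density field and the mass-conservation check are illustrative stand-ins for the codes and metrics actually studied.

```python
import numpy as np

def lossy_quantize(field, bits=8):
    """Uniformly quantize a floating-point field to `bits` bits per value."""
    lo, hi = field.min(), field.max()
    if hi == lo:                                   # constant field compresses exactly
        return field.copy()
    levels = 2 ** bits - 1
    codes = np.rint((field - lo) / (hi - lo) * levels)   # integer codes a coder would store
    return codes * (hi - lo) / levels + lo               # dequantized field

density = np.random.rand(64, 64, 64) + 1.0         # toy density field
approx = lossy_quantize(density, bits=8)

# Physics-motivated check: relative change in total mass (unit cell volume assumed).
mass_error = abs(approx.sum() - density.sum()) / density.sum()
print(f"8 bits/value, relative mass error = {mass_error:.2e}")
```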
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
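A stripped-down stand-in for the transform-plus-quantization pipeline described above: 8x8 discrete cosine transform blocks, a uniform scalar quantizer, and a root-mean-square error check in gray levels. The entropy coding stage and the Hadamard-transform variant are omitted, and the random test image is a placeholder for the project imagery.

```python
import numpy as np
from scipy.fft import dctn, idctn

def codec_8x8(image, q_step=8.0):
    """Blockwise 8x8 DCT, uniform quantization, and reconstruction."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = image[y:y + 8, x:x + 8].astype(float)
            coeffs = np.rint(dctn(block, norm="ortho") / q_step)   # quantized coefficients
            out[y:y + 8, x:x + 8] = idctn(coeffs * q_step, norm="ortho")
    return np.clip(np.rint(out), 0, 255)

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in for a test image
rec = codec_8x8(img, q_step=8.0)
rms = np.sqrt(np.mean((rec.astype(float) - img) ** 2))
print(f"RMS error: {rms:.2f} gray levels")
```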
Accidental fatal lung injury by compressed air: a case report.
Rayamane, Anand Parashuram; Pradeepkumar, M V
2015-03-01
Compressed air is used extensively as a source of energy in industry and in daily life. A variety of fatal injuries are caused by improper and ignorant use of compressed air equipment. Many types of injuries due to compressed air are reported in the literature, such as colorectal injury, orbital injury, surgical emphysema, and so on. Most of these injuries are accidental in nature. It is documented that a pressure of 40 pounds per square inch can cause fatal injuries to the ear, eyes, lungs, stomach, and intestine. The openings of the body are vulnerable to injury by compressed air. Death due to compressed air injuries is rarely reported, and many cases are treated successfully by conservative or surgical management. An extensive survey of the literature revealed no reports of fatal injury to the upper respiratory tract and lungs caused by compressed air. Here, we report a fatal accident in which death followed the insertion of a compressed air pipe into the mouth. The postmortem findings are corroborated by the history and discussed in detail.
Tan, E S; Mat Jais, I S; Abdul Rahim, S; Tay, S C
2018-01-01
We investigated the effect of an interfragmentary gap on the final compression force using the Acutrak 2 Mini headless compression screw (length 26 mm) (Acumed, Hillsboro, OR, USA). Two blocks of solid rigid polyurethane foam in a custom jig were separated by spacers of varying thickness (1.0, 1.5, 2.0 and 2.5 mm) to simulate an interfragmentary gap. The spacers were removed before full insertion of the screw and the compression force was measured when the screw was buried 2 mm below the surface of the upper block. Gaps of 1.5 mm and 2.0 mm resulted in significantly decreased compression forces, whereas there was no significant decrease in compression force with a gap of 1 mm. An interfragmentary gap of 2.5 mm did not result in any contact between blocks. We conclude that an increased interfragmentary gap leads to decreased compression force with this screw, which may have implications on fracture healing.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
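The "lossy plus residual coding" principle with a user-specified maximum absolute error can be shown generically; the sketch below is not the matrix/tensor coder of the paper, and the moving-average "lossy layer" is a stand-in for any decomposition-based approximation. Quantizing the residual with step 2e and adding it back bounds the reconstruction error by e.

```python
import numpy as np

def near_lossless(signal, lossy_approx, max_abs_err=2.0):
    """Lossy-plus-residual coding with a guaranteed error bound.

    signal       : original samples
    lossy_approx : output of any lossy layer (e.g. a low-rank decomposition)
    max_abs_err  : user-specified bound e on |original - reconstruction|
    """
    residual = signal - lossy_approx
    step = 2.0 * max_abs_err
    codes = np.rint(residual / step)          # integers handed to the residual-layer coder
    reconstruction = lossy_approx + codes * step
    return codes, reconstruction

x = np.cumsum(np.random.randn(4096))                       # toy EEG-like channel
smooth = np.convolve(x, np.ones(32) / 32, mode="same")     # crude "lossy layer"
codes, rec = near_lossless(x, smooth, max_abs_err=2.0)
assert np.max(np.abs(x - rec)) <= 2.0                      # error bound holds by construction
```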
In situ X-Ray Diffraction of Shock-Compressed Fused Silica
NASA Astrophysics Data System (ADS)
Tracy, Sally June; Turneaure, Stefan J.; Duffy, Thomas S.
2018-03-01
Because of its widespread applications in materials science and geophysics, SiO2 has been extensively examined under shock compression. Both quartz and fused silica transform through a so-called "mixed-phase region" to a dense, low compressibility high-pressure phase. For decades, the nature of this phase has been a subject of debate. Proposed structures include crystalline stishovite, another high-pressure crystalline phase, or a dense amorphous phase. Here we use plate-impact experiments and pulsed synchrotron x-ray diffraction to examine the structure of fused silica shock compressed to 63 GPa. In contrast to recent laser-driven compression experiments, we find that fused silica adopts a dense amorphous structure at 34 GPa and below. When compressed above 34 GPa, fused silica transforms to untextured polycrystalline stishovite. Our results can explain previously ambiguous features of the shock-compression behavior of fused silica and are consistent with recent molecular dynamics simulations. Stishovite grain sizes are estimated to be ˜5 - 30 nm for compression over a few hundred nanosecond time scale.
Prediction of compressibility parameters of the soils using artificial neural network.
Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan
2016-01-01
The compression index and the recompression index are among the important compressibility parameters needed for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out a laboratory oedometer test on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The proposed ANN model is successful in predicting the compression index; however, the predicted recompression index values are less satisfactory than those of the compression index.
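A minimal stand-in for the combined network described above, assuming the same four inputs (natural water content, initial void ratio, liquid limit, plasticity index) and two outputs (compression index Cc and recompression index Cr). The architecture, the synthetic training data and the correlation used to generate placeholder targets are assumptions, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: water content (%), initial void ratio, liquid limit (%), plasticity index (%)
X = np.random.uniform([15, 0.5, 25, 5], [60, 1.6, 80, 40], size=(200, 4))  # synthetic samples
# Placeholder targets built from a classical Cc correlation and a rough Cr/Cc ratio.
cc = 0.009 * (X[:, 2] - 10.0)
y = np.column_stack([cc, 0.15 * cc])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[30.0, 0.9, 45.0, 20.0]]))   # predicted [Cc, Cr] for one soil sample
```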
The Basic Principles and Methods of the System Approach to Compression of Telemetry Data
NASA Astrophysics Data System (ADS)
Levenets, A. V.
2018-01-01
The compression of measurement data remains an urgent task for information-measurement systems. This paper proposes the basic principles necessary for designing highly effective systems for the compression of telemetric information. The basis of the proposed principles is the representation of a telemetric frame as a single information space in which existing correlations can be found. Data transformation methods and compression algorithms realizing the proposed principles are described. The compression ratio of the proposed algorithm is about 1.8 times higher than that of a classic algorithm. The results of the study of these methods and algorithms thus show their good prospects.
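One concrete way to exploit the correlations inside a telemetric frame, in the spirit of the principles described (though not the authors' specific transformations), is to difference each frame against the previous one and run-length encode the mostly unchanged result:

```python
import itertools
import numpy as np

def delta_rle_encode(frames):
    """Difference consecutive telemetry frames and run-length encode the result.

    frames : 2-D integer array, one row per telemetry frame.
    Returns a list of (value, run_length) pairs per frame.
    """
    prev = np.zeros(frames.shape[1], dtype=frames.dtype)
    encoded = []
    for frame in frames:
        delta = frame - prev                      # channels that did not change become 0
        runs = [(int(v), len(list(g))) for v, g in itertools.groupby(delta)]
        encoded.append(runs)
        prev = frame
    return encoded

# Toy frames: 64 slowly varying channels sampled 100 times.
frames = np.cumsum(np.random.randint(0, 2, size=(100, 64)), axis=0)
enc = delta_rle_encode(frames)
print("run-length pairs in first frame:", len(enc[0]))
```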
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
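A rough approximation of the pipeline in the patent abstract (decimate, compress with JPEG, transmit, decompress, interpolate back to size, sharpen edges) using Pillow; the resampling filters, JPEG quality and unsharp-mask parameters below are assumptions for illustration, not values from the patent.

```python
import io
from PIL import Image, ImageFilter

def send_side(image, factor=2, quality=75):
    """Decimate in both dimensions, then JPEG-compress the reduced image."""
    small = image.resize((image.width // factor, image.height // factor), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()                 # bytes sent over the limited-bandwidth link

def receive_side(jpeg_bytes, size):
    """Decompress, interpolate back to the original size, and sharpen edges."""
    small = Image.open(io.BytesIO(jpeg_bytes))
    full = small.resize(size, Image.BICUBIC)
    return full.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

original = Image.new("RGB", (640, 480), (128, 96, 64))   # placeholder for a real image
payload = send_side(original, factor=2, quality=75)
restored = receive_side(payload, original.size)
```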
Breaking of rod-shaped model material during compression
NASA Astrophysics Data System (ADS)
Lukas, Kulaviak; Vera, Penkavova; Marek, Ruzicka; Miroslav, Puncochar; Petr, Zamostny; Zdenek, Grof; Frantisek, Stepanek; Marek, Schongut; Jaromir, Havlica
2017-06-01
The breakage of a model anisometric dry granular material caused by uniaxial compression was studied. The bed of uniform rod-like pasta particles (8 mm long, aspect ratio 1:8) was compressed (Gamlen Tablet Press) and their size distribution was measured after each run (Dynamic Image Analysing). The compression dynamics was recorded and the effect of several parameters was tested (rate of compression, volume of granular bed, pressure magnitude and mode of application). Besides the experiments, numerical modelling of the compressed breakable material was performed as well, employing the DEM approach (Discrete Element Method). The comparison between the data and the model looks promising.
Compressive buckling of black phosphorene nanotubes: an atomistic study
NASA Astrophysics Data System (ADS)
Nguyen, Van-Trang; Le, Minh-Quy
2018-04-01
Using the molecular dynamics finite element method with a Stillinger-Weber potential, we investigate the uniaxial compression of armchair and zigzag black phosphorene nanotubes. We focus especially on the effects of the tube’s diameter at a fixed length-diameter ratio, the effects of the tube’s length for a pair of armchair and zigzag tubes of equal diameters, and the effects of the tube’s diameter at fixed lengths. The Young’s modulus, critical compressive stress and critical compressive strain are studied and discussed for these three case studies. Compressive buckling was clearly observed in the armchair nanotubes, whereas local bond breaking near the boundary occurred in the zigzag ones under compression.
Compression and neutron and ion beams emission mechanisms within a plasma focus device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousefi, H. R.; Mohanty, S. R.; Nakada, Y.
This paper reports some results of investigations of the neutron emission from a medium-energy Mather-type plasma focus. Multiple compressions were observed; multiple-compression regimes seem to occur at low pressure, while a single compression appeared at higher pressure, which is favorable for neutron production. The multiple-compression mechanism can be attributed to the m=0 type instability. The m=0 type instability is a necessary condition for fusion activity and x-ray production, but is not sufficient by itself. Accompanying the multiple compressions, multiple deuteron and neutron pulses were detected, which implies that different kinds of acceleration mechanisms are at work.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint images yield proper classification.
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictates that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than lossless entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
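A toy version of the hybrid idea, a lossy wavelet layer followed by exact coding of the residual so that the overall scheme stays lossless, might look like the following PyWavelets-based sketch; the embedded wavelet coder and the run-length coder of the paper are replaced by simple stand-ins (coarse coefficient quantization and an integer residual ready for run-length coding).

```python
import numpy as np
import pywt

def hybrid_lossless(image, wavelet="db2", level=3, q_step=16.0):
    """Lossy wavelet layer plus exact residual, reconstructing the image losslessly."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr_q = np.rint(arr / q_step) * q_step                # coarse quantization (lossy layer)
    lossy = pywt.waverec2(pywt.array_to_coeffs(arr_q, slices, output_format="wavedec2"),
                          wavelet)[:image.shape[0], :image.shape[1]]
    lossy = np.rint(lossy).astype(np.int32)
    residual = image.astype(np.int32) - lossy             # small integers, easy to run-length code
    return lossy, residual

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in for a medical image
lossy, residual = hybrid_lossless(img)
assert np.array_equal(lossy + residual, img.astype(np.int32))   # exact reconstruction
```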
Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian
2015-01-01
Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency have motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow; thus, there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing, coupled with effective application of the principles of materials science and particle engineering, has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, partly because of the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation, and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Ramapriyan, H. K.
1989-01-01
A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary test and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
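A bare-bones illustration of the multi-layered imaging model: split a compound page into a binary mask (text), a foreground layer and a background layer, and hand each to the coder best suited to it. The thresholding rule below is deliberately naive and is not part of the MRC specification.

```python
import numpy as np

def mrc_layers(page, text_threshold=64):
    """Split a grayscale page into MRC-style mask, foreground and background layers."""
    mask = page < text_threshold                 # 1 where text/line art is assumed
    foreground = np.where(mask, page, 0)         # would go to a text/palette-oriented coder
    background = np.where(mask, 255, page)       # smooth layer for a JPEG/wavelet coder
    return mask.astype(np.uint8), foreground, background

page = (np.random.rand(512, 512) * 255).astype(np.uint8)   # stand-in for a scanned page
mask, fg, bg = mrc_layers(page)
# Each layer can now be compressed with the algorithm best matched to its content,
# e.g. a binary coder for `mask` and a continuous-tone coder for `bg`.
```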
Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers
Lee, Seong-Cheol; Oh, Joung-Hwan; Cho, Jae-Yeol
2015-01-01
In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress–strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures. PMID:28788011
The effect of compression and attention allocation on speech intelligibility
NASA Astrophysics Data System (ADS)
Choi, Sangsook; Carrell, Thomas
2003-10-01
Research investigating the effects of amplitude compression on speech intelligibility for individuals with sensorineural hearing loss has demonstrated contradictory results [Souza and Turner (1999)]. Because percent-correct measures may not be the best indicator of compression effectiveness, a speech intelligibility and motor coordination task was developed to provide data that may more thoroughly explain the perception of compressed speech signals. In the present study, a pursuit rotor task [Dlhopolsky (2000)] was employed along with a word identification task to measure the amount of attention required to perceive compressed and non-compressed words in noise. Monosyllabic words were mixed with speech-shaped noise at a fixed signal-to-noise ratio and compressed using a wide dynamic range compression scheme. Participants with normal hearing identified each word with or without a simultaneous pursuit-rotor task. Participants also completed the pursuit-rotor task without simultaneous word presentation. It was expected that performance on the additional motor task would reflect the effect of the compression better than simple word-accuracy measures. Results were complex; for example, in some conditions an irrelevant task actually improved performance on a simultaneous listening task. This suggests there might be an optimal level of attention required for recognition of monosyllabic words.
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoder consists of the following blocks: a prediction block, a residual data calculation block, a transformation and quantization block, an organization and modification block, and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for training, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than the conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (correlation coefficient, ), while achieving compression ratio, CR, values as high as ~ 5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity ( ), leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order; e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be performed efficiently in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and by up to 45 percent in some images.
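As an aside on the decorrelation step described above, the following is a minimal numpy sketch (not the paper's actual algorithm): it predicts one color plane from an already-printed plane block by block with a least-squares gain, so that only a low-energy residual remains for the JPEG-style coder. The block size, the gain-only predictor, and the synthetic planes are assumptions chosen for illustration.

```python
import numpy as np

def decorrelate_planes(reference, target, block=16):
    """Predict `target` from `reference` block by block with a least-squares
    gain; the gains plus the (much smaller) residual go to the lossy coder."""
    h, w = reference.shape
    gains = np.zeros((h // block, w // block))
    residual = np.zeros_like(target, dtype=np.float64)
    for bi in range(h // block):
        for bj in range(w // block):
            ys, xs = bi * block, bj * block
            r = reference[ys:ys + block, xs:xs + block].astype(np.float64)
            t = target[ys:ys + block, xs:xs + block].astype(np.float64)
            g = (r * t).sum() / max((r * r).sum(), 1e-9)  # least-squares gain
            gains[bi, bj] = g
            residual[ys:ys + block, xs:xs + block] = t - g * r
    return gains, residual

# Toy example: the cyan plane has been printed (and kept); code the magenta plane.
rng = np.random.default_rng(0)
cyan = rng.integers(0, 256, (64, 64))
magenta = (0.8 * cyan + rng.normal(0, 5, (64, 64))).clip(0, 255)
gains, residual = decorrelate_planes(cyan, magenta)
print(magenta.std(), residual.std())  # the residual carries far less energy
```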
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.
46 CFR 147.60 - Compressed gases.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized for...
5 CFR 532.513 - Flexible and compressed work schedules.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Flexible and compressed work schedules... REGULATIONS PREVAILING RATE SYSTEMS Premium Pay and Differentials § 532.513 Flexible and compressed work schedules. Federal Wage System employees who are authorized to work flexible and compressed work schedules...
A biological compression model and its applications.
Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd
2011-01-01
A biological compression model, the expert model, is presented that is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and the image is reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
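To make the "compress by matrix multiplication, reconstruct on the ground" idea above concrete, here is a toy compressive-sensing sketch. The Gaussian measurement matrix, the sparsity level, and the use of scikit-learn's orthogonal matching pursuit for recovery are all assumptions; the abstract does not specify COxSwAIN's actual measurement design or reconstruction algorithm.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8        # signal length, number of measurements, sparsity

x = np.zeros(n)             # toy k-sparse stand-in for a sparsified image
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # on-board measurement matrix
y = Phi @ x                                 # compressed samples sent to ground

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)                             # ground-side reconstruction
x_hat = omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error
```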
Multiple Compressions in the Middle Energy Plasma Focus Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousefi, H. R.; Ejiri, Y.; Ito, H.
This paper reports results aimed at investigating the neutron emission from a middle energy Mather-type plasma focus. The results indicate that as the pressure increases, the compression time increases, but there is no direct relation between the compression time and the neutron yield. It also appears that a multiple-compression regime occurs at low pressure, whereas a single compression appears at higher pressure, which is more favorable to neutron production.
Influence of temper condition on the nonlinear stress-strain behavior of boron-aluminum
NASA Technical Reports Server (NTRS)
Kennedy, J. M.; Herakovich, E. T.; Tenney, D. R.
1977-01-01
The influence of temper condition on the tensile and compressive stress-strain behavior of six boron-aluminum laminates was investigated. In addition to monotonic tension and compression tests, tension-tension, compression-compression, and tension-compression tests were conducted to study the effects of cyclic loading. Tensile strength results are a function of the laminate configuration; unidirectional laminates were affected considerably more than other laminates, with some strength values increasing and others decreasing.
Shock-wave studies of anomalous compressibility of glassy carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molodets, A. M., E-mail: molodets@icp.ac.ru; Golyshev, A. A.; Savinykh, A. S.
2016-02-15
The physico-mechanical properties of amorphous glassy carbon are investigated under shock compression up to 10 GPa. Experiments are carried out on the continuous recording of the mass velocity of compression pulses propagating in glassy carbon samples with initial densities of 1.502(5) g/cm³ and 1.55(2) g/cm³. It is shown that, in both cases, a compression wave in glassy carbon contains a leading precursor with an amplitude of 0.135(5) GPa. It is established that, in the range of pressures up to 2 GPa, a shock discontinuity in glassy carbon is transformed into a broadened compression wave, and shock waves are formed in the release wave, which generally means anomalous compressibility of the material in both the compression and release waves. It is shown that, at pressures higher than 3 GPa, the anomalous behavior turns into normal behavior, accompanied by the formation of a shock compression wave. In the investigated pressure range, possible structural changes in glassy carbon under shock compression have a reversible character. A physico-mechanical model of glassy carbon is proposed that involves the equation of state and a constitutive relation for Poisson's ratio and allows the numerical simulation of physico-mechanical and thermophysical properties of glassy carbon of different densities in the region of its anomalous compressibility.
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either irregular ECG signals or QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
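A hedged sketch of the 1-D-to-2-D preprocessing idea described above (detect R peaks, cut the trace into beats, equalize beat lengths, and stack the beats as image rows). The peak-detection thresholds, the fixed beat length, and the synthetic ECG are assumptions; the paper's alignment and period-sorting steps are omitted here.

```python
import numpy as np
from scipy.signal import find_peaks

def ecg_to_image(ecg, fs=360, beat_len=256):
    """Cut an ECG trace at detected R peaks and stack length-equalized beats
    into rows of a 2-D array, exposing inter-beat correlation to an image coder."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=np.percentile(ecg, 95))
    rows = []
    for start, stop in zip(peaks[:-1], peaks[1:]):
        beat = ecg[start:stop]
        # Resample every beat to the same length (simple linear interpolation).
        xs = np.linspace(0, len(beat) - 1, beat_len)
        rows.append(np.interp(xs, np.arange(len(beat)), beat))
    return np.vstack(rows)   # shape (n_beats, beat_len); feed to JPEG2000 etc.

# Toy ECG: a spiky quasi-periodic signal with noise, sampled at 360 Hz.
t = np.arange(0, 10, 1 / 360)
ecg = np.sin(2 * np.pi * 1.1 * t) ** 63 + 0.05 * np.random.randn(t.size)
image = ecg_to_image(ecg)
print(image.shape)
```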
Biological sequence compression algorithms.
Matsumoto, T; Sadakane, K; Imai, H
2000-01-01
Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: one is palindromes, or reverse complements, and the other is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences can be exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat and a palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it by its length and distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
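As a toy illustration of the two DNA structures exploited above (approximate repeats and palindromes in the reverse-complement sense), the sketch below finds exactly repeating k-mers with a hash table; the real algorithm's approximate matching via dynamic programming is not reproduced, and the k-mer length and example sequence are assumptions.

```python
def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_repeats(seq, k=8):
    """Return (position, earlier_position, kind) for k-mers that recur exactly,
    either directly or as a reverse complement (palindrome in the DNA sense)."""
    seen = {}
    hits = []
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in seen:
            hits.append((i, seen[kmer], "repeat"))
        rc = reverse_complement(kmer)
        if rc in seen:
            hits.append((i, seen[rc], "palindrome"))
        seen.setdefault(kmer, i)
    return hits  # a coder can replace such k-mers with (distance, length) pairs

print(find_repeats("ACGTACGTTTTAACGCGTTA"))
```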
ERGC: an efficient referential genome compression algorithm
Saha, Subrata; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient in compressing biological data. These generic algorithms do not exploit inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
Texture Studies and Compression Behaviour of Apple Flesh
NASA Astrophysics Data System (ADS)
James, Bryony; Fonseca, Celia
Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.
Moore, Brian C J; Sęk, Aleksander
2016-09-07
Multichannel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore (2008) suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Twenty-two hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2A method. On each trial, a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency at 2000 Hz and by two measures of sensitivity to interaural phase at low frequencies. The results for the difference limens for frequency, but not the measures of sensitivity to interaural phase, supported the suggestion that preference for compression speed is affected by sensitivity to TFS. © The Author(s) 2016.
Video compression via log polar mapping
NASA Astrophysics Data System (ADS)
Weiman, Carl F. R.
1990-09-01
A three stage process for compressing real time color imagery by factors in the range of 1600-to-1 is proposed for remote driving. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM, which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
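A minimal numpy sketch of stage 1 above, log-polar resampling around a gaze point, is shown below; the ring/wedge counts and the nearest-neighbor sampling are assumptions chosen only to illustrate how the sample count (and hence data rate) drops with eccentricity-dependent resolution.

```python
import numpy as np

def log_polar_sample(image, center, n_rings=64, n_wedges=128, r_min=1.0):
    """Resample `image` on a log-polar grid centred on the gaze point, so that
    resolution falls off with eccentricity like human peripheral acuity."""
    h, w = image.shape
    cy, cx = center
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(aa)).round().astype(int), 0, w - 1)
    return image[ys, xs]   # (n_rings, n_wedges) samples instead of h*w pixels

frame = np.random.randint(0, 256, (480, 640))
mapped = log_polar_sample(frame, center=(240, 320))
print(frame.size / mapped.size)   # sampling reduction factor (~37x here)
```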
Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni
2018-05-01
Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees.
Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S
2012-10-01
An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Farruggia, Andrea; Gagie, Travis; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni
2018-01-01
Suffix trees are one of the most versatile data structures in stringology, with many applications in bioinformatics. Their main drawback is their size, which can be tens of times larger than the input sequence. Much effort has been put into reducing the space usage, leading ultimately to compressed suffix trees. These compressed data structures can efficiently simulate the suffix tree, while using space proportional to a compressed representation of the sequence. In this work, we take a new approach to compressed suffix trees for repetitive sequence collections, such as collections of individual genomes. We compress the suffix trees of individual sequences relative to the suffix tree of a reference sequence. These relative data structures provide competitive time/space trade-offs, being almost as small as the smallest compressed suffix trees for repetitive collections, and competitive in time with the largest and fastest compressed suffix trees. PMID:29795706
Data compression techniques applied to high resolution high frame rate video technology
NASA Technical Reports Server (NTRS)
Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.
1989-01-01
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.
Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig
2017-06-01
We aimed to investigate early performance measures in a population-based breast cancer screening program stratified by compression force and pressure at the time of mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. χ2 test, t tests, and test for trend were used to examine differences between early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased with increasing compression force (p for trend <0.05 for all). The recall rate increased, while rate of screen-detected cancer, PPV, sensitivity, and specificity decreased with increasing compression pressure (p for trend <0.05 for all). High compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.
Stoffels-Weindorf, M; Stoffels, I; Jockenhöfer, F; Dissemond, J
2018-04-01
For effective compression therapy in patients with venous leg ulcers, sufficient pressure is essential. In everyday life, it is often the patients themselves who apply the compression bandages. Many of these patients have restricted movement and have rarely been trained adequately. This raises the question of how effective the compression bandages applied autonomously by these patients are. In all, 100 consecutive patients with venous leg ulcers were asked to apply compression bandages to their own leg. We documented both the achieved compression pressure and formal criteria of correct performance. A total of 59 women and 41 men with an average age of 70.3 years were included in the study. Overall, 43 patients were not able to apply a compression bandage because of physical limitations. The measured pressure values in the remaining 57 patients ranged between 6 and 93 mm Hg (mean 28.3 mm Hg). Eleven patients reached the prescribed effective compression pressure. Of these, formal errors were found in 6 patients, so that only 5 patients had correctly applied the compression bandages. Our data show that most patients with venous leg ulcers are not able to apply effective compression therapy with short-stretch bandages themselves. Multilayer systems, adaptive compression bandages, and ulcer stocking systems are possibly easier and more effective therapy options today. Alternatively, short-stretch bandages could be applied by trained persons, but only under control with pressure-measuring probes.
Kinetics of the B1-B2 phase transition in KCl under rapid compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.
2016-01-28
Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics for the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and logarithm of the compression rates. The decrease of Q_eff with increasing compression rate results in the decrease of the nucleation rate, which is qualitatively in agreement with the observed change of the grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
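The abstract does not state the fitted kinetic model explicitly; purely as an illustrative sketch, the classical JMAK transformed-fraction law and the Arrhenius rate it refers to have the form below, where the exponent n and prefactor k_0 belong to the standard formulation rather than to values reported in the paper.

```latex
% JMAK transformed fraction and Arrhenius rate (illustrative standard forms)
x(t) = 1 - \exp\!\left[-(k\,t)^{\,n}\right], \qquad
k = k_0 \exp\!\left(-\frac{Q_{\mathrm{eff}}}{R\,T}\right)
```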
A compressible near-wall turbulence model for boundary layer calculations
NASA Technical Reports Server (NTRS)
So, R. M. C.; Zhang, H. S.; Lai, Y. G.
1992-01-01
A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate with additional terms that are explicitly dependent on the compressibility effects is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilational dissipation rate. Thus formulated, all near-wall incompressible flow models could be expressed in terms of the solenoidal dissipation rate and straightforwardly extended to compressible flows. Therefore, the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior; thus suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.
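For orientation, the dissipation split described above is commonly written as shown below; the closure of the dilatational part in terms of the turbulent Mach number is given only in a generic form, and the constant α and the exact functional dependence are assumptions rather than the paper's calibrated expressions.

```latex
% Dissipation split into solenoidal and dilatational parts, with a generic
% turbulent-Mach-number closure for the dilatational contribution
\varepsilon = \varepsilon_{s} + \varepsilon_{d}, \qquad
\varepsilon_{d} = \alpha\, M_t^{2}\, \varepsilon_{s}, \qquad
M_t = \frac{\sqrt{2k}}{a}
```

where k is the turbulent kinetic energy and a the local speed of sound.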
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
N-Cadherin Maintains the Healthy Biology of Nucleus Pulposus Cells under High-Magnitude Compression.
Wang, Zhenyu; Leng, Jiali; Zhao, Yuguang; Yu, Dehai; Xu, Feng; Song, Qingxu; Qu, Zhigang; Zhuang, Xinming; Liu, Yi
2017-01-01
Mechanical load can regulate disc nucleus pulposus (NP) biology in terms of cell viability, matrix homeostasis and cell phenotype. N-cadherin (N-CDH) is a molecular marker of NP cells. This study investigated the role of N-CDH in maintaining NP cell phenotype, NP matrix synthesis and NP cell viability under high-magnitude compression. Rat NP cells seeded on scaffolds were perfusion-cultured using a self-developed perfusion bioreactor for 5 days. NP cell biology in terms of cell apoptosis, matrix biosynthesis and cell phenotype was studied after the cells were subjected to different compressive magnitudes (low- and high-magnitudes: 2% and 20% compressive deformation, respectively). Non-loaded NP cells were used as controls. Lentivirus-mediated N-CDH overexpression was used to further investigate the role of N-CDH under high-magnitude compression. The 20% deformation compression condition significantly decreased N-CDH expression compared with the 2% deformation compression and control conditions. Meanwhile, 20% deformation compression increased the number of apoptotic NP cells, up-regulated the expression of Bax and cleaved-caspase-3 and down-regulated the expression of Bcl-2, matrix macromolecules (aggrecan and collagen II) and NP cell markers (glypican-3, CAXII and keratin-19) compared with 2% deformation compression. Additionally, N-CDH overexpression attenuated the effects of 20% deformation compression on NP cell biology in relation to the designated parameters. N-CDH helps to restore the cell viability, matrix biosynthesis and cellular phenotype of NP cells under high-magnitude compression. © 2017 The Author(s). Published by S. Karger AG, Basel.
Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob
2011-10-01
Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Donoghue, Aaron; Hsieh, Ting-Chang; Myers, Sage; Mak, Allison; Sutton, Robert; Nadkarni, Vinay
2015-06-01
To describe the adherence to guidelines for CPR in a tertiary pediatric emergency department (ED) where resuscitations are reviewed by videorecording. Resuscitations in a tertiary pediatric ED are videorecorded as part of a quality improvement project. Patients receiving CPR under videorecorded conditions were eligible for inclusion. CPR parameters were quantified by retrospective review. Data were described by 30-s epoch (compression rate, ventilation rate, compression:ventilation ratio), by segment (duration of single providers' compressions) and by overall event (compression fraction). Duration of interruptions in compressions was measured; tasks completed during pauses were tabulated. 33 children received CPR under videorecorded conditions. A total of 650 min of CPR were analyzed. Chest compressions were performed at <100/min in 90/714 (13%) of epochs; 100-120/min in 309/714 (43%); >120/min in 315/714 (44%). Ventilations were 6-12 breaths/min in 201/708 (23%) of epochs and >12/min in 489/708 (70%). During CPR without an artificial airway, compression:ventilation coordination (15:2) was done in 93/234 (40%) of epochs. 178 pauses in CPR occurred; 120 (67%) were <10s in duration. Of 370 segments of compressions by individual providers, 282/370 (76%) were <2 min in duration. Median compression fraction was 91% (range 88-100%). CPR in a tertiary pediatric ED frequently met recommended parameters for compression rate, pause duration, and compression fraction. Hyperventilation and failure of C:V coordination were very common. Future studies should focus on the impact of training methods on CPR performance as documented by videorecording. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...
46 CFR 112.50-7 - Compressed air starting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...
49 CFR 393.68 - Compressed natural gas fuel containers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false Compressed natural gas fuel containers. 393.68... AND ACCESSORIES NECESSARY FOR SAFE OPERATION Fuel Systems § 393.68 Compressed natural gas fuel containers. (a) Applicability. The rules in this section apply to compressed natural gas (CNG) fuel...
METHOD OF FIXING NITROGEN FOR PRODUCING OXIDES OF NITROGEN
Harteck, P.; Dondes, S.
1959-08-01
A method is described for fixing nitrogen from air by compressing the air, irradiating the compressed air in a nuclear reactor, cooling to remove NO2, compressing the cooled gas, further cooling to remove N2O, and recirculating the cooled compressed air to the reactor.
42 CFR 84.87 - Compressed gas filters; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Compressed gas filters; minimum requirements. 84.87...-Contained Breathing Apparatus § 84.87 Compressed gas filters; minimum requirements. All self-contained breathing apparatus using compressed gas shall have a filter downstream of the gas source to effectively...
42 CFR 84.87 - Compressed gas filters; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Compressed gas filters; minimum requirements. 84.87...-Contained Breathing Apparatus § 84.87 Compressed gas filters; minimum requirements. All self-contained breathing apparatus using compressed gas shall have a filter downstream of the gas source to effectively...
42 CFR 84.87 - Compressed gas filters; minimum requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Compressed gas filters; minimum requirements. 84.87...-Contained Breathing Apparatus § 84.87 Compressed gas filters; minimum requirements. All self-contained breathing apparatus using compressed gas shall have a filter downstream of the gas source to effectively...
42 CFR 84.87 - Compressed gas filters; minimum requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Compressed gas filters; minimum requirements. 84.87...-Contained Breathing Apparatus § 84.87 Compressed gas filters; minimum requirements. All self-contained breathing apparatus using compressed gas shall have a filter downstream of the gas source to effectively...
42 CFR 84.87 - Compressed gas filters; minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Compressed gas filters; minimum requirements. 84.87...-Contained Breathing Apparatus § 84.87 Compressed gas filters; minimum requirements. All self-contained breathing apparatus using compressed gas shall have a filter downstream of the gas source to effectively...
46 CFR 197.338 - Compressed gas cylinders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... STANDARDS GENERAL PROVISIONS Commercial Diving Operations Equipment § 197.338 Compressed gas cylinders. Each compressed gas cylinder must— (a) Be stored in a ventilated area; (b) Be protected from excessive heat; (c... 46 Shipping 7 2010-10-01 2010-10-01 false Compressed gas cylinders. 197.338 Section 197.338...
49 CFR 393.68 - Compressed natural gas fuel containers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Compressed natural gas fuel containers. 393.68... AND ACCESSORIES NECESSARY FOR SAFE OPERATION Fuel Systems § 393.68 Compressed natural gas fuel containers. (a) Applicability. The rules in this section apply to compressed natural gas (CNG) fuel...
Multichannel Compression, Temporal Cues, and Audibility.
ERIC Educational Resources Information Center
Souza, Pamela E.; Turner, Christopher W.
1998-01-01
The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…
Magnetic compression laser driving circuit
Ball, D.G.; Birx, D.; Cook, E.G.
1993-01-05
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
Mid-IR soliton compression in silicon optical fibers and fiber tapers.
Peacock, Anna C
2012-03-01
Numerical simulations are used to investigate soliton compression in silicon core optical fibers at 2.3 μm in the mid-infrared waveguide regime. Compression in both standard silicon fibers and fiber tapers is compared to establish the relative compression ratios for a range of input pulse conditions. The results show that tapered fibers can be used to obtain higher levels of compression for moderate soliton orders and thus lower input powers. © 2012 Optical Society of America
The Polygon-Ellipse Method of Data Compression of Weather Maps
1994-03-28
Report No. DOT/FAA/RD-9416, Project Report AD-A278 958, ATC-213. The Polygon-Ellipse Method of Data Compression of Weather Maps, J.L. Gertz, 28 ... a means must be found to compress this image. The Polygon-Ellipse (PE) encoding algorithm developed in this report represents weather regions ... severely compress the image. For example, Mode S would require approximately a 10-fold compression. In addition, the algorithms used to perform the
Sequential neural text compression.
Schmidhuber, J; Heil, S
1996-01-01
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
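To show how a predictor couples to a statistical coder in the way described above, the sketch below uses a trivially simple adaptive bigram model in place of the neural net (an assumption made purely for brevity) and reports the ideal size an arithmetic coder driven by that predictor could reach, i.e. about -log2 p bits per character.

```python
import math
from collections import Counter, defaultdict

def ideal_code_length(text):
    """Couple a (here: trivial adaptive bigram) predictor with an entropy coder:
    an arithmetic coder can emit about -log2 p(symbol | context) bits per symbol."""
    counts = defaultdict(Counter)
    bits, prev = 0.0, ""
    for ch in text:
        ctx = counts[prev]
        # Laplace-smoothed predictive probability of the next character.
        p = (ctx[ch] + 1) / (sum(ctx.values()) + 256)
        bits += -math.log2(p)
        ctx[ch] += 1          # update the model after coding, as the decoder would
        prev = ch
    return bits / 8           # approximate compressed size in bytes

article = "the cat sat on the mat. " * 200
print(len(article), round(ideal_code_length(article)))
```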
Gain compression and its dependence on output power in quantum dot lasers
NASA Astrophysics Data System (ADS)
Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.
2013-06-01
The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique in a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the frame of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, which is in a good agreement with the experimental observations.
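For context only: gain compression in laser diodes is often summarized by a single phenomenological coefficient, as in the textbook form below. This is not the paper's modified well-barrier hole burning model; ε and S are generic symbols for the compression coefficient and the photon density.

```latex
% Phenomenological gain compression: modal gain saturates with photon density S
g(N, S) \;=\; \frac{g_{0}(N)}{1 + \varepsilon\, S}
```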
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.
Robak, A N
2008-11-01
A new method for the formation of a compression esophagointestinal anastomosis is proposed. The compression force in the new device for creation of compression circular anastomoses is created by means of a titanium nickelide spring with a "shape memory" effect. Experimental study showed good prospects of the new device and the advantages of the anastomosis compression suture formed by means of this device in comparison with manual ligature suturing.
Malla, Ratnakar
2008-11-06
HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of GZIP compression technology that is built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50%, and typical transaction time is also reduced from 20 to 8 minutes, thus providing a better user experience.
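A minimal Python illustration of the size reduction GZIP gives on text-heavy payloads such as the HTML or XML served by an informatics site; the sample payload and the resulting ratio are assumptions for illustration, not figures from the survey above.

```python
import gzip

# A repetitive text payload, standing in for an HTML/XML response body.
payload = ("<record><patient>example</patient><code>12345</code></record>\n" * 2000).encode()

compressed = gzip.compress(payload)   # what a server would send with
                                      # "Content-Encoding: gzip"
print(len(payload), len(compressed), f"{len(compressed) / len(payload):.1%}")
```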
Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.
Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen
2015-10-21
Based on polyaniline-single-walled carbon nanotube-sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
Magnetic compression laser driving circuit
Ball, Don G.; Birx, Dan; Cook, Edward G.
1993-01-01
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
Data Compression With Application to Geo-Location
2010-08-01
wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of energy and transmission latency. Traditional MSE- and perceptual-based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the
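Since the entry above centers on TDOA estimation from data shared across sensors, here is a toy numpy sketch of the estimate the compressed data must preserve: cross-correlate two sensors' records and read the delay off the correlation peak. The signal model, noise level, and sampling rate are assumptions, and no compression step is applied in the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                   # assumed sampling rate (Hz)
true_delay = 25                             # delay between sensors, in samples

src = rng.normal(size=4096)                 # wideband source signal
s1 = src + 0.1 * rng.normal(size=src.size)
s2 = np.roll(src, true_delay) + 0.1 * rng.normal(size=src.size)

# Cross-correlate and locate the peak; its lag is the TDOA estimate.
xc = np.correlate(s2, s1, mode="full")
lag = np.argmax(xc) - (src.size - 1)
print(lag, lag / fs)                        # ~25 samples, i.e. ~0.025 s
```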
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
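A hedged Pillow/numpy sketch of the pipeline described above (decimate, apply a predefined lossy codec such as JPEG, transmit, decompress, interpolate back to full size, then sharpen edges). The decimation factor, JPEG quality, and unsharp-mask parameters are illustrative assumptions, not values from the patent.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def transmit_reduced(image, factor=2, quality=75):
    """Decimate and JPEG-encode (sender side); then decode, interpolate back to
    the original size, and sharpen contours (receiver side)."""
    w, h = image.size
    small = image.resize((w // factor, h // factor), Image.LANCZOS)  # decimation

    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)   # predefined lossy codec
    print("bytes over the channel:", buf.tell())

    received = Image.open(io.BytesIO(buf.getvalue()))
    restored = received.resize((w, h), Image.BICUBIC)  # interpolate back up
    # Edge (contour) sharpening to recover perceptual crispness.
    return restored.filter(ImageFilter.UnsharpMask(radius=2, percent=150))

original = Image.fromarray(np.random.randint(0, 256, (256, 256), dtype=np.uint8), "L")
reconstructed = transmit_reduced(original)
print(reconstructed.size)
```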
Near-wall modeling of compressible turbulent flow
NASA Technical Reports Server (NTRS)
So, Ronald M. C.
1991-01-01
A near-wall two-equation model for compressible flows is proposed. The model is formulated by relaxing the assumption of dynamic field similarity between compressible and incompressible flows. A postulate is made to justify the extension of incompressible models to account for compressibility effects. This requires formulating the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes in compressibility indicators, and a dilatational part, which is directly affected by these changes. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate.
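The splitting described above is commonly written as follows; the algebraic closure shown for the dilatational part is a Sarkar-type example with coefficient alpha_1, given only to make the idea concrete, and is not necessarily the specific model proposed in this report.

```latex
% Split of the compressible dissipation into solenoidal and dilatational parts,
% with a Sarkar-type algebraic closure for the dilatational contribution (example only).
\varepsilon \;=\; \varepsilon_s + \varepsilon_d, \qquad
\varepsilon_s = \nu\,\overline{\omega_i'\omega_i'}, \qquad
\varepsilon_d = \tfrac{4}{3}\,\nu\,\overline{\left(u'_{k,k}\right)^2},
\qquad
\varepsilon_d \;\approx\; \alpha_1\, M_t^{2}\,\varepsilon_s ,
\qquad M_t = \frac{\sqrt{2k}}{\bar a}.
```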
Hyperspectral data compression using a Wiener filter predictor
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.
2013-09-01
The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
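A minimal sketch of the band-prediction idea: a new band is predicted as a linear combination of already-encoded bands (ordinary least squares standing in for the Wiener predictor) and the integer residual is packed losslessly, with zlib standing in for the report's state-of-the-art lossless coders. All array sizes and data below are synthetic.

```python
# Band-wise prediction sketch for lossless hyperspectral compression: predict a new
# band from already-encoded bands and losslessly pack the integer residual.
import zlib
import numpy as np

def encode_band(new_band: np.ndarray, prev_bands: np.ndarray):
    """new_band: (H, W) integer array; prev_bands: (K, H, W) already-encoded bands."""
    A = prev_bands.reshape(prev_bands.shape[0], -1).T.astype(np.float64)  # (H*W, K)
    y = new_band.ravel().astype(np.float64)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)         # least-squares predictor weights
    prediction = np.rint(A @ w).astype(np.int32)
    residual = new_band.ravel().astype(np.int32) - prediction
    packed = zlib.compress(residual.tobytes(), level=9)
    return w, packed                                  # the decoder repeats the prediction

rng = np.random.default_rng(1)
base = rng.integers(0, 1024, size=(4, 64, 64))
target = (0.5 * base[0] + 0.3 * base[1] + rng.integers(0, 8, (64, 64))).astype(np.int32)
w, packed = encode_band(target, base)
print("compressed residual:", len(packed), "bytes vs raw", target.nbytes)
```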
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract the regions of interest (ROIs) and apply different compression ratios to them. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and the quantitative and qualitative analysis in the paper show strong performance compared with traditional color image compression approaches.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
Tensile and Compressive Constitutive Response of 316 Stainless Steel at Elevated Temperatures
NASA Technical Reports Server (NTRS)
Manson, S. S.; Muralidharan, U.; Halford, G. R.
1983-01-01
Creep rate in compression is lower by factors of 2 to 10 than in tension if the microstructure of the two specimens is the same and the specimens are tested at equal temperatures and equal but opposite stresses. Such behavior is characteristic of monotonic creep and of conditions involving cyclic creep. In the latter case, the creep rate in both tension and compression progressively increases from cycle to cycle, rendering questionable the possibility of expressing a time-stabilized constitutive relationship. The difference in creep rates in tension and compression is considerably reduced if the tension specimen is first subjected to cycles of tensile creep (reversed by compressive plasticity), while the compression specimen is first subjected to cycles of compressive creep (reversed by tensile plasticity). In both cases, the test temperature is the same and the stresses are equal and opposite. Such reduction is a reflection of differences in the microstructure of the specimens resulting from different prior mechanical histories.
Microbiological contamination of compressed air used in dentistry: an investigation.
Conte, M; Lynch, R M; Robson, M G
2001-11-01
The purpose of this preliminary investigation was twofold: 1) to examine the possibility of cross-contamination between a dental-evacuation system and the compressed air used in dental operatories and 2) to capture and identify the most common microflora in the compressed-air supply. The investigation used swab, water, and air sampling that was designed to track microorganisms from the evacuation system, through the air of the mechanical room, into the compressed-air system, and back to the patient. Samples taken in the vacuum system, the air space in the mechanical room, and the compressed-air storage tank had significantly higher total concentrations of bacteria than the outside air sampled. Samples of the compressed air returning to the operatory were found to match the outside air sample in total bacteria. It was concluded that the air dryer may have played a significant role in the elimination of microorganisms from the dental compressed-air supply.
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
Sharifahmadian, Ershad
2006-01-01
The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the original SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. The author applied it to the compression of multichannel ECG data and also presents a specific procedure based on the modified algorithm for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, to compress a single signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
Tseng, Yun-Hua; Lu, Chih-Wen
2017-01-01
Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results presented notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company’s (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and gate counts of 2.69 K. PMID:28991216
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the region of non-interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
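A minimal sketch of the embedding step, under stated assumptions: since 256 hash bits do not fit one-per-pixel in the LSBs of a single 8x8 block, the sketch uses a 16x16 region of non-interest instead, and it clears the carrier LSBs before hashing so the extracted hash can be recomputed; it does not attempt the reversibility or JPEG robustness of the actual scheme.

```python
# Sketch: embed the SHA-256 hash of an image into the LSBs of a RONI region.
# A 16x16 carrier region (256 pixels, one hash bit per LSB) is an assumption here.
import hashlib
import numpy as np

def embed_hash(image: np.ndarray, roni_origin=(0, 0)) -> np.ndarray:
    """image: 2-D uint8 array at least 16x16 from roni_origin; returns a marked copy."""
    marked = image.copy()
    r, c = roni_origin
    marked[r:r + 16, c:c + 16] &= 0xFE                 # clear carrier LSBs first so the
    digest = hashlib.sha256(marked.tobytes()).digest() # hash is reproducible on extraction
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))   # 32 bytes -> 256 bits
    block = marked[r:r + 16, c:c + 16].ravel()         # 256 carrier pixels
    block |= bits                                      # write one hash bit into each LSB
    marked[r:r + 16, c:c + 16] = block.reshape(16, 16)
    return marked
```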
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
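For reference, the score compression and the Gaussian worked example mentioned above can be written as follows; this rendering uses standard notation (fiducial parameters theta_*, mean mu(theta), covariance C(theta)) and is reconstructed from well-known results rather than copied from the paper.

```latex
% Compression of N data d to n summaries via the score, evaluated at fiducial
% parameters \theta_*; Gaussian case with parameter-dependent mean and covariance.
t \;=\; \nabla_\theta \ln \mathcal{L}(d \mid \theta)\Big|_{\theta_*},
\qquad
t_\alpha \;=\;
\frac{\partial \mu^{T}}{\partial \theta_\alpha}\, C^{-1} (d-\mu)
\;+\; \tfrac{1}{2}\,(d-\mu)^{T} C^{-1}
\frac{\partial C}{\partial \theta_\alpha}\, C^{-1} (d-\mu)
\;-\; \tfrac{1}{2}\,\mathrm{tr}\!\left(C^{-1}\frac{\partial C}{\partial \theta_\alpha}\right)
\Bigg|_{\theta_*}.
```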
MHD simulation of plasma compression experiments
NASA Astrophysics Data System (ADS)
Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter
2017-10-01
General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.
Dissipative processes under the shock compression of glass
NASA Astrophysics Data System (ADS)
Savinykh, A. S.; Kanel, G. I.; Cherepanov, I. A.; Razorenov, S. V.
2016-03-01
New experimental data on the behavior of the K8 and TF1 glasses under shock-wave loading conditions are obtained. It is found that the propagation of shock waves is close to the self-similar one in the maximum compression stress range 4-12 GPa. Deviations from a general deformation diagram, which are related to viscous dissipation, take place when the final state of compression is approached. The parameter region in which failure waves form in glass is found not to be limited to the elastic compression stress range, as was thought earlier. The failure front velocity increases with the shock compression stress. Outside the region covered by a failure wave, the glasses demonstrate a high tensile dynamic strength (6-7 GPa) in the case of elastic compression, and this strength is still very high after transition through the elastic limit in a compression wave.
Influence of compressibility on the Lagrangian statistics of vorticity-strain-rate interactions.
Danish, Mohammad; Sinha, Sawan Suman; Srinivasan, Balaji
2016-07-01
The objective of this study is to investigate the influence of compressibility on the Lagrangian statistics of vorticity and strain-rate interactions. The Lagrangian statistics are extracted from "almost" time-continuous data sets of direct numerical simulations of compressible decaying isotropic turbulence by employing a cubic spline-based Lagrangian particle tracker. We study the influence of compressibility on the Lagrangian statistics of alignment in terms of the compressibility parameters: turbulent Mach number, normalized dilatation rate, and flow topology. In comparison to incompressible turbulence, we observe that the presence of compressibility in a flow field weakens the alignment tendency of vorticity toward the largest strain-rate eigenvector. Based on the Lagrangian statistics of alignment conditioned on dilatation and topology, we find that the weakened alignment tendency observed in compressible turbulence is due to a special group of fluid particles that have an initially negligible dilatation rate and are associated with stable-focus-stretching topology.
Method for testing the strength and structural integrity of nuclear fuel particles
Lessing, P.A.
1995-10-17
An accurate method for testing the strength of nuclear fuel particles is disclosed. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle. 13 figs.
Method for testing the strength and structural integrity of nuclear fuel particles
Lessing, Paul A.
1995-01-01
An accurate method for testing the strength of nuclear fuel particles. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle.
Modeling turbulent energy behavior and sudden viscous dissipation in compressing plasma turbulence
Davidovits, Seth; Fisch, Nathaniel J.
2017-12-21
Here, we present a simple model for the turbulent kinetic energy behavior of subsonic plasma turbulence undergoing isotropic three-dimensional compression, which may exist in various inertial confinement fusion experiments or astrophysical settings. The plasma viscosity depends on both the temperature and the ionization state, for which many possible scalings with compression are possible. For example, in an adiabatic compression the temperature scales as 1/L², with L the linear compression ratio, but if thermal energy loss mechanisms are accounted for, the temperature scaling may be weaker. As such, the viscosity has a wide range of net dependencies on the compression. The model presented here, with no parameter changes, agrees well with numerical simulations for a range of these dependencies. This model permits the prediction of the partition of injected energy between thermal and turbulent energy in a compressing plasma.
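For orientation, the adiabatic scalings referred to above can be written out explicitly; the viscosity line assumes an unmagnetized, Braginskii-like T^(5/2) dependence at fixed ionization, which is only one of the possible cases the abstract mentions.

```latex
% Isotropic 3-D compression with linear ratio L (volume V \propto L^{3}),
% ideal monatomic plasma; viscosity line is one assumed case, not the general one.
n \propto L^{-3}, \qquad
T \propto V^{\,1-\gamma} \propto L^{-3(\gamma-1)} = L^{-2} \quad (\gamma = 5/3),
\qquad
\mu \propto T^{5/2} \propto L^{-5}
\quad \text{(unmagnetized, Braginskii-like viscosity at fixed ionization).}
```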
Temporal compression in episodic memory for real-life events.
Jeunehomme, Olivier; Folville, Adrien; Stawarczyk, David; Van der Linden, Martial; D'Argembeau, Arnaud
2018-07-01
Remembering an event typically takes less time than experiencing it, suggesting that episodic memory represents past experience in a temporally compressed way. Little is known, however, about how the continuous flow of real-life events is summarised in memory. Here we investigated the nature and determinants of temporal compression by directly comparing memory contents with the objective timing of events as measured by a wearable camera. We found that episodic memories consist of a succession of moments of prior experience that represent events with varying compression rates, such that the density of retrieved information is modulated by goal processing and perceptual changes. Furthermore, the results showed that temporal compression rates remain relatively stable over one week and increase after a one-month delay, particularly for goal-related events. These data shed new light on temporal compression in episodic memory and suggest that compression rates are adaptively modulated to maintain current goal-relevant information.
Modeling turbulent energy behavior and sudden viscous dissipation in compressing plasma turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidovits, Seth; Fisch, Nathaniel J.
Here, we present a simple model for the turbulent kinetic energy behavior of subsonic plasma turbulence undergoing isotropic three-dimensional compression, which may exist in various inertial confinement fusion experiments or astrophysical settings. The plasma viscosity depends on both the temperature and the ionization state, for which many possible scalings with compression are possible. For example, in an adiabatic compression the temperature scales as 1/L², with L the linear compression ratio, but if thermal energy loss mechanisms are accounted for, the temperature scaling may be weaker. As such, the viscosity has a wide range of net dependencies on the compression. The model presented here, with no parameter changes, agrees well with numerical simulations for a range of these dependencies. This model permits the prediction of the partition of injected energy between thermal and turbulent energy in a compressing plasma.
Compressive stress system for a gas turbine engine
Hogberg, Nicholas Alvin
2015-03-24
The present application provides a compressive stress system for a gas turbine engine. The compressive stress system may include a first bucket attached to a rotor, a second bucket attached to the rotor, the first and the second buckets defining a shank pocket therebetween, and a compressive stress spring positioned within the shank pocket.
41 CFR 50-204.8 - Use of compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Use of compressed air. 50-204.8 Section 50-204.8 Public Contracts and Property Management Other Provisions Relating to... CONTRACTS General Safety and Health Standards § 50-204.8 Use of compressed air. Compressed air shall not be...
41 CFR 50-204.8 - Use of compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Use of compressed air. 50-204.8 Section 50-204.8 Public Contracts and Property Management Other Provisions Relating to... CONTRACTS General Safety and Health Standards § 50-204.8 Use of compressed air. Compressed air shall not be...
49 CFR 571.303 - Standard No. 303; Fuel system integrity of compressed natural gas vehicles.
Code of Federal Regulations, 2010 CFR
2010-10-01
... compressed natural gas vehicles. 571.303 Section 571.303 Transportation Other Regulations Relating to... system integrity of compressed natural gas vehicles. S1. Scope. This standard specifies requirements for the integrity of motor vehicle fuel systems using compressed natural gas (CNG), including the CNG fuel...
An investigation of the compressive strength of PRD-49-3/Epoxy composites
NASA Technical Reports Server (NTRS)
Kulkarni, S. V.; Rice, J. S.; Rosen, B. W.
1973-01-01
The development of unidirectional fiber composite materials is discussed. The mechanical and physical properties of the materials are described. Emphasis is placed on analyzing the compressive behavior of composite materials and developing methods for increasing compressive strength. The test program for evaluating the various procedures for improving compressive strength is reported.
Evaluation of a Text Compression Algorithm Against Computer-Aided Instruction (CAI) Material.
ERIC Educational Resources Information Center
Knight, Joseph M., Jr.
This report describes the initial evaluation of a text compression algorithm against computer assisted instruction (CAI) material. A review of some concepts related to statistical text compression is followed by a detailed description of a practical text compression algorithm. A simulation of the algorithm was programed and used to obtain…
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
Energy-efficient sensing in wireless sensor networks using compressed sensing.
Razzaque, Mohammad Abdur; Dobson, Simon
2014-02-12
Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
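To make the compressed-sensing idea referenced above concrete, here is a minimal sketch in which a sparse sensor signal is recovered from far fewer random measurements than samples; the small orthogonal matching pursuit routine and all dimensions are illustrative and are not taken from this paper.

```python
# Compressed-sensing sketch: recover a k-sparse length-n signal from m < n random
# projections with a small orthogonal matching pursuit (OMP) routine.
import numpy as np

def omp(Phi, y, k):
    """Greedy recovery of a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))  # most correlated atom
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)            # refit on the support
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse "sensor" signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random measurement matrix
y = Phi @ x                                                   # compressed samples sent over the radio
print("reconstruction error:", np.linalg.norm(omp(Phi, y, k) - x))
```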
Evaluation on Compressive Characteristics of Medical Stents Applied by Mesh Structures
NASA Astrophysics Data System (ADS)
Hirayama, Kazuki; He, Jianmei
2017-11-01
There are concerns about strength reduction and fatigue fracture due to stress concentration in currently used medical stents. To address these problems, meshed stents based on mesh structures are of interest for achieving long life and high strength in medical stents. The purpose of this study is to design basic mesh shapes and obtain three-dimensional (3D) meshed stent models for mechanical property evaluation. The influence of the introduced design variables on the compressive characteristics of meshed stent models is evaluated through finite element analysis using the ANSYS Workbench code. The analytical results show that the compressive stiffness changes periodically with compression direction, so the mean value over directions is used to characterize the compressive stiffness of meshed stents. Secondly, the compressive flexibility of meshed stents can be improved by increasing the angle in proportion to the arm length of the basic mesh shape. Increasing the number of basic mesh shapes arranged in the stent's circumferential direction tends to increase the compressive rigidity of the meshed stent. Finally, reducing the mesh line width is found to be effective in improving the compressive flexibility of meshed stents.
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei
2017-12-01
Superelastic graphene aerogel with ultra-high compressibility shows promising potential for compression-tolerant supercapacitor electrode. However, its specific capacitance is too low to meet the practical application. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. Graphene/PANI aerogel with optimized PANI mass content of 63 wt% shows the improved specific capacitance of 713 F g-1 in the three-electrode system. And the graphene/PANI aerogel presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. The all-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of graphene/PANI electrodes. The gravimetric capacitance of graphene/PANI electrodes reaches 424 F g-1 and retains 96% even at 90% compressive strain. And a volumetric capacitance of 65.5 F cm-3 is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting the potential to meet the practical application.
Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei
2017-12-19
Superelastic graphene aerogel with ultra-high compressibility shows promising potential for compression-tolerant supercapacitor electrode. However, its specific capacitance is too low to meet the practical application. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. Graphene/PANI aerogel with optimized PANI mass content of 63 wt% shows the improved specific capacitance of 713 F g-1 in the three-electrode system. And the graphene/PANI aerogel presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. The all-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of graphene/PANI electrodes. The gravimetric capacitance of graphene/PANI electrodes reaches 424 F g-1 and retains 96% even at 90% compressive strain. And a volumetric capacitance of 65.5 F cm-3 is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting the potential to meet the practical application.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The real-time EEG analysis for event detection automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
Internal combustion engine for natural gas compressor operation
Hagen, Christopher L.; Babbitt, Guy; Turner, Christopher; Echter, Nick; Weyer-Geigel, Kristina
2016-04-19
This application concerns systems and methods for compressing natural gas with an internal combustion engine. In a representative embodiment, a system for compressing a gas comprises a reciprocating internal combustion engine including at least one piston-cylinder assembly comprising a piston configured to travel in a cylinder and to compress gas in the cylinder in multiple compression stages. The system can further comprise a first pressure tank in fluid communication with the piston-cylinder assembly to receive compressed gas from the piston-cylinder assembly until the first pressure tank reaches a predetermined pressure, and a second pressure tank in fluid communication with the piston-cylinder assembly and the first pressure tank. The second pressure tank can be configured to receive compressed gas from the piston-cylinder assembly until the second pressure tank reaches a predetermined pressure. When the first and second pressure tanks have reached the predetermined pressures, the first pressure tank can be configured to supply gas to the piston-cylinder assembly, and the piston can be configured to compress the gas supplied by the first pressure tank such that the compressed gas flows into the second pressure tank.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Manohar, Mareboyana
1994-01-01
Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
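A minimal sketch of the TLLC idea described above: drop a chosen number of least significant bits from each pixel and compress the remainder with a lossless coder. zlib stands in for the lossless technique, and the synthetic band data mean the absolute sizes are only illustrative.

```python
# "Truncation followed by lossless compression" (TLLC) sketch: drop LSBs, then
# compress the remaining bits losslessly (zlib as a stand-in coder).
import zlib
import numpy as np

def tllc(band: np.ndarray, dropped_bits: int) -> bytes:
    truncated = (band >> dropped_bits).astype(np.uint16)   # reduced radiometric resolution
    return zlib.compress(truncated.tobytes(), level=9)

rng = np.random.default_rng(0)
band = rng.integers(0, 4096, size=(512, 512), dtype=np.uint16)   # synthetic 12-bit pixels
for b in (0, 2, 4):
    print(f"drop {b} LSBs -> {len(tllc(band, b))} bytes")
```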
Effect of shock waves on the statistics and scaling in compressible isotropic turbulence
NASA Astrophysics Data System (ADS)
Wang, Jianchun; Wan, Minping; Chen, Song; Xie, Chenyue; Chen, Shiyi
2018-04-01
The statistics and scaling of compressible isotropic turbulence in the presence of large-scale shock waves are investigated by using numerical simulations at turbulent Mach number Mt ranging from 0.30 to 0.65. The spectra of the compressible velocity component, density, pressure, and temperature exhibit a k^-2 scaling at different turbulent Mach numbers. The scaling exponents for structure functions of the compressible velocity component and thermodynamic variables are close to 1 at high orders n ≥ 3. The probability density functions of increments of the compressible velocity component and thermodynamic variables exhibit a power-law region with the exponent -2. Models for the conditional average of increments of the compressible velocity component and thermodynamic variables are developed based on the ideal shock relations and are verified by numerical simulations. The overall statistics of the compressible velocity component and thermodynamic variables are similar to one another at different turbulent Mach numbers. It is shown that the effect of shock waves on the compressible velocity spectrum and kinetic energy transfer is different from that of acoustic waves.
The Significance of Education for Mortality Compression in the United States*
Brown, Dustin C.; Hayward, Mark D.; Montez, Jennifer Karas; Hummer, Robert A.; Chiu, Chi-Tsun; Hidajat, Mira M.
2012-01-01
Recent studies of old-age mortality trends assess whether longevity improvements over time are linked to increasing compression of mortality at advanced ages. The historical backdrop of these studies is the long-term improvements in a population's socioeconomic resources that fueled longevity gains. We extend this line of inquiry by examining whether socioeconomic differences in longevity within a population are accompanied by old-age mortality compression. Specifically, we document educational differences in longevity and mortality compression for older men and women in the United States. Drawing on the fundamental cause of disease framework, we hypothesize that both longevity and compression increase with higher levels of education and that women with the highest levels of education will exhibit the greatest degree of longevity and compression. Results based on the Health and Retirement Study and the National Health Interview Survey Linked Mortality File confirm a strong educational gradient in both longevity and mortality compression. We also find that mortality is more compressed within educational groups among women than men. The results suggest that educational attainment in the United States maximizes life chances by delaying the biological aging process. PMID:22556045
Analysis of direct-drive capsule compression experiments on the Iskra-5 laser facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gus'kov, S. Yu.; Demchenko, N. N.; Zhidkov, N. V.
2010-09-15
We have analyzed and numerically simulated our experiments on the compression of DT-gas-filled glass capsules under irradiation by a small number of beams on the Iskra-5 facility (12 beams) at the second harmonic of an iodine laser (λ = 0.66 μm) for a laser pulse energy of 2 kJ and duration of 0.5 ns in the case of asymmetric irradiation and compression. Our simulations include the construction of a target illumination map and a histogram of the target surface illumination distribution; 1D capsule compression simulations based on the DIANA code corresponding to various target surface regions; and 2D compression simulations based on the NUTCY code corresponding to the illumination conditions. We have succeeded in reproducing the shape of the compressed region at the time of maximum compression and the reduction in neutron yield (compared to the 1D simulations) to the experimentally observed values. For the Iskra-5 conditions, we have considered targets that can provide a more symmetric compression and a higher neutron yield.
Loaded delay lines for future RF pulse compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.M.; Wilson, P.B.; Kroll, N.M.
1995-05-01
The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse-width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE0 modes, one with a high group velocity and one with a group velocity of the order of 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time-domain pulse degradation and to Ohmic losses.
NASA Astrophysics Data System (ADS)
Xu, Feng; Rao, Qiuhua; Ma, Wenbo
2018-03-01
The sinkage of a moving tracked mining vehicle is greatly affected by the combined compression-shear rheological properties of soft deep-sea sediments. For test purposes, a sediment simulant is prepared based on soft deep-sea sediment from the C-C poly-metallic nodule mining area in the Pacific Ocean. Compressive creep tests and shear creep tests are combined to obtain compressive and shear rheological parameters, which are used to establish a combined compression-shear rheological constitutive model and a compression-sinkage rheological constitutive model. The combined compression-shear rheological sinkage of the tracked mining vehicle at different speeds is calculated using the RecurDyn software with a self-programmed subroutine that implements the combined compression-shear rheological constitutive model. The model results are compared with the shear rheological sinkage and the ordinary sinkage (without consideration of rheological properties). These results show that the combined compression-shear rheological constitutive model must be taken into account when calculating the sinkage of a tracked mining vehicle. The combined compression-shear rheological sinkage decreases with vehicle speed and is the largest among the three types of sinkage. The developed subroutine in the RecurDyn software can be used to study the performance and structural optimization of moving tracked mining vehicles.
Dual pathology proximal median nerve compression of the forearm.
Murphy, Siun M; Browne, Katherine; Tuite, David J; O'Shaughnessy, Michael
2013-12-01
We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared with the prevalence of distal compression neuropathies (e.g. carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1,2) and osteochondromas of the proximal ulna (Ref. 3) are rare in isolation but well documented. Unlike a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Clinical Effects of Thai Herbal Compress: A Systematic Review and Meta-Analysis
Dhippayom, Teerapon; Kongkaew, Chuenjid; Chaiyakunapruk, Nathorn; Dilokthornsakul, Piyameth; Sruamsiri, Rosarin; Saokaew, Surasak; Chuthaputti, Anchalee
2015-01-01
Objective. To determine the clinical effects of Thai herbal compress. Methods. International and Thai databases were searched from inception through September 2014. Comparative clinical studies investigating herbal compress for any indications were included. Outcomes of interest included level of pain, difficulties in performing activities, and time from delivery to milk secretion. Mean changes of the outcomes from baseline were compared between herbal compress and comparators by calculating mean difference. Results. A total of 13 studies which involved 778 patients were selected from 369 articles identified. The overall effects of Thai herbal compress on reducing osteoarthritis (OA) and muscle pain were not different from those of nonsteroidal anti-inflammatory drugs, knee exercise, and hot compress. However, the reduction of OA pain in the herbal compress group tended to be higher than that of any comparators (weighted mean difference 0.419; 95% CI −0.004, 0.842) with moderate heterogeneity (I² = 58.3%, P = 0.048). When compared with usual care, herbal compress provided significantly less time from delivery to milk secretion in postpartum mothers (mean difference −394.425 minutes; 95% CI −620.084, −168.766). Conclusion. Thai herbal compress may be considered as an alternative for osteoarthritis and muscle pain and could also be used as a treatment of choice to induce lactation. PMID:25861373
NASA Astrophysics Data System (ADS)
Wang, Huamiao; Wu, Peidong; Wang, Jian
2015-07-01
Magnesium alloy AZ31B deforms plastically via twinning and slip. Owing to the unidirectional nature of twinning, the activity of twinning/detwinning is directly related to the loading history and the material texture. Using the elastic viscoplastic self-consistent model implemented with the twinning and detwinning model (EVPSC-TDT), we revisited experimental data for AZ31B sheets under four different strain paths: (1) tension-compression-tension along the rolling direction, (2) tension-compression-tension along the transverse direction, (3) compression-tension-compression along the rolling direction, and (4) compression-tension-compression along the transverse direction, and identified the dominant deformation mechanisms with respect to the strain path. We captured the plastic deformation behaviors observed in experiments and quantitatively interpreted the experimental observations in terms of the activities of different deformation mechanisms and the evolution of texture. It is found that in-plane pre-tension has a slight effect on the subsequent deformation, whereas pre-compression and the reverse tension after compression have a significant effect on the subsequent deformation. The inelastic behavior under compressive unloading is found to be insignificant at a small strain level but pronounced at a large strain level. This significant effect is mainly ascribed to the activity of twinning and detwinning.
Compression and fast retrieval of SNP data.
Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2014-11-01
The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
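A minimal sketch of idea (i) above: within a linkage-disequilibrium block, each SNP is stored as its element-wise differences from the block's reference SNP, which are sparse when genotypes are strongly correlated. zlib stands in for the entropy coding of the actual SNPack format, and the genotype data are synthetic.

```python
# LD-block differencing sketch for SNP compression: store a reference SNP plus
# sparse differences for the other SNPs in the block, then entropy-code the result.
import zlib
import numpy as np

def compress_block(block: np.ndarray) -> bytes:
    """block: (n_snps, n_subjects) genotypes coded 0/1/2 as int8."""
    reference = block[0]
    diffs = (block[1:] - reference).astype(np.int8)   # mostly zeros under strong LD
    payload = reference.astype(np.int8).tobytes() + diffs.tobytes()
    return zlib.compress(payload, level=9)

rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=5000, dtype=np.int8)
block = np.tile(ref, (20, 1))
mask = rng.random(block.shape) < 0.02                 # 2% of genotypes differ from the reference
block[mask] = rng.integers(0, 3, size=mask.sum(), dtype=np.int8)
print("compressed:", len(compress_block(block)), "bytes vs raw", block.nbytes)
```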
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for representing the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
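A minimal sketch of the coding steps described above, using PyWavelets: DWT, an energy-packing threshold, a binary significance map with run-length coding, and the significant coefficients kept separately. The wavelet ('db4'), the decomposition level, the single global 99% energy target (the paper thresholds three coefficient groups separately), and the crude normalization are all illustrative assumptions.

```python
# Wavelet ECG coding sketch: DWT -> energy-packing threshold -> binary significance
# map (run-length coded) + significant coefficients.
import itertools
import numpy as np
import pywt  # PyWavelets

def encode(ecg, energy_target=0.99):
    x = ecg / (np.max(np.abs(ecg)) + 1e-12)                    # crude stand-in for preprocessing
    coeffs = np.concatenate(pywt.wavedec(x, "db4", level=5))   # DWT of the segment
    order = np.argsort(np.abs(coeffs))[::-1]
    energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    keep = order[: np.searchsorted(energy, energy_target) + 1] # energy-packing threshold
    sig_map = np.zeros(coeffs.size, dtype=np.uint8)
    sig_map[keep] = 1                                          # binary significance map
    runs = [(int(v), len(list(g))) for v, g in itertools.groupby(sig_map)]  # run-length code
    return runs, coeffs[sig_map == 1]                          # map + significant coefficients

# usage with a synthetic segment:
t = np.linspace(0, 4, 2048)
runs, significant = encode(np.sin(2 * np.pi * 5 * t) + 0.05 * np.cos(2 * np.pi * 50 * t))
print(len(runs), "runs;", significant.size, "significant coefficients")
```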
Compression and fast retrieval of SNP data
Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2014-01-01
Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
Guo, H X; Heinämäki, J; Yliruusi, J
1999-09-20
Direct compression of riboflavin sodium phosphate tablets was studied by confocal laser scanning microscopy (CLSM). The technique is non-invasive and generates three-dimensional (3D) images. Tablets of 1% riboflavin sodium phosphate with two grades of microcrystalline cellulose (MCC) were individually compressed at compression forces of 1.0 and 26.8 kN. The behaviour and deformation of drug particles on the upper and lower surfaces of the tablets were studied under compression forces. Even at the lower compression force, distinct recrystallized areas in the riboflavin sodium phosphate particles were observed in both Avicel PH-101 and Avicel PH-102 tablets. At the higher compression force, the recrystallization of riboflavin sodium phosphate was more extensive on the upper surface of the Avicel PH-102 tablet than the Avicel PH-101 tablet. The plastic deformation properties of both MCC grades reduced the fragmentation of riboflavin sodium phosphate particles. When compressed with MCC, riboflavin sodium phosphate behaved as a plastic material. The riboflavin sodium phosphate particles were more tightly bound on the upper surface of the tablet than on the lower surface, and this could also be clearly distinguished by CLSM. Drug deformation could not be visualized by other techniques. Confocal laser scanning microscopy provides valuable information on the internal mechanisms of direct compression of tablets.
Lee, Donghee; Erickson, Alek; You, Taesun; Dudley, Andrew T; Ryu, Sangjin
2018-06-13
Hyaline cartilage is a specialized type of connective tissue that lines many moveable joints (articular cartilage) and contributes to bone growth (growth plate cartilage). Hyaline cartilage is composed of a single cell type, the chondrocyte, which produces a unique hydrated matrix to resist compressive stress. Although compressive stress has profound effects on transcriptional networks and matrix biosynthesis in chondrocytes, mechanistic relationships between strain, signal transduction, cell metabolism, and matrix production remain superficial. Here, we describe the development and validation of a polydimethylsiloxane (PDMS)-based pneumatic microfluidic cell compression device which generates multiple compression conditions in a single platform. The device contained an array of PDMS balloons of different sizes which were actuated by pressurized air, and the balloons compressed chondrocytes in alginate hydrogel constructs. Our characterization and testing of the device showed that the developed platform could compress chondrocytes at various magnitudes simultaneously with negligible effect on cell viability. Also, the device is compatible with live cell imaging to probe early effects of compressive stress, and it can be rapidly dismantled to facilitate molecular studies of compressive stress on transcriptional networks. Therefore, the proposed device will enhance the productivity of chondrocyte mechanobiology studies, and it can be applied to study the mechanobiology of other cell types.
Impacts of compression on crystallization behavior of freeze-dried amorphous sucrose.
Imamura, Koreyoshi; Nomura, Mayo; Tanaka, Kazuhiro; Kataoka, Nobuhide; Oshitani, Jun; Imanaka, Hiroyuki; Nakanishi, Kazuhiro
2010-03-01
An amorphous matrix comprised of sugar molecules is used as excipient and stabilizing agent for labile ingredients in the pharmaceutical industry. The amorphous sugar matrix is often compressed into a tablet form to reduce the volume and improve handling. Herein, the effect of compression on the crystallization behavior of an amorphous sucrose matrix was investigated. Amorphous sucrose samples were prepared by freeze-drying and compressed under different conditions, followed by analyses by differential scanning calorimetry, isothermal crystallization tests, X-ray powder diffractometry, Fourier transform infrared spectroscopy (FTIR), and gas pycnometry. The compressed sample had a lower crystallization temperature and a shorter induction period for isothermal crystallization, indicating that compression facilitates the formation of the critical nucleus of a sucrose crystal. Based on FTIR and molecular dynamics simulation results, the conformational distortion of sucrose molecules due to the compression appears to contribute to the increase in the free energy of the system, which leads to the facilitation of critical nucleus formation. An isothermal crystallization test indicated an increase in the growth rate of sucrose crystals by the compression. This can be attributed to the transformation of the microstructure from porous to nonporous, as the result of compression. 2009 Wiley-Liss, Inc. and the American Pharmacists Association
Compressed Sensing for Body MRI
Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh
2016-01-01
The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression
NASA Astrophysics Data System (ADS)
Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping
2015-10-01
Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to density increase of ~25% and Young’s modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidences that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former than the latter upon heating and compression as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.
GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.
Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun
2017-12-28
The dramatic development of DNA sequencing technology is generating real big data, demanding more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and more cheaply, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud by choice. We evaluated the performance of GTZ on several diverse FASTQ benchmarks. Results show that in most cases it outperforms many other tools in terms of compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz .
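As a rough illustration of the adaptive context-modelling step described above (not GTZ's actual model; the context order, alphabet, and entropy-only accounting are assumptions for the sketch), the snippet below estimates per-symbol probabilities for a DNA line with an adaptive order-1 model and reports the ideal code length an arithmetic coder would approach.

```python
from collections import defaultdict
from math import log2

def ideal_code_length(sequence, alphabet="ACGTN"):
    """Adaptive order-1 context model: P(symbol | previous symbol) is estimated
    from counts seen so far (add-one smoothed), and the ideal arithmetic-coding
    cost -log2(P) is accumulated symbol by symbol."""
    counts = defaultdict(lambda: {s: 1 for s in alphabet})  # one smoothed table per context
    prev, bits = None, 0.0
    for sym in sequence:
        ctx = counts[prev]
        total = sum(ctx.values())
        bits += -log2(ctx[sym] / total)   # coding cost under the current estimate
        ctx[sym] += 1                     # update the model after "coding" the symbol
        prev = sym
    return bits

reads = ["ACGTACGTACGTACGT", "AAAACCCCGGGGTTTT"]
for r in reads:
    print(r, f"{ideal_code_length(r):.1f} bits vs {2 * len(r)} bits raw (2 bits/base)")
```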
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and makes it possible to implement any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
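A minimal sketch of the SVD-based compression step described above, on synthetic low-rank data with assumed sizes (the phase unwinding here simply multiplies each frame by the conjugate of a per-frame transceiver phase; this is not the authors' reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms, n_frames, keep = 500, 1000, 6        # assumed sizes; keep 6 compressed frames

# Synthetic low-rank fingerprint dictionary (real MRF dictionaries are highly compressible).
U = rng.standard_normal((n_atoms, 5))
V = np.sin(np.outer(np.arange(1, 6), np.linspace(0, 2 * np.pi, n_frames)))
dictionary = U @ V

# Per-frame transceiver phase, as might be extracted from the reconstructed images.
phase = np.exp(1j * rng.uniform(-np.pi, np.pi, n_frames))
frames = dictionary * phase                    # phase-wound "measured" time frames

# 1) Unwind the phase of each time frame, then 2) compress with a truncated SVD.
unwound = (frames * np.conj(phase)).real
_, _, vt = np.linalg.svd(unwound, full_matrices=False)
basis = vt[:keep]                              # keep x n_frames temporal basis

compressed = unwound @ basis.T                 # n_atoms x keep compressed representation
restored = compressed @ basis                  # back-projection as a sanity check
err = np.linalg.norm(restored - unwound) / np.linalg.norm(unwound)
print(f"compressed {n_frames} time frames to {keep}; relative error {err:.2e}")
```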
NASA Astrophysics Data System (ADS)
Zhou, Ping; Beeh, Elmar; Friedrich, Horst E.
2016-03-01
Magnesium alloys are promising materials for lightweight design in the automotive industry due to their high strength-to-mass ratio. This study examines the influence of tension-compression asymmetry on the radius of curvature and energy absorption capacity of AZ31B-O magnesium alloy sheets in bending. The mechanical properties were characterized using tension, compression, and three-point bending tests. The material exhibits significant tension-compression asymmetry in terms of strength and strain hardening rate due to extension twinning in compression. The compressive yield strength is much lower than the tensile yield strength, while the strain hardening rate is much higher in compression. Furthermore, tension-compression asymmetry in terms of the r value (Lankford value) was also observed. The r value in tension is much higher than that in compression. The bending results indicate that the AZ31B-O sheet can outperform steel and aluminum sheets in terms of specific energy absorption in bending, mainly due to its low density. In addition, the AZ31B-O sheet deformed with a larger radius of curvature than the steel and aluminum sheets, which brings a benefit to energy absorption capacity. Finally, finite element simulation of three-point bending was performed using LS-DYNA, and the results confirmed that the larger radius of curvature of a magnesium specimen is mainly attributed to the high strain hardening rate in compression.
Fluffy dust forms icy planetesimals by static compression
NASA Astrophysics Data System (ADS)
Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji
2013-09-01
Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is a key in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10-3 g/cm3 in coagulation, which is more compact than is the case with collisional compression. Then, they are compressed more by self-gravity to 10-1 g/cm3 when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result proposes a concrete initial condition of planetesimals for the later stages of the planet formation.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
Impact of multilayered compression bandages on sub-bandage interface pressure: a model.
Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A
2011-03-01
Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. The objective was to calculate the interface pressure when a multi-component compression bandage system is applied to a leg. Thick-wall cylinder theory was used to estimate the sub-bandage pressure over the leg when a multi-component compression bandage is applied. A mathematical model was developed, based on thick-wall cylinder theory, to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers. However, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by single components without considering the bandage thickness will result in an overestimate of the overall interface pressure produced by the multi-component compression system. At the ankle (circumference 25 cm) this error can be 19.2% or more in the case of four-component bandaging systems. Bandage thickness should be considered when calculating the pressure applied using multi-component compression systems.
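The effect described above can be sketched numerically. The snippet below uses the simpler thin-wall Laplace relation P = T/r per layer rather than the thick-wall cylinder model developed in the paper, but it captures the point that each applied component increases the effective limb radius, so naively adding single-layer pressures overestimates the total. The tension, thickness, and circumference values are illustrative assumptions, not the paper's data.

```python
import math

MMHG = 133.322                                      # pascals per mmHg

def interface_pressure(circumference_cm, tensions_n_per_m, thickness_m):
    """Sum per-layer pressures with the thin-wall Laplace relation P = T / r,
    increasing the radius by one bandage thickness before each additional layer."""
    r = circumference_cm / 100.0 / (2.0 * math.pi)  # limb radius in metres
    naive = sum(t / r for t in tensions_n_per_m)    # ignores the growing radius
    total, radius = 0.0, r
    for t in tensions_n_per_m:
        total += t / radius                         # pressure added by this layer
        radius += thickness_m                       # the next layer wraps a bigger limb
    return naive, total

naive, corrected = interface_pressure(25.0, [60.0] * 4, 0.003)  # 4 layers, 3 mm, 60 N/m
print(f"naive sum: {naive / MMHG:.1f} mmHg, thickness-corrected: {corrected / MMHG:.1f} mmHg")
print(f"overestimate from ignoring thickness: {100 * (naive - corrected) / corrected:.1f}%")
```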
ERGC: an efficient referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2015-11-01
Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications, so the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
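A toy illustration of the reference-based idea (not the ERGC algorithm itself; the greedy matcher, the minimum match length, and the example sequences are assumptions): the target genome is encoded as copy operations against the reference plus literal bases where no sufficiently long match is found.

```python
def ref_compress(reference, target, min_match=8):
    """Greedy reference-based encoder: emit ('copy', pos, length) for runs found
    in the reference and ('lit', base) otherwise. A toy stand-in for reference-
    based genome compression, not a production algorithm."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        # naive search for the longest match starting at i (O(n*m); fine for a demo)
        for j in range(len(reference)):
            k = 0
            while (i + k < len(target) and j + k < len(reference)
                   and target[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_match:
            ops.append(("copy", best_pos, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

def ref_decompress(reference, ops):
    out = []
    for op in ops:
        if op[0] == "copy":
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return "".join(out)

ref = "ACGTACGTTTGCAGCATTACGGATCCA" * 4
tgt = ref[:40] + "T" + ref[41:90] + "GGG" + ref[90:]   # a SNP plus a small insertion
ops = ref_compress(ref, tgt)
assert ref_decompress(ref, ops) == tgt
print(len(ops), "operations encode", len(tgt), "bases")
```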
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 (JP2K) outperforms the other methods by achieving the best CR. In the lossy case, JP2K and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2K outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters can still be within the range of reported values.
Nonpainful wide-area compression inhibits experimental pain.
Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena
2016-09-01
Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity has an analgesic effect when applied to the lower limbs, hypothesizing that larger compression areas induce stronger analgesic effects, and to test whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.
Haffner, Leopold; Mahling, Moritz; Muench, Alexander; Castan, Christoph; Schubert, Paul; Naumann, Aline; Reddersen, Silke; Herrmann-Werner, Anne; Reutershan, Jörg; Riessen, Reimer; Celebi, Nora
2017-03-03
Chest compressions are a core element of cardio-pulmonary resuscitation. Despite periodic training, real-life chest compressions have been reported to be overly shallow and/or fast, very likely affecting patient outcomes. We investigated the effect of a brief Crew Resource Management (CRM) training program on the correction rate of improperly executed chest compressions in a simulated cardiac arrest scenario. Final-year medical students (n = 57) were randomised to receive a 10-min computer-based CRM or a control training on ethics. Acting as team leaders, subjects performed resuscitation in a simulated cardiac arrest scenario before and after the training. Team members performed standardised overly shallow and fast chest compressions. We analysed how often the team leader recognised and corrected improper chest compressions, as well as communication and resuscitation quality. After the CRM training, team leaders corrected improper chest compressions (35.5%) significantly more often compared with those undergoing control training (7.7%, p = 0.03*). Consequently, about four students have to be trained (number needed to treat = 3.6) for one improved chest compression scenario. Communication quality assessed by the Leader Behavior Description Questionnaire significantly increased in the intervention group by a mean of 4.5 compared with 2.0 (p = 0.01*) in the control group. A computer-based, 10-min CRM training improved the recognition of ineffective chest compressions. Furthermore, communication quality increased. As guideline-adherent chest compressions have been linked to improved patient outcomes, our CRM training might represent a brief and affordable approach to increase chest compression quality and potentially improve patient outcomes.
Atkins, Dianne L; de Caen, Allan R; Berger, Stuart; Samson, Ricardo A; Schexnayder, Stephen M; Joyner, Benny L; Bigham, Blair L; Niles, Dana E; Duff, Jonathan P; Hunt, Elizabeth A; Meaney, Peter A
2018-01-02
This focused update to the American Heart Association guidelines for cardiopulmonary resuscitation (CPR) and emergency cardiovascular care follows the Pediatric Task Force of the International Liaison Committee on Resuscitation evidence review. It aligns with the International Liaison Committee on Resuscitation's continuous evidence review process, and updates are published when the International Liaison Committee on Resuscitation completes a literature review based on new science. This update provides the evidence review and treatment recommendation for chest compression-only CPR versus CPR using chest compressions with rescue breaths for children <18 years of age. Four large database studies were available for review, including 2 published after the "2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care." Two demonstrated worse 30-day outcomes with chest compression-only CPR for children 1 through 18 years of age, whereas 2 studies documented no difference between chest compression-only CPR and CPR using chest compressions with rescue breaths. When the results were analyzed for infants <1 year of age, CPR using chest compressions with rescue breaths was better than no CPR but was no different from chest compression-only CPR in 1 study, whereas another study observed no differences among chest compression-only CPR, CPR using chest compressions with rescue breaths, and no CPR. CPR using chest compressions with rescue breaths should be provided for infants and children in cardiac arrest. If bystanders are unwilling or unable to deliver rescue breaths, we recommend that rescuers provide chest compressions for infants and children. © 2017 American Heart Association, Inc.
Oichi, Takeshi; Oshima, Yasushi; Okazaki, Rentaro; Azuma, Seiichi
2016-01-01
The objective of this study is to investigate whether preexisting severe cervical spinal cord compression affects the severity of paralysis once patients develop traumatic cervical spinal cord injury (CSCI) without bone injury. We retrospectively investigated 122 consecutive patients with traumatic CSCI without bone injury. The severity of paralysis on admission was assessed by the American Spinal Injury Association impairment scale (AIS). The degree of preexisting cervical spinal cord compression was evaluated by the maximum spinal cord compression (MSCC) and was divided into three categories: minor compression (MSCC ≤ 20 %), moderate compression (20 % < MSCC ≤ 40 %), and severe compression (40 % < MSCC). We investigated soft-tissue damage on magnetic resonance imaging to estimate the external force applied. Other potential risk factors, including age, sex, fused vertebra, and ossification of the longitudinal ligament, were also reviewed. A multivariate logistic regression analysis was performed to investigate the risk factors for developing severe paralysis (AIS A-C) on admission. Our study included 103 males and 19 females with a mean age of 65 years. Sixty-one patients showed severe paralysis (AIS A-C) on admission. The average MSCC was 22 %. Moderate compression was observed in 41 patients, and severe compression in 20. Soft-tissue damage was observed in 91 patients. A multivariate analysis showed that severe cervical spinal cord compression significantly affected the severity of paralysis at the time of injury, whereas mild and moderate compression did not. Soft-tissue damage was also significantly associated with severe paralysis on admission. Preexisting severe cervical cord compression is an independent risk factor for severe paralysis once patients develop traumatic CSCI without bone injury.
A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.
Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P
2010-10-01
The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians differ substantially from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
Chaudhary, R S; Patel, C; Sevak, V; Chan, M
2018-01-01
The study evaluates the use of Kollidon VA® 64 and a combination of Kollidon VA® 64 with Kollidon VA® 64 Fine as excipients in a direct compression tablet process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. Interparticulate void space is higher for such excipients due to the hollow structure of the Kollidon VA® 64 particles. During tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA® 64 and Kollidon VA® 64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA® 64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA® 64 and two mixes containing Kollidon VA® 64 and Kollidon VA® 64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as a combination of Kollidon VA® 64 and Kollidon VA® 64 Fine in a ratio of 77:23. This combination, compressed with the predicted parameters based on the statistical modeling, applying a main compression force between 5 and 15 kN, a pre-compression force between 2 and 3 kN, a feeder speed fixed at 25 rpm and a compression speed of 45-49 rpm, produced tablets with hardness ranging between 19 and 21 kp, with no friability, capping, or lamination issues.
Chen, Li-Jin; Wang, Yueh-Jan; Tseng, Guo-Fang
2017-10-24
Trauma and tumors compressing the brain distort the underlying cortical neurons. Compressed cortical neurons remodel their dendrites instantly; the effects on axons, however, remain unclear. Using a rat epidural bead implantation model, we studied the effects of unilateral somatosensory cortical compression on its transcallosal projection and the reversibility of the changes following decompression. Compression reduced the density, branching profuseness and boutons of the projection axons in the contralateral homotopic cortex 1 week and 1 month post-compression. Projection fiber density was higher 1 month than 1 week post-compression, suggesting adaptive temporal changes. Compression reduced contralateral cortical synaptophysin, vesicular glutamate transporter 1 (VGLUT1) and postsynaptic density protein-95 (PSD95) expression within a week, and the first two marker proteins further by 1 month. βIII-tubulin and kinesin light chain (KLC) expression in the corpus callosum (CC), where the transcallosal axons traveled, was also decreased. Kinesin heavy chain (KHC) levels in the CC were temporarily increased 1 week after compression. Decompression increased transcallosal axon density and branching profuseness to levels higher than sham, while bouton density returned to sham levels. This was accompanied by restoration of synaptophysin, VGLUT1 and PSD95 expression in the contralateral cortex of the 1-week, but not the 1-month, compression rats. Decompression restored βIII-tubulin, but not KLC and KHC, expression in the CC. However, KLC and KHC expression in the cell bodies of the layer II/III pyramidal neurons partially recovered. Our results show that cerebral compression compromised cortical axonal outputs and reduced the transcallosal projection. Some of these changes did not recover after long-term decompression. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contain depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
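A rough sketch of the first scheme described above (color reduction followed by a general-purpose deflate pass over the color and depth planes). The synthetic frame contents, the 3-bits-per-channel reduction, and the use of zlib on both planes together are assumptions made for this sketch, not the authors' exact pipeline.

```python
import zlib
import numpy as np

# Synthetic, spatially smooth 3D video frame: RGB color plus a 16-bit depth map.
# (Real captured frames have similar spatial structure; random data would not compress.)
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
color = np.stack([xx % 256, yy % 256, (xx + yy) % 256], axis=-1).astype(np.uint8)
depth = ((xx + yy) * 4).astype(np.uint16)

# Color reduction: drop the low-order bits of each channel, then deflate color + depth.
reduced_color = color & 0b11100000               # keep 3 bits per channel
payload = reduced_color.tobytes() + depth.tobytes()
compressed = zlib.compress(payload, level=6)

raw_bytes = color.nbytes + depth.nbytes
print(f"raw {raw_bytes} B -> {len(compressed)} B "
      f"(ratio {raw_bytes / len(compressed):.1f}:1)")
```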
How radiologic/clinicopathologic features relate to compressive symptoms in benign thyroid disease.
Siegel, Bianca; Ow, Thomas J; Abraham, Suzanne S; Loftus, Patricia A; Tassler, Andrew B; Smith, Richard V; Schiff, Bradley A
2017-04-01
To identify compressive symptomatology in a patient cohort with benign thyroid disease who underwent thyroidectomy. To determine radiographic/clinicopathologic features related to and predictive of a compressive outcome. Retrospective cohort study. Medical records of 232 patients with benign thyroid disease on fine needle aspiration who underwent thyroidectomy from 2009 to 2012 at an academic medical center were reviewed. Data collection and analyses involved subjects' demographics, compressive symptoms, preoperative airway encroachment, intubation complications, specimen weight, and final pathologic diagnosis. Subjects were ages 14 to 86 years (mean: 52.4 years). Ninety-six subjects (41.4%) reported compressive symptomatology of dysphagia (n =74; 32%), dyspnea (n = 39; 17%), and hoarseness (n = 24; 10%). Ninety-seven (42.2%) had preoperative airway encroachment. Dyspnea was significantly related to tracheal compression, tracheal deviation, and substernal extension. Dysphagia was related to tracheal compression and tracheal deviation. Compressive symptoms and preoperative airway encroachment were not related to intubation complications. Final pathologic diagnosis was not related to compressive symptoms, whereas specimen weight was significantly related to dyspnea and dysphagia. Final pathology revealed 74 subjects (32%) with malignant lesions. Malignant and benign nodular subject groups differed significantly in substernal extension, gland weight, tracheal deviation, and dyspnea. Logit modeling for dyspnea was significant for tracheal compression as a predictor for the likelihood of dyspnea. Dyspnea was closely related to preoperative airway encroachment and most indicative of a clinically relevant thyroid in our cohort with benign thyroid disease. Tracheal compression was found to have predictive value for the likelihood of a dyspneic outcome. 4. Laryngoscope, 127:993-997, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Kurokawa, A. K.; Miwa, T.; Okumura, S.; Uesugi, K.
2017-12-01
After an ash-dominated Strombolian eruption, a considerable amount of ash falls back into the volcanic conduit, forming a dense near-surface region compacted by its own weight and that of other fallback clasts (Patrick et al., 2007). Gas accumulation below this dense cap causes a substantial increase in pressure within the conduit, causing the volcanic activity to shift to the preliminary stages of a forthcoming eruption (Del Bello et al., 2015). Under such conditions, the rheology of the fallback ash plays an important role because it controls whether the fallback ash can form the cap. However, little attention has been given to this point. We examined the rheology of ash collected at Stromboli volcano via intermittent compression experiments, varying temperature and compression time/rate. The ash was deformed at a constant rate during the compression process and then held without further deformation during the rest process. The compression and rest processes were repeated during each experiment to follow rheological variations with the progression of compaction. Viscoelastic changes during the experiment were estimated with a Maxwell model. The results show that both elasticity and viscosity increase with decreasing porosity. On the other hand, the elasticity shows strong rate dependence in both the compression and rest processes, while the viscosity depends mainly on temperature, although the compression rate also affects the viscosity in the compression process. Thus, the ash behaves either elastically or viscously depending on the experimental process, temperature, and compression rate/time. The viscoelastic characteristics can be explained by the magnitude relationships between the characteristic relaxation times and the durations of the compression and rest processes. This indicates that the balance of these time scales is key to determining the rheological characteristics, and whether the ash behaves elastically or viscously may control cyclic Strombolian eruptions.
Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica
2016-11-01
This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, the population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and the Borg Exertion Scale. CPR quality measures included the total number of compressions and the number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, female 66.7%, past CPR training 60.3%. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7; p<0.0001) and more adequate chest compressions (381.5 v. 324.9, mean difference 62.0; p=0.0001), although good compressions/minute declined significantly faster with the CCC method (p=0.0002). CPR quality decreased significantly faster when performing CCC compared to 30:2. However, performing CCC produced more adequate compressions overall with a similar level of fatigue compared to the 30:2 method.
Effects of Compression on Speech Acoustics, Intelligibility, and Sound Quality
Souza, Pamela E.
2002-01-01
The topic of compression has been discussed quite extensively in the last 20 years (e.g., Braida et al., 1982; Dillon, 1996, 2000; Dreschler, 1992; Hickson, 1994; Kuk, 2000 and 2002; Kuk and Ludvigsen, 1999; Moore, 1990; Van Tasell, 1993; Venema, 2000; Verschuure et al., 1996; Walker and Dillon, 1982). However, the latest comprehensive update by this journal was published in 1996 (Kuk, 1996). Since that time, use of compression hearing aids has increased dramatically, from half of hearing aids dispensed only 5 years ago to four out of five hearing aids dispensed today (Strom, 2002b). Most of today's digital and digitally programmable hearing aids are compression devices (Strom, 2002a). It is probable that within a few years, very few patients will be fit with linear hearing aids. Furthermore, compression has increased in complexity, with greater numbers of parameters under the clinician's control. Ideally, these changes will translate to greater flexibility and precision in fitting and selection. However, they also increase the need for information about the effects of compression amplification on speech perception and speech quality. As evidenced by the large number of sessions at professional conferences on fitting compression hearing aids, clinicians continue to have questions about compression technology and when and how it should be used. How does compression work? Who are the best candidates for this technology? How should adjustable parameters be set to provide optimal speech recognition? What effect will compression have on speech quality? These and other questions continue to drive our interest in this technology. This article reviews the effects of compression on the speech signal and the implications for speech intelligibility, quality, and design of clinical procedures. PMID:25425919
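As a generic illustration of the kind of input/output rule a compression hearing aid applies (a sketch with assumed kneepoint, compression ratio, and gain values, not taken from this article), the function below maps input level to output level: linear gain below the compression kneepoint and reduced level growth above it.

```python
def compressor_output_level(input_db, kneepoint_db=50.0, ratio=3.0, gain_db=20.0):
    """Generic static wide-dynamic-range compression rule (illustrative only):
    linear gain below the kneepoint; above it, each dB of input yields 1/ratio dB
    of additional output."""
    if input_db <= kneepoint_db:
        return input_db + gain_db
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio

for level in (40, 50, 65, 80, 95):
    print(f"input {level} dB SPL -> output {compressor_output_level(level):.1f} dB SPL")
```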
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
Cachia, Victor V; Culbert, Brad; Warren, Chris; Oka, Richard; Mahar, Andrew
2003-01-01
The purpose of this study was to evaluate the structural and mechanical characteristics of a new and unique titanium cortical-cancellous helical compression anchor with BONE-LOK (Triage Medical, Inc., Irvine, CA) technology for compressive internal fixation of fractures and osteotomies. This device provides fixation through the use of a distal helical anchor and a proximal retentive collar that are united by an axially movable pin (U.S. and international patents issued and pending). The helical compression anchor (2.7-mm diameter) was compared with 3.0-mm diameter titanium cancellous screws (Synthes, Paoli, PA) for pullout strength and compression in 7# and 12# synthetic rigid polyurethane foam (simulated bone matrix), and for 3-point bending stiffness. The following results (mean +/- standard deviation) were obtained: foam block pullout strength in 12# foam: 2.7-mm helical compression anchor 70 +/- 2.0 N and 3.0-mm titanium cancellous screws 37 +/- 11 N; in 7# foam: 2.7-mm helical compression anchor 33 +/- 3 N and 3.0-mm titanium cancellous screws 31 +/- 12 N. Three-point bending stiffness, 2.7-mm helical compression anchor 988 +/- 68 N/mm and 3.0-mm titanium cancellous screws 845 +/- 88 N/mm. Compression strength testing in 12# foam: 2.7-mm helical compression anchor 70.8 +/- 4.8 N and 3.0-mm titanium cancellous screws 23.0 +/- 3.1 N, in 7# foam: 2.7-mm helical compression anchor 42.6 +/- 3.2 N and 3.0-mm titanium cancellous screws 10.4 +/- 0.9 N. Results showed greater pullout strength, 3-point bending stiffness, and compression strength for the 2.7-mm helical compression anchor as compared with the 3.0-mm titanium cancellous screws in these testing models. This difference represents a distinct advantage in the new device that warrants further in vivo testing.
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
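A compact sketch of the transform-plus-scalar-quantization stage discussed above. A one-level Haar transform stands in for the 9/7 analysis filters, the image tile is synthetic, and the per-subband step sizes are arbitrary stand-ins for a bit-allocation rule; the entropy-coding stage is omitted.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform returning LL, LH, HL, HH subbands
    (a stand-in for the 9/7 biorthogonal filters used by WSQ)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def quantize(subband, step):
    """Uniform scalar quantization; a coarser step spends fewer bits on a subband."""
    return np.round(subband / step).astype(np.int32)

yy, xx = np.mgrid[0:64, 0:64]
image = 128 + 60 * np.sin(xx / 9.0) * np.cos(yy / 7.0)       # smooth stand-in tile

steps = {"LL": 1.0, "LH": 4.0, "HL": 4.0, "HH": 8.0}          # illustrative allocation
subbands = dict(zip(["LL", "LH", "HL", "HH"], haar2d(image)))
for name, sb in subbands.items():
    q = quantize(sb, steps[name])
    print(f"{name}: {np.count_nonzero(q)}/{q.size} nonzero coefficients "
          f"at step {steps[name]}")
```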
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets; thus, loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves the compression schemes, which provides better tolerance of conditions with a high BER.
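A toy sketch of the delta-encoding idea common to several of these schemes (the field names and the encoding format are assumptions, not any of the RFC formats): the first packet carries the full header, and later packets carry only the fields that changed.

```python
FIELDS = ["src", "dst", "seq", "ack", "window"]   # illustrative header fields

def compress_headers(headers):
    """First header is sent in full; subsequent ones as {field: new_value} deltas."""
    out, prev = [], None
    for h in headers:
        if prev is None:
            out.append(("full", dict(h)))
        else:
            out.append(("delta", {k: h[k] for k in FIELDS if h[k] != prev[k]}))
        prev = h
    return out

def decompress_headers(stream):
    # If a delta packet is lost, this state desynchronizes until the next full
    # header arrives (the loss-propagation issue discussed above).
    headers, current = [], {}
    for kind, payload in stream:
        if kind == "full":
            current = dict(payload)
        else:
            current = {**current, **payload}      # apply only the changed fields
        headers.append(dict(current))
    return headers

pkts = [
    {"src": 1, "dst": 2, "seq": 100, "ack": 0, "window": 8192},
    {"src": 1, "dst": 2, "seq": 1100, "ack": 0, "window": 8192},
    {"src": 1, "dst": 2, "seq": 2100, "ack": 50, "window": 8192},
]
stream = compress_headers(pkts)
assert decompress_headers(stream) == pkts
print([len(p) for _, p in stream], "fields sent per packet instead of", len(FIELDS))
```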
Tensile and compressive creep behavior of extruded Mg–10Gd–3Y–0.5Zr (wt.%) alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; The State Key Laboratory of Metal Matrix Composites, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240; Wang, Q.D., E-mail: wangqudong@sjtu.edu.cn
2015-01-15
The tensile and compressive creep behavior of an extruded Mg–10Gd–3Y–0.5Zr (wt.%) alloy was investigated at temperatures ranging from 200 °C to 300 °C and under stresses ranging from 30 MPa to 120 MPa. There existed an asymmetry in the tensile and compressive creep properties. The minimum creep rate of the alloy was slightly greater in tension than in compression. The measured values of the transient strain and initial creep rate in compression were greater than those in tension. The creep stress exponent was approximately 2.5 at low temperatures (T < 250 °C) and 3.4 at higher temperatures both in tension and in compression. The compression creep activation energy at low temperatures and high temperatures was 83.4 and 184.3 kJ/mol respectively, while one activation energy (184 kJ/mol) represented the tensile–creep behavior over the temperature range examined. Dislocation creep was suggested to be the main mechanism in tensile creep and in the high-temperature regime in compressive creep, while grain boundary sliding was suggested to dominate in the low-temperature regime in compressive creep. Precipitate free zones were observed near grain boundaries perpendicular to the loading direction in tension and parallel to the loading direction in compression. Electron backscattered diffraction analysis revealed that the texture changed slightly during creep. Non-basal slip was suggested to contribute to the deformation after basal slip was introduced. In the tensile–creep ruptured specimens, intergranular cracks were mainly observed at general high-angle boundaries. Highlights: • Creep behavior of an extruded Mg–RE alloy was characterized by EBSD. • T5 aging treatment enhanced the tension–compression creep asymmetry. • The grains grew slightly during tensile creep, but not for compressive creep. • Precipitate free zones (PFZs) were observed at specific grain boundaries. • Intergranular fracture was dominant and cracks mainly originated at GHABs.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
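The pre-processing described above (subtracting the mean of each spatial plane of a spatially low-pass subband, then converting to sign-magnitude form) can be sketched as follows; the array shape, the synthetic data, and the integer rounding are assumptions, and neither the 3D DWT nor the entropy coder is included.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a spatially low-pass subband of a wavelet-transformed hyperspectral
# cube: (bands, rows, cols), one spatial plane per spectral band.
subband = rng.normal(loc=500.0, scale=30.0, size=(8, 16, 16))

# 1) Subtract each spatial plane's mean prior to encoding.
plane_means = subband.mean(axis=(1, 2), keepdims=True)
centered = subband - plane_means

# 2) Convert to sign-magnitude form: an integer magnitude plus a separate sign bit.
rounded = np.rint(centered).astype(np.int32)
signs = (rounded < 0).astype(np.uint8)
magnitudes = np.abs(rounded)

print("per-plane means removed:", np.allclose(centered.mean(axis=(1, 2)), 0.0))
print("max magnitude after centering:", int(magnitudes.max()),
      "vs raw max value", int(subband.max()))
```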
Understanding turbulence in compressing plasmas and its exploitation or prevention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidovits, Seth
Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a lower bound on the growth of turbulence in molecular clouds. This bound raises questions about the level of dissipation in existing molecular cloud models. Finally, the observations originally motivating the thesis, Z-pinch measurements suggesting dominant turbulent energy, are reexamined by self-consistently accounting for the impact of the turbulence on the spectroscopic analysis. This is found to strengthen the evidence that the multiple observations describe a highly turbulent plasma state.
An efficient compression scheme for bitmap indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time is proportional to the index size. This indicates that the compressed bitmap indices are efficient for very large datasets.
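A compact sketch of the word-aligned hybrid idea described above, assuming 32-bit words with 31 payload bits per literal and fill words that carry a fill bit plus a run length counted in 31-bit groups. This is an illustrative encoder only, not the authors' implementation; it ignores the leftover-bit bookkeeping of the final word and omits the decode and bitwise-operation paths.

```python
def wah_encode(bits):
    """Word-aligned hybrid encoding of a bit string of '0'/'1' characters.
    Each 31-bit group becomes either a literal word (MSB = 0, 31 payload bits)
    or is merged into a fill word (MSB = 1, bit 30 = fill value,
    low 30 bits = run length in groups)."""
    bits = bits + "0" * (-len(bits) % 31)                 # pad to whole groups
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words = []
    for g in groups:
        if g == "0" * 31 or g == "1" * 31:                # candidate for a fill word
            fill = int(g[0])
            if words and (words[-1] >> 31) == 1 and ((words[-1] >> 30) & 1) == fill:
                words[-1] += 1                            # extend the previous run
            else:
                words.append((1 << 31) | (fill << 30) | 1)
        else:
            words.append(int(g, 2))                       # literal word, MSB stays 0
    return words

bitmap = "1" * 200 + "0" * 5000 + "1010110" + "0" * 300
words = wah_encode(bitmap)
print(f"{len(bitmap)} bitmap bits -> {len(words)} 32-bit words "
      f"({len(words) * 32} bits compressed)")
```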
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-03-31
With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules.
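To illustrate the idea of using compressed size as an informational-complexity measure for a sequence/structure pair, the sketch below uses zlib as a generic stand-in compressor (RNACompress's grammar-based model is not reproduced here). The example strings are made up, and the shuffled "structure" is not a valid secondary structure; it only serves to show that less regular input yields a higher compression-based complexity.

```python
import random
import zlib

def compression_complexity(text):
    """Bits per character after zlib compression; a generic stand-in for a
    compression-based informational-complexity measure (not RNACompress itself)."""
    data = text.encode("ascii")
    return 8 * len(zlib.compress(data, 9)) / len(data)

random.seed(0)
seq = "GC" * 60 + "GAAA" + "CG" * 60                  # regular hairpin-like sequence
structure = "(" * 120 + "...." + ")" * 120            # its dot-bracket structure
shuffled_seq = "".join(random.sample(seq, len(seq)))  # same composition, no order
shuffled_struct = "".join(random.sample(structure, len(structure)))

print("ordered  :", round(compression_complexity(seq + structure), 2), "bits/char")
print("shuffled :", round(compression_complexity(shuffled_seq + shuffled_struct), 2),
      "bits/char")
```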
A Novel Method of Newborn Chest Compression: A Randomized Crossover Simulation Study.
Smereka, Jacek; Szarpak, Lukasz; Ladny, Jerzy R; Rodriguez-Nunez, Antonio; Ruetzler, Kurt
2018-01-01
Objective: To compare a novel two-thumb chest compression technique with standard techniques during newborn resuscitation performed by novice physicians, in terms of median depth of chest compressions, degree of full chest recoil, and effective compression efficacy. Patients and Methods: A total of 74 novice physicians with less than 1 year of work experience participated in the study. They performed chest compressions using three techniques: (A) The new two-thumb technique (nTTT). This novel method of chest compression in an infant consists in using two thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist. (B) The two-finger technique (TFT). With this method, the rescuer compresses the sternum with the tips of two fingers. (C) The two-thumb encircling hands technique (TTHT). Two thumbs are placed over the lower third of the sternum, with the fingers encircling the torso and supporting the back. Results: The median depth of chest compressions for nTTT was 3.8 (IQR, 3.7-3.9) cm, for TFT 2.1 (IQR, 1.7-2.5) cm, and for TTHT 3.6 (IQR, 3.5-3.8) cm. There was a significant difference between nTTT and TFT, and between TTHT and TFT (p < 0.001), for each time interval during resuscitation. The degree of full chest recoil was 93% (IQR, 91-97) for nTTT, 99% (IQR, 96-100) for TFT, and 90% (IQR, 74-91) for TTHT. There was a statistically significant difference in the degree of complete chest relaxation between nTTT and TFT (p < 0.001), between nTTT and TTHT (p = 0.016), and between TFT and TTHT (p < 0.001). Conclusion: The median chest compression depth for nTTT and TTHT is significantly higher than that for TFT. The degree of full chest recoil was highest for TFT, followed by nTTT and TTHT. The effective compression efficiency with nTTT was higher than with TTHT and TFT. Our novel newborn chest compression method in this manikin study provided adequate chest compression depth and degree of full chest recoil, as well as very good effective compression efficiency. Further clinical studies are necessary to confirm these initial results.
NASA Astrophysics Data System (ADS)
Nakata, Kotaro; Hasegawa, Takuma; Oyama, Takahiro; Miyakawa, Kazuya
2018-06-01
Stable isotopes of water (δ2H and δ18O) can improve our understanding of the origin, mixing, and migration of groundwater. In low-permeability formations, they provide information about ion migration mechanisms such as diffusion and/or advection, and are therefore regarded as very important for understanding the migration of water and ions. However, in low-permeability formations it is difficult to obtain groundwater samples as liquid, so pore water must be extracted for analysis. Compressing rock is the most common and widely used method of extracting pore water. However, changes in δ2H and δ18O may take place during compression, because changes in ion concentration have been reported in previous studies. In this study, two natural rocks were compressed, and the changes in δ2H and δ18O with compression pressure were investigated. The mechanisms for the changes in water isotopes observed during compression were then discussed. In addition, δ2H and δ18O of pore water were evaluated by direct vapor equilibration and laser spectrometry (DVE-LS) and compared with those obtained by compression. δ2H was found to change during compression, and part of this change could be explained by the effect of water from closed pores extracted by compression. In addition, water isotopes in both open and closed pores were estimated by combining the results of two kinds of compression experiments. Water isotope values from compression that were not affected by water from closed pores agreed well with those obtained by DVE-LS, indicating that compression reflects mixed information from open and closed pores, whereas DVE-LS reflects only open pores. Thus, comparison of water isotopes obtained by compression and DVE-LS can provide information about water isotopes in closed and open pores.
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-01-01
Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are two fold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measurement of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules. PMID:18373878
Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H
2016-11-29
Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed testing of participants. Twenty-four participants were included: 12 never trained and 12 currently certified in CPR. Comparing pre- and post-training performance, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3, p < 0.001). The CPR-certified group had adequate pre- and post-test compression rates (>100/min) but an improved number of compressions with correct release (53.5 to 94.7, p < 0.001). Both groups had significantly reduced hands-off time after training. Achieving adequate compression depths (>50 mm) remained problematic in both groups. Comparisons between groups indicated significant improvements in compression depth, hand position, and hands-off time in the never-trained compared to the CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR training with a tactile learning component, without fees or advance preparation.
Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention
NASA Astrophysics Data System (ADS)
Davidovits, Seth
Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic, compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a lower bound on the growth of turbulence in molecular clouds. This bound raises questions about the level of dissipation in existing molecular cloud models. Finally, the observations originally motivating the thesis, Z-pinch measurements suggesting dominant turbulent energy, are reexamined by self-consistently accounting for the impact of the turbulence on the spectroscopic analysis. This is found to strengthen the evidence that the multiple observations describe a highly turbulent plasma state.
Latt, L Daniel; Glisson, Richard R; Adams, Samuel B; Schuh, Reinhard; Narron, John A; Easley, Mark E
2015-10-01
Transverse tarsal joint arthrodesis is commonly performed in the operative treatment of hindfoot arthritis and acquired flatfoot deformity. While fixation is typically achieved using screws, failure to obtain and maintain joint compression sometimes occurs, potentially leading to nonunion. External fixation is an alternate method of achieving arthrodesis site compression and has the advantage of allowing postoperative compression adjustment when necessary. However, its performance relative to standard screw fixation has not been quantified in this application. We hypothesized that external fixation could provide transverse tarsal joint compression exceeding that possible with screw fixation. Transverse tarsal joint fixation was performed sequentially, first with a circular external fixator and then with compression screws, on 9 fresh-frozen cadaveric legs. The external fixator was attached in abutting rings fixed to the tibia and the hindfoot and a third anterior ring parallel to the hindfoot ring using transverse wires and half-pins in the tibial diaphysis, calcaneus, and metatarsals. Screw fixation comprised two 4.3 mm headless compression screws traversing the talonavicular joint and 1 across the calcaneocuboid joint. Compressive forces generated during incremental fixator foot ring displacement to 20 mm and incremental screw tightening were measured using a custom-fabricated instrumented miniature external fixator spanning the transverse tarsal joint. The maximum compressive force generated by the external fixator averaged 186% of that produced by the screws (range, 104%-391%). Fixator compression surpassed that obtainable with screws at 12 mm of ring displacement and decreased when the tibial ring was detached. No correlation was found between bone density and the compressive force achievable by either fusion method. The compression across the transverse tarsal joint that can be obtained with a circular external fixator including a tibial ring exceeds that which can be obtained with 3 headless compression screws. Screw and external fixator performance did not correlate with bone mineral density. This study supports the use of external fixation as an alternative method of generating compression to help stimulate fusion across the transverse tarsal joints. The findings provide biomechanical evidence to support the use of external fixation as a viable option in transverse tarsal joint fusion cases in which screw fixation has failed or is anticipated to be inadequate due to suboptimal bone quality. © The Author(s) 2015.
Mückley, Thomas; Eichorn, Stephan; Hoffmeier, Konrad; von Oldenburg, Geert; Speitling, Andreas; Hoffmann, Gunther O; Bühren, Volker
2007-02-01
Intramedullary implants are being used with increasing frequency for tibiotalocalcaneal fusion (TTCF). Clinically, the question arises whether intramedullary (IM) nails should have a compression mode to enhance biomechanical stiffness and fusion-site compression. This biomechanical study compared the primary stability of TTCF constructs using compressed and uncompressed retrograde IM nails and a screw technique in a bone model. For each technique, three composite bone models were used. The implants were a Biomet nail (static locking mode and compressed mode), a T2 femoral nail (compressed mode), a prototype IM nail 1 (PT1, compressed mode), a prototype IM nail 2 (PT2, dynamic locking mode and compressed mode), and a three-screw construct. The compressed contact surface of each construct was measured with pressure-sensitive film and expressed as percent of the available fusion-site area. Stiffness was tested in dorsiflexion and plantarflexion (D/P), varus and valgus (V/V), and internal rotation and external rotation (I/E) (20 load cycles per loading mode). Mean contact surfaces were 84.0 +/- 6.0% for the Biomet nail, 84.0 +/- 13.0% for the T2 nail, 70.0 +/- 7.2% for the PT1 nail, and 83.5 +/- 5.5% for the compressed PT2 nail. The greatest primary stiffness in D/P was obtained with the compressed PT2, followed by the compressed Biomet nail. The dynamically locked PT2 produced the least primary stiffness. In V/V, PT1 had the (significantly) greatest primary stiffness, followed by the compressed PT2. The statically locked Biomet nail and the dynamically locked PT2 had the least primary stiffness in V/V. In I/E, the compressed PT2 had the greatest primary stiffness, followed by the PT1 and the T2 nails, which did not differ significantly from each other. The dynamically locked PT2 produced the least primary stiffness. The screw construct's contact surface and stiffness were intermediate. The IM nails with compression used for TTCF produced good contact surfaces and primary stiffness. They were significantly superior in these respects to the uncompressed nails and the screw construct. The large contact surfaces and great primary stiffness provided by the IM nails in a bone model may translate into improved union rates in patients who have TTCF.
Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager
NASA Technical Reports Server (NTRS)
Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza
2012-01-01
Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) approaches and can be integrated as an intellectual property (IP) core as part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx Virtex IV LX25 device and ported to a Xilinx prototype board. The current implementation has a critical path of 29.5 ns, which dictated a clock speed of 33 MHz. The critical path delay is an end-to-end measurement between the uncompressed input data and the output compressed data stream. The implementation compresses one sample every clock cycle, which results in a throughput of 33 Msample/s. The implementation has a rather low device utilization of the Xilinx Virtex IV LX25, making the total power consumption of the implementation about 1.27 W.
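For readers unfamiliar with sign-algorithm adaptation, the sketch below illustrates adaptive linear prediction with sign-LMS weight updates, the general idea named in the abstract. It is a toy, not the NASA FL implementation: the function name and the `order` and `mu` parameters are invented for illustration, and a real compressor would entropy-code the residuals (e.g., with Golomb-Rice codes) rather than return them.

```python
import numpy as np

def sign_lms_residuals(samples, order=3, mu=0.01):
    """Adaptive linear prediction with sign-algorithm weight updates.

    Illustrative sketch only: returns prediction residuals that a real
    compressor would entropy-code.
    """
    x = np.asarray(samples, dtype=float)
    w = np.zeros(order)                               # predictor weights, adapted on the fly
    residuals = np.empty_like(x)
    for n in range(len(x)):
        context = x[max(0, n - order):n][::-1]        # most recent samples first
        context = np.pad(context, (0, order - len(context)))
        pred = float(w @ context)
        e = x[n] - pred                               # prediction residual
        residuals[n] = e
        w += mu * np.sign(e) * context                # sign-algorithm (sign-LMS) update
    return residuals
```

The sign update needs only an add per tap, which is one reason this family of predictors maps well onto low-complexity FPGA logic.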
Pourazadi, Shahram; Ahmadi, Sadegh; Menon, Carlo
2015-11-05
One of the recommended treatments for disorders associated with lower extremity venous insufficiency is the application of external mechanical compression. Compression stockings and elastic bandages are widely used for the purpose of compression therapy and are usually designed to exert a specified value or range of compression on the leg. However, the leg deforms under external compression, which can lead to undesirable variations in the amount of compression applied by the compression bandages. In this paper, the use of an active compression bandage (ACB), whose compression can be regulated through an electrical signal, is investigated. The ACB is based on the use of dielectric elastomer actuators. This paper specifically investigates, via both analytical and non-linear numerical simulations, the potential pressure the ACB can apply when the compliancy of the human leg is taken into account. The work underpins the need to account for the compressibility of the leg when designing compression garments for lower extremity venous insufficiency. A mathematical model is used to simulate the volumetric change of a calf when compressed. Suitable parameters for this calf model are selected from the literature, where the calf, from ankle to knee, is divided into six different regions. An analytical electromechanical model of the ACB, which considers its compliancy as a function of its pre-stretch and the electricity applied, is used to predict the ACB's behavior. Based on these calf and ACB analytical models, a simulation is performed to investigate the interaction between the ACB and the human calf with and without an electrical stimulus applied to the ACB. This simulation is validated by non-linear analysis performed using software based on the finite element method (FEM). In all simulations, the ACB's elastomer is stretched to a value in the range between 140% and 220% of its initial length. Using data from the literature, the human calf model examined in this work has different compliancy in its different regions. For example, when an external compression of 28.5 mmHg (3.8 kPa) is applied to the entire calf, the ankle shows a 3.7% volume change whereas the knee region undergoes a 2.7% volume change. The paper presents the actual pressure in the different regions of the calf for different values of the ACB's stretch ratio when it is either electrically activated or not activated, and when compliancy of the leg is either considered or not considered. For example, results of the performed simulation show that about 10% variation in compression in the ankle region is expected when the ACB initially applies 6 kPa and the compressibility of the calf is first considered and then not considered. This variation reduces to 5% when the initial pressure applied by the ACB is reduced by half. Comparison with non-linear FEM simulations shows that the analytical models used in this work can closely estimate the interaction between an active compression bandage and a human calf. In addition, compliancy of the leg should not be neglected when either designing a compression band or predicting the compressive force it can exert. The methodology proposed in this work can be extended to other types of elastic compression bandages and garments for biomedical applications.
Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation
2008-05-19
Vito Dai, Electrical Engineering and Computer Sciences. Copyright 2008 by Vito Dai.
49 CFR 173.314 - Compressed gases in tank cars and multi-unit tank cars.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR 173.314, Compressed gases in tank cars and multi-unit tank cars. Title 49, Transportation, Volume 2 (2010-10-01). (a) Definitions. For definitions of compressed gases...
49 CFR 173.314 - Compressed gases in tank cars and multi-unit tank cars.
Code of Federal Regulations, 2011 CFR
2011-10-01
49 CFR 173.314, Compressed gases in tank cars and multi-unit tank cars. Title 49, Transportation, Volume 2 (2011-10-01). (a) Definitions. For definitions of compressed gases...
Processing Maple Syrup with a Vapor Compression Distiller: An Economic Analysis
Lawrence D. Garrett
1977-01-01
A test of vapor compression distillers for processing maple syrup revealed that: (1) the vapor compression equipment tested evaporated 1 pound of water with 0.047 pounds of steam equivalent (electrical energy), whereas open-pan evaporators of similar capacity required 1.5 pounds of steam equivalent (oil energy) to evaporate 1 pound of water; (2) vapor compression evaporation produced...
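A rough reading of the figures quoted above gives the scale of the difference; this back-of-envelope calculation ignores the different energy sources (electricity versus oil) and their costs, which the economic analysis itself addresses.

```python
# Back-of-envelope comparison using the steam-equivalent figures quoted above.
steam_equiv_vapor_compression = 0.047   # lb steam-equivalent per lb water evaporated
steam_equiv_open_pan = 1.5              # lb steam-equivalent per lb water evaporated

ratio = steam_equiv_open_pan / steam_equiv_vapor_compression
print(f"Open-pan evaporation uses roughly {ratio:.0f}x the energy per pound of water removed.")
```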
Calculation methods for compressible turbulent boundary layers, 1976
NASA Technical Reports Server (NTRS)
Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.
1977-01-01
Equations and closure methods for compressible turbulent boundary layers are discussed. Flow phenomena peculiar to calculation of these boundary layers were considered, along with calculations of three dimensional compressible turbulent boundary layers. Procedures for ascertaining nonsimilar two and three dimensional compressible turbulent boundary layers were appended, including finite difference, finite element, and mass-weighted residual methods.
ERIC Educational Resources Information Center
Dailey, K. Anne
Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…
41 CFR 50-204.8 - Use of compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
41 CFR 50-204.8, Use of compressed air. Title 41, Public Contracts and Property Management, Volume 1 (2010-07-01). General Safety and Health Standards § 50-204.8 Use of compressed air. Compressed air shall not be used for...
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 49, Transportation, Volume 2 (2014-10-01). Exceptions for cylinders of compressed oxygen or other oxidizing gases transported within the State of Alaska. (a) Exceptions. When transported in the State of Alaska, cylinders of compressed oxygen or other...
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Two known a posteriori enhancement techniques are considered and adapted here.
Pulse compression at 1.06 μm in dispersion-decreasing holey fibers
NASA Astrophysics Data System (ADS)
Tse, M. L. V.; Horak, P.; Price, J. H. V.; Poletti, F.; He, F.; Richardson, D. J.
2006-12-01
We report compression of low-power femtosecond pulses at 1.06 μm in a dispersion-decreasing holey fiber. Near-adiabatic compression of 130 fs pulses down to 60 fs has been observed. Measured spectra and pulse shapes agree well with numerical simulations. Compression factors of ten are possible in optimized fibers.
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set Redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their Set Redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
Compressor ported shroud for foil bearing cooling
Elpern, David G [Los Angeles, CA; McCabe, Niall [Torrance, CA; Gee, Mark [South Pasadena, CA
2011-08-02
A compressor ported shroud takes compressed air from the shroud of the compressor before it is completely compressed and delivers it to foil bearings. The compressed air has a lower pressure and temperature than compressed outlet air. The lower temperature of the air means that less air needs to be bled off from the compressor to cool the foil bearings. This increases the overall system efficiency due to the reduced mass flow requirements of the lower temperature air. By taking the air at a lower pressure, less work is lost compressing the bearing cooling air.
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
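The sketch below illustrates the kind of factored representation described above, using a plain truncated SVD/PCA of the unfolded image cube. It is a minimal assumption-laden example, not the patented algorithm: the function names and the `n_factors` parameter are invented, and the block-processing and optimized operations mentioned in the abstract are omitted.

```python
import numpy as np

def spectral_compress(cube, n_factors=10):
    """Factor a multivariate image cube (rows x cols x bands) into scores and loadings.

    Minimal PCA-style sketch: keeps only the most significant factors.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]      # spatial scores (one image per factor)
    loadings = Vt[:n_factors]                      # spectral loadings
    return scores.reshape(rows, cols, n_factors), loadings, mean

def spectral_reconstruct(scores, loadings, mean):
    """Approximate the original cube from the truncated factor model."""
    rows, cols, k = scores.shape
    return (scores.reshape(-1, k) @ loadings + mean).reshape(rows, cols, -1)
```

Analyses can then operate on the small `scores`/`loadings` factors instead of the full cube, which is the computational saving the abstract refers to.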
Superconductivity under uniaxial compression in β-(BDA-TTP) salts
NASA Astrophysics Data System (ADS)
Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.
2009-10-01
In order to clarify the superconducting mechanism of the organic superconductor β-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We find that the non-monotonic behavior of Tc observed experimentally under uniaxial compression can be understood by taking spin frustration and spin fluctuation into account.
Simulations of free shear layers using a compressible k-epsilon model
NASA Technical Reports Server (NTRS)
Yu, S. T.; Chang, C. T.; Marek, C. J.
1991-01-01
The two-dimensional compressible Navier-Stokes equations with a k-epsilon turbulence model are solved numerically to simulate the flows of compressible free shear layers. The appropriate forms of the k and epsilon equations for compressible flows are discussed. Sarkar's model is adopted to simulate the compressibility effects in the k and epsilon equations. The numerical results show that the spreading rate of the shear layers decreases with increasing convective Mach number. In addition, favorable comparison was found between the calculated results and Goebel and Dutton's experimental data.
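For context, a commonly quoted form of Sarkar's compressibility correction augments the solenoidal dissipation with a dilatational contribution scaled by the turbulent Mach number; the abstract does not reproduce the exact closure used in this work, so the expression below is indicative only.

```latex
\varepsilon \;=\; \varepsilon_s\left(1 + \alpha_1 M_t^{2}\right),
\qquad
M_t \;=\; \frac{\sqrt{2k}}{a},
```

where $\varepsilon_s$ is the solenoidal dissipation, $k$ the turbulent kinetic energy, $a$ the local speed of sound, and $\alpha_1$ a model constant of order one. Corrections of this type reduce the predicted shear-layer spreading rate as the convective Mach number rises, consistent with the trend reported above.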
Simulations of free shear layers using a compressible kappa-epsilon model
NASA Technical Reports Server (NTRS)
Yu, S. T.; Chang, C. T.; Marek, C. J.
1991-01-01
The two-dimensional compressible Navier-Stokes equations with a k-epsilon turbulence model are solved numerically to simulate the flow of a compressible free shear layer. The appropriate forms of the k and epsilon equations for compressible flow are discussed. Sarkar's model is adopted to simulate the compressibility effects in the k and epsilon equations. The numerical results show that the spreading rate of the shear layers decreases with increasing convective Mach number. In addition, favorable comparison was found between the calculated results and experimental data.
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential to handle a large volume of digital images including CT, MR, CR, and digitized films in a digital radiology operation. The full-frame bit allocation method using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at a compression ratio as high as 20:1.
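The toy sketch below shows the general principle of irreversible transform coding with a cosine transform: transform, discard or coarsen small coefficients, and invert. It is an illustration only; the full-frame bit-allocation hardware studied here assigns variable bit depths to coefficients rather than simply thresholding them, and the `keep_fraction` parameter is an invented stand-in for a bit budget.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image, keep_fraction=0.05):
    """Keep only the largest-magnitude 2-D DCT coefficients of an image.

    Toy illustration of irreversible transform coding, not the full-frame
    bit-allocation scheme described above.
    """
    coeffs = dctn(image.astype(float), norm="ortho")
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]            # magnitude of k-th largest coefficient
    sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    return idctn(sparse, norm="ortho")                # lossy reconstruction
```

Keeping roughly 5% of the coefficients corresponds loosely to the 20:1 regime discussed in the study, although the achievable ratio also depends on how the retained coefficients are quantized and entropy-coded.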
NASA Astrophysics Data System (ADS)
Hua, Yunfeng; Deng, Zhenyu; Jiang, Yangwei; Zhang, Linxi
2017-06-01
Molecular dynamics simulations of a coarse-grained bead-spring model of ring polymer brushes under compression are presented. Flexible polymer brushes are always disordered during compression, whereas semiflexible polymer brushes tend to be ordered under sufficiently strong compression. Further, the polymer monomer density of the semiflexible polymer brush is very high near the brush surface, inducing a peak value of the free energy near the surface. Therefore, when nanoparticles are compressed in semiflexible ring polymer brushes, they tend to exhibit a closely packed single-layer structure between the brush surface and the impenetrable wall, and a quasi-two-dimensional ordered structure near the brush surface is formed under strong compression. These findings provide a new approach to designing responsive applications.
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
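To make the O(n) distance-transform idea concrete, the sketch below shows the classic two-pass (forward/backward) scan in one dimension. It is a simple unsigned city-block transform under assumed inputs, not the authors' specialized transform for signed-distance volumes, which handles sign, three dimensions, and exact Euclidean distances.

```python
import numpy as np

def distance_transform_1d(occupied):
    """Two-pass O(n) city-block distance transform of a 1-D boolean mask.

    occupied[i] is True where the surface (zero set) passes through cell i.
    """
    n = len(occupied)
    INF = n + 1
    d = np.where(occupied, 0, INF).astype(float)
    for i in range(1, n):              # forward pass: propagate distances left to right
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):     # backward pass: propagate distances right to left
        d[i] = min(d[i], d[i + 1] + 1)
    return d
```

Each cell is visited a constant number of times, which is what makes the transform linear in the number of samples; the volume of distances is then what gets wavelet-compressed.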
Nonpainful wide-area compression inhibits experimental pain
Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena
2016-01-01
Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM. PMID:27152691
Hatt, A; Cheng, S; Tan, K; Sinkus, R; Bilston, L E
2015-10-01
Compressing the internal jugular veins can reverse ventriculomegaly in the syndrome of inappropriately low pressure acute hydrocephalus, and it has been suggested that this works by "stiffening" the brain tissue. Jugular compression may also alter blood and CSF flow in other conditions. We aimed to understand the effect of jugular compression on brain tissue stiffness and CSF flow. The head and neck of 9 healthy volunteers were studied with and without jugular compression. Brain stiffness (shear modulus) was measured by using MR elastography. Phase-contrast MR imaging was used to measure CSF flow in the cerebral aqueduct and blood flow in the neck. The shear moduli of the brain tissue increased with the percentage of blood draining through the internal jugular veins during venous compression. Peak velocity of caudally directed CSF in the aqueduct increased significantly with jugular compression (P < .001). The mean jugular venous flow rate, amplitude, and vessel area were significantly reduced with jugular compression, while cranial arterial flow parameters were unaffected. Jugular compression influences cerebral CSF hydrodynamics in healthy subjects and can increase brain tissue stiffness, but the magnitude of the stiffening depends on the percentage of cranial blood draining through the internal jugular veins during compression—that is, subjects who maintain venous drainage through the internal jugular veins during jugular compression have stiffer brains than those who divert venous blood through alternative pathways. These methods may be useful for studying this phenomenon in patients with the syndrome of inappropriately low-pressure acute hydrocephalus and other conditions. © 2015 by American Journal of Neuroradiology.
Fleming, Braden C.; Brady, Mark F.; Bradley, Michael P.; Banerjee, Rahul; Hulstyn, Michael J.; Fadale, Paul D.
2008-01-01
Purpose To document the tibiofemoral (TF) compression forces produced during clinical initial graft tension protocols. Methods An image analysis system was used to track the position of the tibia relative to the femur in 11 cadaver knees. TF compression forces were quantified using thin-film pressure sensors. Prior to performing ACL reconstructions with patellar tendon grafts, measurements of TF compression force were obtained from the ACL-intact knee with knee flexion. ACL reconstructions were then performed using “force-based” and “laxity-based” graft tension approaches. Within each approach, high- and low-tension conditions were compared to the ACL-intact condition over the range of knee flexion angles. Results The TF compression forces for all initial graft tension conditions were significantly greater than that of the normal knee when the knee was in full extension (0°). The TF compression forces when using the laxity-based approach were greater than those produced with the force-based approach; however, the laxity-based approach was necessary to restore normal laxity at the time of surgery. Conclusions The initial graft tension conditions produce different TF compressive force profiles at the time of surgery. A compromise must be made between restoring knee laxity or TF compressive forces when reconstructing the ACL with a patellar tendon graft. Clinical Relevance The TF compression forces were greater in the ACL-reconstructed knee for all the initial graft tension conditions when compared to the ACL-intact knee, and clinically relevant initial graft tension conditions produce different TF compressive forces. PMID:18760214
Brady, Mark F.; Bradley, Michael P.; Fleming, Braden C.; Fadale, Paul D.; Hulstyn, Michael J.; Banerjee, Rahul
2007-01-01
Background The initial tension applied to an ACL graft at the time of fixation modulates knee motion and the tibiofemoral compressive loads. Purpose To establish the relationships between initial graft tension, tibiofemoral compressive force, and the neutral tibiofemoral position in the cadaver knee. Study Design Controlled Laboratory Study. Methods The tibiofemoral compressive forces and joint positions were determined in the ACL-intact knee at 0°, 20° and 90° knee flexion. The ACL was excised and reconstructed with a patellar tendon graft using graft tensions of 1, 15, 30, 60 and 90 N applied at 0°, 20° and 90° knee flexion. The compressive forces and neutral positions were compared between initial tension conditions and the ACL-intact knee. Results Increasing initial graft tension increased the tibiofemoral compressive forces. The forces in the medial compartment were 1.8 times those in the lateral compartment. The compressive forces were dependent on the knee angle at which the tension was applied. The greatest compressive forces occurred when the graft was tensioned with the knee in extension. An increase in initial graft tension caused the tibia to rotate externally compared to the ACL-intact knee. Increases in initial graft tension also caused a significant posterior translation of the tibia relative to the femur. Conclusions Different initial graft tension protocols produced predictable changes in the tibiofemoral compressive forces and joint positions. Clinical Relevance The tibiofemoral compressive force and neutral joint position were best replicated with a low graft tension (1–15 N) when using a patellar tendon graft. PMID:17218659
A protocol for monitoring soft tissue motion under compression garments during drop landings.
Mills, Chris; Scurr, Joanna; Wood, Louise
2011-06-03
This study used a single-subject design to establish a valid and reliable protocol for monitoring soft tissue motion under compression garments during drop landings. One male participant performed six 40 cm drop landings onto a force platform, in three compression conditions (none, medium, high). Five reflective markers placed on the thigh under the compression garment and five over the garment were filmed using two cameras (1000 Hz). Following manual digitisation, marker coordinates were reconstructed and their resultant displacements and maximum change in separation distance between skin and garment markers were calculated. To determine reliability of marker application, 35 markers were attached to the thigh over the high compression garment and filmed. Markers were then removed and re-applied on three occasions; marker separation and distance to thigh centre of gravity were calculated. Results showed similar ground reaction forces during landing trials. Significant reductions in the maximum change in separation distance between markers from no compression to high compression landings were reported. Typical errors in marker movement under and over the garment were 0.1 mm in medium and high compression landings. Re-application of markers showed mean typical errors of 1 mm in marker separation and <3 mm relative to thigh centre of gravity. This paper presents a novel protocol that demonstrates sufficient sensitivity to detect reductions in soft tissue motion during landings in high compression garments compared to no compression. Additionally, markers placed under or over the garment demonstrate low variance in movement, and the protocol reports good reliability in marker re-application. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to differences in correlation between different spectral bands, and it still works well when the band number is not a power of 2. It uses a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding; experiments show that satisfactory lossless compression can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the test data set, when the band number is not a power of 2 the lossless compression results of this algorithm are much better than those obtained by JPEG-LS, WinZip, ARJ, DPCM, the method of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds these algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, groupings of 8, 16, and 32 bands were tested; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and convenience of hardware realization.
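As a reference point for the CDF(2,2) transform named above, the sketch below shows one level of the standard integer lifting implementation (the LeGall 5/3 wavelet) on a 1-D signal. It is a minimal sketch under stated assumptions: even-length input, simple mirror handling at the edges rather than the paper's non-boundary-extension variant, and no mixing transform across bands.

```python
import numpy as np

def cdf22_forward(x):
    """One level of the integer CDF(2,2) (LeGall 5/3) wavelet via lifting."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    right = np.append(even[1:], even[-1])       # mirror last even sample at the right edge
    d = odd - ((even + right) >> 1)             # predict step -> detail coefficients
    left = np.insert(d[:-1], 0, d[0])           # mirror first detail sample at the left edge
    s = even + ((left + d + 2) >> 2)            # update step -> approximation coefficients
    return s, d
```

Because both lifting steps use only integer additions and shifts, the transform is exactly invertible, which is what makes it suitable for lossless coding pipelines such as SPIHT+CABAC.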
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
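The byte-plane mapping described above is straightforward to express; the sketch below shows the lossless split and recombination. The function names are invented for illustration, and in the actual scheme each 8-bit plane would then be fed to a codec such as JPEG or H.264/AVC, with the per-plane compression parameters chosen as the paper analyzes.

```python
import numpy as np

def split_msb_lsb(img16):
    """Split a 16-bit image into two 8-bit planes (MSB image and LSB image)."""
    img16 = np.asarray(img16, dtype=np.uint16)
    msb = (img16 >> 8).astype(np.uint8)        # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)      # least significant bytes
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    """Recombine the two 8-bit planes into the original 16-bit image."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```

Note that once the planes are compressed lossily, distortion in the MSB plane is amplified by a factor of 256 in the reconstructed 16-bit values, which is why the rate allocation between the two planes matters.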
Komasawa, Nobuyasu; Ueki, Ryusuke; Kaminoh, Yoshiroh; Nishi, Shin-Ichi
2014-10-01
In the 2010 American Heart Association guidelines, supraglottic devices (SGDs) such as the laryngeal mask are proposed as alternatives to tracheal intubation for cardiopulmonary resuscitation. Some SGDs can also serve as a means for tracheal intubation after successful ventilation. The purpose of this study was to evaluate the effect of chest compression on airway management with four intubating SGDs, the aura-i, air-Q, i-gel, and Fastrack, during cardiopulmonary resuscitation using a manikin. Twenty novice physicians inserted the four intubating SGDs into a manikin with or without chest compression. Insertion time and successful ventilation rate were measured. For cases of successful ventilation, blind tracheal intubation via the intubating SGD was performed with chest compression and success or failure within 30 s was recorded. Chest compression did not decrease the ventilation success rate of the four intubating SGDs (without chest compression (success/total): air-Q, 19/20; aura-i, 19/20; i-gel, 18/20; Fastrack, 19/20; with chest compression: air-Q, 19/20; aura-i, 19/20; i-gel, 16/20; Fastrack, 18/20). Insertion time was significantly lengthened by chest compression in the i-gel trial (P < 0.05), but not with the other three devices. The blind intubation success rate with chest compression was the highest in the air-Q trial (air-Q, 15/19; aura-i, 14/19; i-gel, 12/16; Fastrack, 10/18). This simulation study revealed the utility of intubating SGDs for airway management during chest compression.
Buléon, Clément; Delaunay, Julie; Parienti, Jean-Jacques; Halbout, Laurent; Arrot, Xavier; Gérard, Jean-Louis; Hanouz, Jean-Luc
2016-09-01
Chest compressions require physical effort, leading to increased fatigue and rapid degradation in the quality of cardiopulmonary resuscitation over time. Despite the harmful effect of interrupting chest compressions, current guidelines recommend that rescuers switch every 2 minutes. The impact on the quality of chest compressions during extended cardiopulmonary resuscitation has yet to be assessed. We conducted a randomized crossover study on a manikin (ResusciAnne; Laerdal). After randomization, 60 professional emergency rescuers performed 2 × 10 minutes of continuous chest compressions with and without a feedback device (CPRmeter). The efficient compression rate (primary outcome) was defined as the frequency target reached along with the depth and leaning targets at the same time (recorded continuously). The 10-minute mean efficient compression rate was significantly better in the feedback group: 42% vs 21% (P < .001). There was no significant difference between the first (43%) and the tenth minute (36%; P = .068) with feedback. Conversely, a significant difference was evident from the second minute without feedback (35% initially vs 27%; P < .001). The difference in efficient compression rate with and without feedback was significant every minute, from the second minute onwards. CPRmeter feedback significantly improved chest compression depth from the first minute, leaning from the second minute, and rate from the third minute. A real-time feedback device delivers longer effective, steadier chest compressions over time. An extrapolation of these results from simulation may allow rescuer switches to be carried out beyond the currently recommended 2 minutes when a feedback device is used. Copyright © 2016 Elsevier Inc. All rights reserved.
Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz
2013-10-01
The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meets regularly, and from these meetings it has produced a series of eight consensus publications on topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist when considering compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts upon each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic, thereby widening the discussion to the entire membership of the ICC rather than simply those who were attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas', with topics 1 through 7 dedicated to the materials and application techniques used to apply compression and the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.
Fujita, Megumi; Himi, Satoshi; Iwata, Motokazu
2010-03-01
SX-3228, 6-benzyl-3-(5-methoxy-1,3,4-oxadiazol-2-yl)-5,6,7,8-tetrahydro-1,6-naphthyridin-2(1H)-one, is a newly-synthesized benzodiazepine receptor agonist intended to be developed as a tablet preparation. This compound, however, becomes chemically unstable due to decreased crystallinity when it undergoes mechanical treatments such as grinding and compression. A wet-granule tableting method, where wet granules are compressed before being dried, was therefore investigated as it has the advantage of producing tablets of sufficient hardness at quite low compression pressures. The results of the stability testing showed that the drug substance was chemically considerably more stable in wet-granule compression tablets compared to conventional tablets. Furthermore, the drug substance was found to be relatively chemically stable in wet-granule compression tablets even when high compression pressure was used and the effect of this pressure was small. After investigating the reason for this excellent stability, it became evident that near-isotropic pressure was exerted on the crystals of the drug substance because almost all the empty spaces in the tablets were occupied with water during the wet-granule compression process. Decreases in crystallinity of the drug substance were thus small, making the drug substance chemically stable in the wet-granule compression tablets. We believe that this novel approach could be useful for many other compounds that are destabilized by mechanical treatments.
Breast compression in mammography: how much is enough?
Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert
2003-06-01
The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.
Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H
2013-06-01
Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application for the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve manufacturing of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, which included bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. Moisture content of the chitosan powder, particle size and the inclusion of glidants had a pronounced effect on its flow ability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin into the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables as well as applying a double fill compression process.
Bezci, Semih E; Klineberg, Eric O; O'Connell, Grace D
2018-01-01
The intervertebral disc is a complex joint that acts to support and transfer large multidirectional loads, including combinations of compression, tension, bending, and torsion. Direct comparison of disc torsion mechanics across studies has been difficult, due to differences in loading protocols. In particular, the lack of information on the combined effect of multiple parameters, including axial compressive preload and rotation angle, makes it difficult to discern whether disc torsion mechanics are sensitive to the variables used in the test protocol. Thus, the objective of this study was to evaluate compression-torsion mechanical behavior of healthy discs under a wide range of rotation angles. Bovine caudal discs were tested under a range of compressive preloads (150, 300, 600, and 900N) and rotation angles (± 1, 2, 3, 4, or 5°) applied at a rate of 0.5°/s. Torque-rotation data were used to characterize shape changes in the hysteresis loop and to calculate disc torsion mechanics. Torsional mechanical properties were described using multivariate regression models. The rate of change in torsional mechanical properties with compression depended on the maximum rotation angle applied, indicating a strong interaction between compressive stress and maximum rotation angle. The regression models reported here can be used to predict disc torsion mechanics under axial compression for a given disc geometry, compressive preload, and rotation angle. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Maiti, Amitesh
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW). The specimens (~50% porosity) were subjected to various compressive strains (15, 30, 45%) and temperatures (room temperature, 35, 50, 70°C) in a nitrogen atmosphere (active purge) for 1 year. Compression set and load retention of the aged specimens were measured periodically during the study. Compression set increased with strain and temperature. After 1 year, specimens aged at room temperature, 35, and 50°C showed ~10% compression set (relative to the applied compressive deflection), while those aged at 70°C showed 20-40%. Due to the increasing compression set, load retention decreased with temperature, ranging from ~90% at room temperature to ~60-80% at 70°C. Long-term compression set and load retention at room temperature were predicted by applying time-temperature superposition (TTS). The predictions show compression set relative to the compressive deflection will be ~10-15% with ~70-90% load retention after 50 years at 15-45% strain, suggesting the material will continue to be mechanically functional. Comparison of the results to previously acquired data for cellular (M97*, M9760, M9763) and RTV (S5370) silicone foams suggests that the SE 1700 DIW porous specimens are on par with, or outperform, the legacy foams.
The compression and storage method of the same kind of medical images: DPCM
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. To exploit correlations within the source signal and thereby increase the signal-to-noise ratio (SNR), a method based on differential pulse code modulation (DPCM) is presented.
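As an editorial illustration only (the abstract gives no implementation details), a minimal previous-pixel DPCM encoder and decoder in Python might look like the sketch below; the function names are hypothetical.

```python
import numpy as np

def dpcm_encode(image):
    """Encode each row by storing the difference from the previous pixel.

    The first pixel of each row is kept verbatim; the remaining values are
    prediction residuals, which are typically small and easy to entropy-code.
    """
    img = image.astype(np.int32)
    residuals = np.empty_like(img)
    residuals[:, 0] = img[:, 0]
    residuals[:, 1:] = img[:, 1:] - img[:, :-1]
    return residuals

def dpcm_decode(residuals):
    """Invert the encoder by cumulative summation along each row."""
    return np.cumsum(residuals, axis=1)

# Round-trip check on a synthetic 8-bit "image".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 8))
assert np.array_equal(dpcm_decode(dpcm_encode(img)), img)
```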
NASA Astrophysics Data System (ADS)
Lv, Peng; Tang, Xun; Yuan, Jiajiao; Ji, Chenglong
2017-11-01
Highly compressible electrodes are in high demand in volume-restricted energy storage devices. Superelastic reduced graphene oxide (rGO) aerogels with attractive characteristics are promising skeletons for compressible electrodes. Herein, a ternary aerogel was prepared by successively electrodepositing polypyrrole (PPy) and MnO2 into a superelastic rGO aerogel. In the rGO/PPy/MnO2 aerogel, the rGO aerogel provides a continuous conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; and the middle PPy layer not only reduces the interface resistance between rGO and MnO2 but also further enhances the mechanical strength of the rGO backbone. The synergistic effect of the three components leads to excellent performance, including high specific capacitance, reversible compressibility, and extreme durability. The gravimetric capacitance of the compressible rGO/PPy/MnO2 aerogel electrodes reaches 366 F g-1 and retains 95.3% of this value even under 95% compressive strain. A volumetric capacitance of 138 F cm-3 is achieved, which is much higher than that of other rGO-based compressible electrodes; 85% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression conditions. This work paves the way for advanced applications of compressible energy-storage devices that must meet the requirements of limited space.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Minami, Kouichiro; Kokubo, Yota; Maeda, Ichinosuke; Hibino, Shingo
2017-02-01
In chest compression for cardiopulmonary resuscitation (CPR), the lower half of the sternum is pressed according to the American Heart Association (AHA) guidelines 2010. There have been no studies identifying the exact location of the pressure applied by individual chest compressions. We developed a flexible rubber-based capacitive sensor that could measure the actual pressure point of chest compression in real time. Here, we examined the pressure point of chest compression by ambulance crews during CPR using a mannequin. We included 179 ambulance crews. Chest compression was performed for 2 min. The pressure position was monitored, and the quality of chest compression was analyzed using a flexible pressure sensor (Shinnosukekun™). Of the ambulance crews, 58 (32.4%) pressed the center and 121 (67.6%) pressed outside the proper area of chest compression. Many of them pressed outside the center: 8, 7, 41, and 90 pressed on the caudal, left, right, and cranial side, respectively. Average compression rate, average recoil, average depth, and average duty cycle were 108.6 compressions per minute, 0.089, 4.5 cm, and 48.27%, respectively. Many of the ambulance crews did not press precisely on the lower half of the sternum. This new device has the potential to improve the quality of CPR during training or in clinical practice.
Use of compression garments by women with lymphoedema secondary to breast cancer treatment.
Longhurst, E; Dylke, E S; Kilbreath, S L
2018-02-19
The aim of this study was to determine the use of compression garments by women with lymphoedema secondary to breast cancer treatment and the factors which underpin their use. An online survey was distributed to the Survey and Review group of the Breast Cancer Network Australia. The survey included questions related to the participants' demographics, breast cancer and lymphoedema medical history, prescription and use of compression garments, and their beliefs about compression and lymphoedema. Data were analysed using principal component analysis and multivariable logistic regression. Compression garments had been prescribed to 83% of 201 women with lymphoedema within the last 5 years, although 37 women had discontinued their use. Even when accounting for severity of swelling, the type of garment(s) and the advice given for use varied across participants. Use of compression garments was driven by women's beliefs that they were vulnerable to progression of their disease and that compression would prevent its worsening. Common reasons for discontinuing use included discomfort and the perception that their lymphoedema was stable. Participant characteristics associated with discontinuance of compression garments included (i) the belief that the garments were not effective in managing their condition, (ii) mild-moderate swelling and/or (iii) having experienced swelling for more than 5 years. The prescription of compression garments for lymphoedema is highly varied, which may be due to a lack of underpinning evidence to inform treatment.
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the ROC curves obtained from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
1994-04-01
… a variation of Ziv-Lempel compression [ZL77]. We found that using a standard compression algorithm rather than semantic compression allowed simplified implementation. … Our ABS stores adaptable binary information using the conventional binary symbol table and compresses this data using a variation of Ziv-Lempel compression. Cited works include a paper in the Proceedings of the Conference on Programming Language Design and Implementation, 1993, and [ZL77] J. Ziv and A. Lempel, "A universal algorithm for sequential data compression," 1977.
[Lossless ECG compression algorithm with anti-electromagnetic interference].
Guan, Shu-An
2005-03-01
Based on a study of ECG signal features, a new lossless ECG compression algorithm is put forward. We apply a second-order difference operation with anti-electromagnetic interference to the original ECG signals and then compress the result with an escape-based coding model. In spite of serious 50 Hz interference, the algorithm is still capable of achieving a high compression ratio.
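As a hedged sketch of the differencing step only (the escape-based coding model and the interference handling are not reproduced here, and the helper names are invented), a second-order difference and its exact inverse could be written as:

```python
import numpy as np

def second_order_difference(x):
    """Difference the signal twice (zero-padded), leaving small residuals
    that can be undone exactly by two cumulative sums."""
    x = np.asarray(x, dtype=np.int64)
    return np.diff(np.diff(x, prepend=0), prepend=0)

def reconstruct(d):
    """Invert the second-order difference by integrating twice."""
    return np.cumsum(np.cumsum(d))

ecg = np.array([0, 5, 9, 12, 14, 15, 15, 14, 12, 9], dtype=np.int64)
assert np.array_equal(reconstruct(second_order_difference(ecg)), ecg)
```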
H.264/AVC Video Compression on Smartphones
NASA Astrophysics Data System (ADS)
Sharabayko, M. P.; Markov, N. G.
2017-01-01
In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.
Real-Time Aggressive Image Data Compression
1990-03-31
… implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates. … Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. … The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression …
Pulmonary atelectasis from compression of the left main bronchus by an aortic aneurysm.
Yap, K H; Sulaiman, S
2009-07-01
Pulmonary atelectasis may be caused by endobronchial lesions or by extrinsic compression of the bronchus. However, lung collapse due to compression from a thoracic aneurysm is uncommon. We report a 76-year-old hypertensive female patient who had pulmonary atelectasis due to extrinsic compression from a descending thoracic aortic aneurysm, and discuss the possible treatment options.
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression using cubic spline approximation is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.
49 CFR 173.115 - Class 2, Divisions 2.1, 2.2, and 2.3-Definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... cryogenic gas, compressed gas in solution, asphyxiant gas and oxidizing gas). For the purpose of this... °F). (f) Compressed gas in solution. A compressed gas in solution is a non-liquefied compressed gas...% by mass or more flammable components and the chemical heat of combustion is 30 kJ/g or more; (2) An...
49 CFR 173.115 - Class 2, Divisions 2.1, 2.2, and 2.3-Definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... cryogenic gas, compressed gas in solution, asphyxiant gas and oxidizing gas). For the purpose of this... °F). (f) Compressed gas in solution. A compressed gas in solution is a non-liquefied compressed gas... mass or more flammable components and the chemical heat of combustion is 30 kJ/g or more; (2) An...
49 CFR 173.115 - Class 2, Divisions 2.1, 2.2, and 2.3-Definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cryogenic gas, compressed gas in solution, asphyxiant gas and oxidizing gas). For the purpose of this... °F). (f) Compressed gas in solution. A compressed gas in solution is a non-liquefied compressed gas...% by mass or more flammable components and the chemical heat of combustion is 30 kJ/g or more; (2) An...
Water removal of wet veneer by roller pressing
Koji Adachi; Masafumi Inoue; Kozo Kanayama; Roger M. Rowell; Shuichi Kawai
2004-01-01
High moisture content, flat sawn Japanese cedar (Cryptomeria japonica D. Don) veneer was compressed using a roller press to mechanically remove water. The amount of water removed depended on the amount of compression applied. At 60% compression, 400 kg/m3 of water was removed. The process was not dependent on the size of the wood, the degree of compression, or the feed...
Fractured Rock Permeability as a Function of Temperature and Confining Pressure
NASA Astrophysics Data System (ADS)
Alam, A. K. M. Badrul; Fujii, Yoshiaki; Fukuda, Daisuke; Kodama, Jun-ichi; Kaneko, Katsuhiko
2015-10-01
Triaxial compression tests were carried out on Shikotsu welded tuff, Kimachi sandstone, and Inada granite under confining pressures of 1-15 MPa at 295 and 353 K. The permeability of the tuff declined monotonically with axial compression, and its post-compression permeability was smaller than that before axial compression. The permeability of Kimachi sandstone and Inada granite declined at first, then began to increase before the peak load, and remained almost constant in the residual strength state. The post-compression permeability of Kimachi sandstone was higher than that before axial compression under low confining pressures, but lower under higher confining pressures. On the other hand, the permeability of Inada granite was higher than that before axial compression regardless of the confining pressure. For all rock types, the post-compression permeability at 353 K was lower than at 295 K, and the influence of the confining pressure was smaller at 353 K than at 295 K. These temperature effects were clearly apparent for Inada granite; only the latter effect was apparent for Shikotsu welded tuff, and neither was obvious for Kimachi sandstone. The mechanisms causing the variation in rock permeability and the sealability of underground openings are discussed.
Magnetized Target Fusion At General Fusion: An Overview
NASA Astrophysics Data System (ADS)
Laberge, Michel; O'Shea, Peter; Donaldson, Mike; Delage, Michael; Fusion Team, General
2017-10-01
Magnetized Target Fusion (MTF) involves compressing an initial magnetically confined plasma on a timescale faster than the thermal confinement time of the plasma. If near adiabatic compression is achieved, volumetric compression of 350X or more of a 500 eV target plasma would achieve a final plasma temperature exceeding 10 keV. Interesting fusion gains could be achieved provided the compressed plasma has sufficient density and dwell time. General Fusion (GF) is developing a compression system using pneumatic pistons to collapse a cavity formed in liquid metal containing a magnetized plasma target. Low cost driver, straightforward heat extraction, good tritium breeding ratio and excellent neutron protection could lead to a practical power plant. GF (65 employees) has an active plasma R&D program including both full scale and reduced scale plasma experiments and simulation of both. Although pneumatic driven compression of full scale plasmas is the end goal, present compression studies use reduced scale plasmas and chemically accelerated aluminum liners. We will review results from our plasma target development, motivate and review the results of dynamic compression field tests and briefly describe the work to date on the pneumatic driver front.
POLYCOMP: Efficient and configurable compression of astronomical timelines
NASA Astrophysics Data System (ADS)
Tomasi, M.
2016-07-01
This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept on a portable hard drive.
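polycomp's actual chunking, Chebyshev step and bit packing are more elaborate than this; the following is only a minimal sketch of the underlying idea, a per-chunk polynomial fit that is kept when it respects a user-set error bound, with hypothetical helper names.

```python
import numpy as np

def compress_chunk(y, max_error, degree=7):
    """Fit a low-degree polynomial to one chunk (on a rescaled abscissa) and
    keep only the coefficients if the worst-case error stays below max_error."""
    x = np.linspace(-1.0, 1.0, len(y))
    coeffs = np.polynomial.polynomial.polyfit(x, y, degree)
    approx = np.polynomial.polynomial.polyval(x, coeffs)
    if np.max(np.abs(approx - y)) <= max_error:
        return "poly", coeffs        # a handful of numbers instead of len(y) samples
    return "raw", np.asarray(y)      # fall back to storing the chunk verbatim

def decompress_chunk(kind, payload, n):
    if kind == "poly":
        return np.polynomial.polynomial.polyval(np.linspace(-1.0, 1.0, n), payload)
    return payload

t = np.linspace(0.0, 1.0, 1000)
pointing = np.sin(np.pi * t) + 0.01 * t       # smooth, noiseless "pointing" stream
kind, payload = compress_chunk(pointing, max_error=1e-3)
# For this smooth chunk the polynomial branch is taken: 8 coefficients replace 1000 samples.
```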
Katz, Jeffrey M; Roopwani, Rahul; Buckner, Ira S
2013-10-01
Compressibility profiles, or functions of solid fraction versus applied pressure, are used to provide insight into the fundamental mechanical behavior of powders during compaction. These functions, collected during compression (in-die) or post ejection (out-of-die), indicate the amount of pressure that a given powder formulation requires to be compressed to a given density or thickness. To take advantage of the benefits offered by both methods, the data collected in-die during a single compression-decompression cycle will be used to generate the equivalent of a complete out-of-die compressibility profile that has been corrected for both elastic and viscoelastic recovery of the powder. This method has been found to be both a precise and accurate means of evaluating out-of-die compressibility for four common tableting excipients. Using this method, a comprehensive characterization of powder compaction behavior, specifically in relation to plastic/brittle, elastic and viscoelastic deformation, can be obtained. Not only is the method computationally simple, but it is also material-sparing. The ability to characterize powder compressibility using this approach can improve productivity and streamline tablet development studies. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D
2004-01-01
Tele-echocardiography is not widely used because of lengthy transmission times when using standard Moving Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high-bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and three MPEG-4 compression algorithms were tested, the latter at three decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the three compression levels using uncompressed files as controls. File sizes decreased from a range of 12-83 MB uncompressed to 0.03-2.3 MB with MPEG-4. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through the use of low-bandwidth lines.
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
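As a rough illustration of the prediction step shared by several of the compared schemes (this is not the paper's MVAR or context-based error model; the function name and the surrogate signal are invented), a least-squares linear predictor that turns a channel into low-variance residuals might look like:

```python
import numpy as np

def linear_predictor_residuals(x, order=2):
    """Predict each sample from the previous `order` samples with
    least-squares coefficients and return the (smaller) residuals."""
    x = np.asarray(x, dtype=float)
    # Regression matrix of lagged samples: column k holds x[t-1-k].
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    return coeffs, residuals

rng = np.random.default_rng(1)
eeg = np.cumsum(rng.normal(size=2000))        # smooth-ish surrogate channel
coeffs, res = linear_predictor_residuals(eeg)
print(res.std() / eeg.std())                  # residuals have much lower variance
```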
Compressive behavior of laminated neoprene bridge bearing pads under thermal aging condition
NASA Astrophysics Data System (ADS)
Jun, Xie; Zhang, Yannian; Shan, Chunhong
2017-10-01
The present study was conducted to obtain a better understanding of the variation in mechanical properties of laminated neoprene bridge bearing pads under thermal aging, using compression tests. A total of 5 specimens were processed in a high-temperature chamber and then tested under axial load. The main parameter considered was the duration of thermal aging. The results of the compression tests show that the specimens subjected to thermal aging are more prone to brittle failure than the standard specimen. Moreover, exposure of the steel plates, cracking and other failure phenomena are more severe than in the standard specimen. The compressive capacity, ultimate compressive strength and compressive elastic modulus of the laminated neoprene bridge bearing pads decreased dramatically with increasing thermal aging time. The attenuation trends of ultimate compressive strength and compressive elastic modulus under thermal aging follow a power function. The attenuation models were obtained by regressing the experimental data with the least squares method. The attenuation models agree well with reality, which shows that the model is applicable and has broad prospects for assessing the performance of laminated neoprene bridge bearing pads under thermal aging conditions.
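As an illustration of the regression step only (the data points below are hypothetical, not the paper's measurements, and the variable names are mine), a power-law attenuation model can be fitted by least squares as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    """Assumed attenuation model: property(t) = a * t**b."""
    return a * np.power(t, b)

# Hypothetical (aging time [h], ultimate compressive strength [MPa]) pairs.
t = np.array([24.0, 48.0, 96.0, 168.0, 336.0])
strength = np.array([58.0, 52.0, 47.0, 43.0, 38.0])

params, _ = curve_fit(power_law, t, strength, p0=(100.0, -0.1))
a, b = params
print(f"fitted model: strength = {a:.1f} * t^{b:.3f}")
```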
Marqués-Jiménez, Diego; Calleja-González, Julio; Arratibel-Imaz, Iñaki; Delextrat, Anne; Uriarte, Fernando; Terrados, Nicolás
2018-01-01
There is not enough evidence of positive effects of compression therapy on the recovery of soccer players after matches. Therefore, the objective was to evaluate the influence of different types of compression garments in reducing exercise-induced muscle damage (EIMD) during recovery after a friendly soccer match. Eighteen semi-professional soccer players (24 ± 4.07 years, 177 ± 5 cm, 71.8 ± 6.28 kg and 22.73 ± 1.81 BMI) participated in this study. A two-stage crossover design was chosen. Participants acted as controls in one match and were assigned to an experimental group (compression stockings group, full-leg compression group, shorts group) in the other match. Participants in the experimental groups played the match wearing the assigned compression garments, which were also worn for 7 h on each of the 3 days post-match. Results showed a positive, but not significant, effect of compression garments on attenuating the EIMD biomarker response, and the inflammatory and perceptual responses suggest that compression may improve physiological and psychological recovery.
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning-by-screening operation to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows de-noising and enhancement of the contours of the image.
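The paper's contribution is performing the screening threshold directly in the DCT domain; as a hedged reference point only, the equivalent spatial-domain screening operation on an uncompressed grayscale image is sketched below (the mask and function name are illustrative, not taken from the paper).

```python
import numpy as np

def screen_halftone(gray, mask):
    """Classical screening: tile the threshold mask over the image and
    set each output pixel to black or white by comparison."""
    h, w = gray.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8) * 255

# 4x4 Bayer-style ordered-dither mask scaled to the 0..255 range.
bayer4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * (255 / 16)

gray = np.linspace(0, 255, 64 * 64).reshape(64, 64)   # synthetic gradient image
binary = screen_halftone(gray, bayer4)
```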
NASA Astrophysics Data System (ADS)
Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.
2017-03-01
The present experimental investigation evaluates the effects of using blends of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both the diesel and the biodiesel fuel blend were injected at 23° BTDC into the combustion chamber. The experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; a 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5 and 15.5, respectively. The main objective is to obtain minimum specific fuel consumption, better efficiency and lower emissions at different compression ratios. The results show that, at full load, efficiency increases compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that thermal efficiency increases as the blend ratio increases. The biodiesel blend has performance closer to diesel, but emissions are reduced in all blends of B20MEOJBA compared to diesel. Thus this work focuses on the best compression ratio and the suitability of biodiesel blends in a diesel engine as an alternative fuel.
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
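As a simplified illustration of the quantization-with-dithering idea (fpack's actual deterministic dither sequence, scale selection and Rice coding are not reproduced here; the function names are invented and a seeded RNG stands in for the dither generator):

```python
import numpy as np

def quantize_with_dither(pixels, scale, seed=0):
    """Scale float pixels to integers, adding a reproducible uniform dither
    before rounding (subtractive dithering) so quantization error stays
    unbiased; the integers can then be losslessly (e.g. Rice) compressed."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-0.5, 0.5, size=pixels.shape)
    q = np.round(pixels / scale + dither).astype(np.int32)
    return q, dither

def dequantize(q, dither, scale):
    return (q - dither) * scale

img = np.random.default_rng(2).normal(loc=100.0, scale=5.0, size=(64, 64))
scale = 0.5                      # quantization step, chosen from the noise level
q, dither = quantize_with_dither(img, scale)
restored = dequantize(q, dither, scale)
print(np.max(np.abs(restored - img)))   # bounded by scale / 2
```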
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
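The method above additionally compresses each spatial location along the fully sampled directions and aligns the virtual coils; the sketch below shows only plain SVD-based coil compression of a single block, with invented names and synthetic data, as a point of reference.

```python
import numpy as np

def svd_coil_compress(data, n_virtual):
    """Compress a (n_coils, n_samples) k-space block into n_virtual
    channels by projecting onto the dominant left singular vectors."""
    U, s, _ = np.linalg.svd(data, full_matrices=False)
    A = U[:, :n_virtual].conj().T          # compression matrix
    return A @ data, A

rng = np.random.default_rng(3)
coils, samples = 32, 4096
mixing = rng.normal(size=(coils, 6)) + 1j * rng.normal(size=(coils, 6))
truth = rng.normal(size=(6, samples)) + 1j * rng.normal(size=(6, samples))
data = mixing @ truth + 0.01 * (rng.normal(size=(coils, samples))
                                + 1j * rng.normal(size=(coils, samples)))
compressed, A = svd_coil_compress(data, n_virtual=6)
print(compressed.shape)   # (6, 4096): 32 channels reduced to 6 virtual coils
```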
Experimental investigation of dynamic compression and spallation of Cerium at pressures up to 6 GPa
NASA Astrophysics Data System (ADS)
Zubareva, A. N.; Kolesnikov, S. A.; Utkin, A. V.
2014-05-01
In this study, experiments on the one-dimensional dynamic compression of cerium (Ce) samples to pressures of 0.5 to 6 GPa were conducted using various types of explosively driven generators. A VISAR laser velocimeter was used to obtain Ce free-surface velocity profiles. An isentropic compression wave was registered for the γ-phase of Ce at pressures lower than 0.76 GPa, which corresponds to the γ-α phase transition pressure in Ce. Shock rarefaction waves were also registered in several experiments. Both observations result from the anomalous compressibility of the γ-phase of Ce. On the basis of our experimental results, the compression isentrope of the Ce γ-phase was constructed. Its comparison with volumetric compression curves allowed the magnitude of shear stress under dynamic compression to be estimated for Ce. Spall strength measurements were also conducted for several samples; they showed a strong dependence of the spall strength of Ce on the strain rate.
Yao, Bibo; Zhou, Zhaoyao; Duan, Liuyang; Xiao, Zhiyu
2016-01-01
Powder metallurgy (P/M) techniques are commonly used for manufacturing porous metal materials. However, some P/M materials see limited engineering use because of performance deficiencies. A novel 304 stainless steel P/M material was produced by solid-state sintering of 304 stainless steel powders and short 304 stainless steel fibers, which were alternately laid in layers according to mass ratio. In this paper, the compressive properties of the P/M materials were characterized by a series of uniaxial compression tests. The effects of fiber content, compaction pressure and high-temperature nitriding on the compressive properties were investigated. The results indicated that, without nitriding, the samples changed from cuboid to cydariform without damage during compression. The compressive stress was enhanced with increasing fiber content over the range 0 to 8 wt.%. For compaction pressures from 55 to 75 MPa, greater compaction pressure improved the compressive stress. Moreover, high-temperature nitriding significantly improved the yield stress, but collapse failure eventually occurred. PMID:28773285
Orientation-dependent deformation mechanisms of bcc niobium nanoparticles
NASA Astrophysics Data System (ADS)
Bian, J. J.; Yang, L.; Niu, X. R.; Wang, G. F.
2018-07-01
Nanoparticles usually exhibit pronounced anisotropic properties, and a close insight into the atomic-scale deformation mechanisms is of great interest. In the present study, atomic simulations are conducted to analyse the compression of bcc nanoparticles, and orientation-dependent features are addressed. It is revealed that the surface morphology under the indenter predominantly governs the initial elastic response. The loading curve follows the flat-punch contact model in [1 1 0] compression, while it obeys the Hertzian contact model in [1 1 1] and [0 0 1] compressions. In the plastic deformation regime, full dislocation glide dominates in [1 1 0] compression, deformation twinning is prominent in [1 1 1] compression, and the two mechanisms coexist in [0 0 1] compression. These deformation mechanisms are distinct from those in bulk crystals under nanoindentation and in nanopillars under compression, and the major differences are also illuminated. Our results provide an atomic perspective on the mechanical behaviour of bcc nanoparticles and are helpful for the design of nanoparticle-based components and systems.
Preoperative Duplex Scanning is a Helpful Diagnostic Tool in Neurogenic Thoracic Outlet Syndrome.
Orlando, Megan S; Likes, Kendall C; Mirza, Serene; Cao, Yue; Cohen, Anne; Lum, Ying Wei; Freischlag, Julie A
2016-01-01
To evaluate the diagnostic role of venous and arterial duplex scanning in neurogenic thoracic outlet syndrome (NTOS). Retrospective review of patients who underwent duplex ultrasonography prior to first rib resection and scalenectomy (FRRS) for NTOS from 2005 to 2013. Abnormal scans included ipsilateral compression (IC) with abduction of the symptomatic extremity (>50% change in subclavian vessel flow), contralateral (asymptomatic side) compression (CC) or bilateral compression (BC). A total of 143 patients (76% female, average age 34, range 13-59) underwent bilateral preoperative duplex scanning. Ipsilateral compression was seen in 44 (31%), CC in 12 (8%), and BC in 14 (10%). Seventy-three (51%) patients demonstrated no compression. Patients with IC more often experienced intraoperative pneumothoraces (49% vs. 25%, P < .05) and had positive Adson tests (86% vs. 61%, P < .02). Compression of the subclavian vein or artery on duplex ultrasonography can assist in NTOS diagnosis. Ipsilateral compression on abduction often correlates with Adson testing. © The Author(s) 2016.
Data compression: The end-to-end information systems perspective for NASA space science missions
NASA Technical Reports Server (NTRS)
Tai, Wallace
1991-01-01
The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, or data dependence between, the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument, which may require buffering to interface with the spacecraft data system. On the ground, there are key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.
Effects of compressibility on turbulent relative particle dispersion
NASA Astrophysics Data System (ADS)
Shivamoggi, Bhimsen K.
2016-08-01
In this paper, phenomenological developments are used to explore the effects of compressibility on the relative particle dispersion (RPD) in three-dimensional (3D) fully developed turbulence (FDT). The role played by the compressible FDT cascade physics underlying this process is investigated. Compressibility effects are found to lead to reduction of RPD, development of the ballistic regime and particle clustering, corroborating the laboratory experiment and numerical simulation results (Cressman J. R. et al., New J. Phys., 6 (2004) 53) on the motion of Lagrangian tracers on a surface flow that constitutes a 2D compressible subsystem. These formulations are developed from the scaling relations for compressible FDT and are validated further via an alternative dimensional/scaling development for compressible FDT similar to the one given for incompressible FDT by Batchelor and Townsend (Surveys in Mechanics (Cambridge University Press) 1956, p. 352). The rationale for spatial intermittency effects is legitimized via the nonlinear scaling dependence of RPD on the kinetic-energy dissipation rate.
Tensile and compressive constitutive response of 316 stainless steel at elevated temperatures
NASA Technical Reports Server (NTRS)
Manson, S. S.; Muralidharan, U.; Halford, G. R.
1982-01-01
It is demonstrated that creep rate of 316 SS is lower by factors of 2 to 10 in compression than in tension if the microstructure is the same and tests are conducted at identical temperatures and equal but opposite stresses. Such behavior was observed for both monotonic creep and conditions involving cyclic creep. In the latter case creep rate in both tension and compression progressively increases from cycle to cycle, rendering questionable the possibility of expressing a time-stabilized constitutive relationship. The difference in creep rates in tension and compression is considerably reduced if the tension specimen is first subjected to cycles of tensile creep (reversed by compressive plasticity), while the compression specimen is first subjected to cycles of compressive creep (reversed by tensile plasticity). In both cases, the test temperature is the same and the stresses are equal and opposite. Such reduction is a reflection of differences in microstructure of the specimens resulting from different prior mechanical history.
Effect of multilayer high-compression bandaging on ankle range of motion and oxygen cost of walking
Roaldsen, K S; Elfving, B; Stanghelle, J K; Mattsson, E
2012-01-01
Objective To evaluate the effects of multilayer high-compression bandaging on ankle range of motion, oxygen consumption and subjective walking ability in healthy subjects. Method A volunteer sample of 22 healthy subjects (10 women and 12 men; aged 67 [63–83] years) were studied. The intervention included treadmill-walking at self-selected speed with and without multilayer high-compression bandaging (Profore), randomly selected. The primary outcome variables were ankle range of motion, oxygen consumption and subjective walking ability. Results Total ankle range of motion decreased 4% with compression. No change in oxygen cost of walking was observed. Less than half the subjects reported that walking-shoe comfort or walking distance was negatively affected. Conclusion Ankle range of motion decreased with compression but could probably be counteracted with a regular exercise programme. There were no indications that walking with compression was more exhausting than walking without. Appropriate walking shoes could seem important to secure gait efficiency when using compression garments. PMID:21810941
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
Effects of video compression on target acquisition performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Cha, Jae; Preece, Bradley
2008-04-01
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.
High precision Hugoniot measurements on statically pre-compressed fluid helium
NASA Astrophysics Data System (ADS)
Seagle, Christopher T.; Reinhart, William D.; Lopez, Andrew J.; Hickman, Randy J.; Thornhill, Tom F.
2016-09-01
The capability for statically pre-compressing fluid targets for Hugoniot measurements utilizing gas gun driven flyer plates has been developed. Pre-compression expands the capability for initial condition control, allowing access to thermodynamic states off the principal Hugoniot. Absolute Hugoniot measurements with an uncertainty of less than 3% in density and pressure were obtained on statically pre-compressed fluid helium utilizing a two-stage light gas gun. Helium is highly compressible; the locus of shock states resulting from dynamic loading of an initially compressed sample at room temperature is significantly denser than the cryogenic fluid Hugoniot even for relatively modest (0.27-0.38 GPa) initial pressures. The dynamic response of pre-compressed helium in the initial density range of 0.21-0.25 g/cm3 at ambient temperature may be described by a linear shock velocity (us) and particle velocity (up) relationship: us = C0 + s·up, with C0 = 1.44 ± 0.14 km/s and s = 1.344 ± 0.025.
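As a worked example using the fit quoted above (the particle velocity, initial density and initial pressure below are assumed mid-range values, and the pressure estimate uses the standard Rankine-Hugoniot momentum jump rather than the paper's analysis):

```python
# Shock velocity from the linear us-up relation reported above, and an
# approximate shock pressure from the momentum jump P - P0 = rho0 * us * up.
C0, s = 1.44, 1.344          # km/s and dimensionless, values quoted in the abstract
rho0, P0 = 0.23, 0.3         # g/cm^3 and GPa, assumed mid-range initial state
up = 3.0                     # km/s, an assumed particle velocity
us = C0 + s * up             # ~5.47 km/s
P = P0 + rho0 * us * up      # g/cm^3 * (km/s)^2 = GPa  ->  roughly 4.1 GPa
print(us, P)
```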
Mechanical Properties of Mg-Gd and Mg-Y Solid Solutions
NASA Astrophysics Data System (ADS)
Kula, Anna; Jia, Xiaohui; Mishra, Raj K.; Niewczas, Marek
2016-12-01
The mechanical properties of Mg-Gd and Mg-Y solid solutions have been studied under uniaxial tension and compression between 4 K and 298 K (-269 °C and 25 °C). The results reveal that Mg-Gd alloys exhibit higher strength and ductility under tension and compression, attributed to more effective solid solution strengthening and grain-boundary strengthening effects. Profuse twinning has been observed under compression, resulting in a material texture with a strong dominance of the basal component parallel to the compression axis. Under tension, twinning is less active and the texture evolution is controlled mostly by slip. The alloys exhibit pronounced yield stress asymmetry and significantly different work-hardening behavior under tension and compression. Increasing the Gd and/or Y concentration leads to a reduction of the tension-compression asymmetry due to the weakening of the recrystallization texture and more balanced twinning and slip activity during plastic deformation. The results suggest that under compression of Mg-Y alloys slip is more active than twinning in comparison to Mg-Gd alloys.
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1, the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviour. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
NASA Astrophysics Data System (ADS)
Yue, Xian-hua; Liu, Chun-fang; Liu, Hui-hua; Xiao, Su-fen; Tang, Zheng-hua; Tang, Tian
2018-02-01
The main goal of this study is to investigate the microstructure and electrical properties of Al-Zr-La alloys under different hot compression deformation temperatures. In particular, a Gleeble 3500 thermal simulator was used to carry out multi-pass hot compression tests. For five-pass hot compression deformation, the last-pass deformation temperatures were 240, 260, 300, 340, 380, and 420°C, respectively, where the first-pass deformation temperature was 460°C. The experimental results indicated that increasing the hot compression deformation temperature with each pass resulted in improved electrical conductivity of the alloy. Consequently, the flow stress was reduced after deformation of the samples subjected to the same number of passes. In addition, the dislocation density gradually decreased and the grain size increased after hot compression deformation. Furthermore, the dynamic recrystallization behavior was effectively suppressed during the hot compression process because spherical Al3Zr precipitates pinned the dislocation movement effectively and prevented grain boundary sliding.
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms provide high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
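As a hedged sketch of the chaotic-keystream idea only (the A5 stage and the selection of significant parameters are omitted; the function names, constants and stand-in data are illustrative, not the paper's scheme):

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.3141592, r=3.99):
    """Generate a byte keystream by iterating the logistic map
    x_{k+1} = r * x_k * (1 - x_k) and quantizing each state to 8 bits."""
    x = x0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_encrypt(data, key):
    return np.bitwise_xor(data, key)

coeffs = np.arange(16, dtype=np.uint8)          # stand-in for wavelet coefficients
ks = logistic_keystream(len(coeffs))
cipher = xor_encrypt(coeffs, ks)
assert np.array_equal(xor_encrypt(cipher, ks), coeffs)   # XOR is its own inverse
```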
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two that allows image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. they discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
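As an illustrative sketch of the first idea, treating prediction residuals as a textural descriptor (the helper names and the L1 distance are mine, not necessarily the paper's exact descriptor or metric):

```python
import numpy as np

def residual_signature(image, bins=64):
    """Index an image by the histogram of its horizontal prediction
    residuals (previous-pixel predictor), i.e. the data a predictive
    lossless coder would see."""
    img = image.astype(np.int32)
    residuals = (img[:, 1:] - img[:, :-1]).ravel()
    hist, _ = np.histogram(residuals, bins=bins, range=(-255, 255), density=True)
    return hist

def signature_distance(a, b):
    return np.sum(np.abs(a - b))     # L1 distance between normalized histograms

rng = np.random.default_rng(4)
img1 = rng.integers(0, 256, (128, 128))
img2 = rng.integers(0, 256, (128, 128))
print(signature_distance(residual_signature(img1), residual_signature(img2)))
```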
NASA Technical Reports Server (NTRS)
Vandermey, Nancy E.; Morris, Don H.; Masters, John E.
1991-01-01
Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. Performance of unnotched specimens having stitch rows at either 0 degree or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. Zero degree and 90 degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, both in static compression and fatigue cycling effects.
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
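A minimal sketch of the priority-driven update policy described above, assuming hypothetical region names and a 30 Hz input frame clock; the rule-based scene analysis itself is not reproduced here.

```python
# Illustrative sketch (not the original system's code): a scheduler that
# transmits the background edge map at 1 frame/s and a high-priority
# target window at 7.5 frames/s, approximating the update policy above.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    update_hz: float
    next_due: float = 0.0

def schedule(regions, frame_times):
    """Yield (time, names of regions transmitted at that time)."""
    for t in frame_times:
        due = [r for r in regions if t >= r.next_due]
        for r in due:
            r.next_due = t + 1.0 / r.update_hz
        yield t, [r.name for r in due]

if __name__ == "__main__":
    regions = [Region("background_edge_map", 1.0),
               Region("primary_target_window", 7.5)]
    # Camera delivers frames at 30 Hz; only due regions are transmitted.
    ticks = [i / 30.0 for i in range(61)]
    for t, sent in schedule(regions, ticks):
        if sent:
            print(f"{t:5.3f}s -> {sent}")
```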
Medical Treatment for Postthrombotic Syndrome
Palacios, Federico Silva; Rathbun, Suman Wasan
2017-01-01
Deep vein thrombosis (DVT) is a prevalent disease. About 20 to 30% of patients with DVT will develop postthrombotic syndrome (PTS) within months after the initial diagnosis of DVT. There is no gold standard for diagnosis of PTS, but clinical signs include pitting edema, hyperpigmentation, phlebectatic crown, venous eczema, and varicose veins. Several scoring systems have been developed for diagnostic evaluation. Conservative treatment includes compression therapy, medications, lifestyle modification, and exercise. Compression therapy, the mainstay and most proven noninvasive therapy for patients with PTS, can be prescribed as compression stockings, bandaging, adjustable compression wrap devices, and intermittent pneumatic compression. Medications may be used to both prevent and treat PTS and include anticoagulation, anti-inflammatories, vasoactive drugs, antibiotics, and diuretics. Exercise, weight loss, smoking cessation, and leg elevation are also recommended. Areas of further research include the duration, compliance, and strength of compression stockings in the prevention of PTS after DVT; the use of intermittent compression devices; the optimal medical anticoagulant regimen after endovascular therapy; and the role of newer anticoagulants as anti-inflammatory agents. PMID:28265131
Khosravan, Shahla; Mohammadzadeh-Moghadam, Hossein; Mohammadzadeh, Fatemeh; Fadafen, Samane Ajam Khames; Gholami, Malihe
2017-01-01
Breast engorgement affects lactation. The present study was conducted to determine the effect of hollyhock compresses combined with warm and cold compresses on improving breast engorgement in lactating women. Participants were 40 women with breast engorgement divided into intervention and control groups; both groups received routine interventions, a warm compress before nursing, and a cold compress after nursing, while the intervention group additionally received a hollyhock compress. Both groups received these treatments 6 times over 2 days. The data collected were analyzed in SPSS-16 using a generalized estimating equation. According to the results, a significant difference was observed in overall breast engorgement severity in the intervention group (P < .001), and engorgement severity also had a significant relationship with time (P < .001). According to the findings, hollyhock leaf compresses combined with routine interventions can improve breast engorgement. © The Author(s) 2015.
Three-dimensional numerical simulation for plastic injection-compression molding
NASA Astrophysics Data System (ADS)
Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn
2018-03-01
Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower flow residual stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear problems caused by the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage, and the non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated using the proposed simulation method and compared with those of injection molding. The results reveal that the fountain flow effect becomes significant as the cavity thickness increases during compression. Back flow also plays an important role in the flow pattern and the redistribution of cavity pressure. The pressure variation among points along the flow path is more complex than the monotonic decrease observed in conventional injection molding.
NASA Astrophysics Data System (ADS)
Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.
2008-12-01
Wider dissemination of medical digital video libraries is constrained by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been shown that these contradictory requirements can be balanced for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures served as the case study). The main supporting assumption is that content can be compressed as long as clinicians cannot perceive a loss of diagnostic fidelity (visually lossless compression). Several commercial codecs were evaluated through combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who ranked compressed bronchoscopic video content by quality using a bubble-sort procedure. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.
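The subjective ranking procedure can be pictured with the short sketch below; the `prefer(a, b)` callback stands in for a clinician's pairwise preference between two compressed clips and is purely hypothetical.

```python
# Hedged sketch of the subjective test idea: codecs are ranked by
# repeated pairwise comparisons, as in a bubble sort where each
# "comparison" is a clinician's preference between two compressed clips.
def bubble_rank(codecs, prefer):
    """Return codecs ordered from best to worst perceived quality.
    `prefer(a, b)` returns True if the clip encoded with `a` looks better."""
    order = list(codecs)
    n = len(order)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if prefer(order[j + 1], order[j]):
                order[j], order[j + 1] = order[j + 1], order[j]
    return order

if __name__ == "__main__":
    # Toy stand-in: pretend each codec has a latent quality score.
    quality = {"codec_A": 3.2, "codec_B": 4.1, "codec_C": 2.7}
    ranking = bubble_rank(quality, lambda a, b: quality[a] > quality[b])
    print(ranking)  # ['codec_B', 'codec_A', 'codec_C']
```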
Compressing a spinodal surface at fixed area: bijels in a centrifuge.
Rumble, Katherine A; Thijssen, Job H J; Schofield, Andrew B; Clegg, Paul S
2016-05-11
Bicontinuous interfacially jammed emulsion gels (bijels) are solid-stabilised emulsions with two inter-penetrating continuous phases. Employing centrifugal compression, we find that, macroscopically, the bijel yields at relatively low angular acceleration. Both continuous phases escape from the top of the structure, making any compression immediately irreversible. Microscopically, the bijel becomes anisotropic, with the domains aligned perpendicular to the compression direction, which inhibits further liquid expulsion; this contrasts strongly with the sedimentation behaviour of colloidal gels. The original structure can, however, be preserved close to the top of the sample, and thus the change to an anisotropic structure suggests internal yielding. Any air bubbles trapped in the bijel are found to aid compression by forming channels aligned parallel to the compression direction, which provide a route for liquid to escape.
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
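The point about noise limiting lossless compression can be illustrated with a quick experiment (not the camera's actual pipeline): a smooth synthetic image compresses well with a generic lossless coder, while the same image with additive sensor-like noise does not.

```python
# Illustration of the noise argument above: additive noise destroys the
# redundancy that lossless coders exploit, so the achievable ratio drops.
import zlib
import numpy as np

def lossless_ratio(img):
    raw = img.astype(np.uint8).tobytes()
    return len(raw) / len(zlib.compress(raw, 9))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # smooth synthetic test image (identical gradient rows)
    clean = np.tile((np.arange(512) // 4).astype(np.uint8), (512, 1))
    noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
    print("clean:", round(lossless_ratio(clean), 2),
          "noisy:", round(lossless_ratio(noisy), 2))
```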
Environmental effects on the compressive properties - Thermosetting vs. thermoplastic composites
NASA Technical Reports Server (NTRS)
Haque, A.; Jeelani, S.
1992-01-01
The influence of moisture and temperature on the compressive properties of graphite/epoxy and APC-2 material systems was investigated to assess the viability of using APC-2 instead of graphite/epoxy. The data obtained indicate that the moisture absorption rate of T-300/epoxy is higher than that of APC-2, and that a thick plate with a smaller surface area absorbs less moisture than a thin plate with a larger surface area. The compressive strength and modulus of APC-2 are higher than those of the T-300/epoxy composite, and APC-2 sustains higher compressive strength in the presence of moisture. Compressive strength and modulus decrease with increasing temperature over the range 23-100 C. The compression failure was in the form of delamination, interlaminar shear, and end brooming.
Energy Savings Potential and RD&D Opportunities for Non-Vapor-Compression HVAC Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
While vapor-compression technologies have served heating, ventilation, and air-conditioning (HVAC) needs very effectively, and have been the dominant HVAC technology for close to 100 years, the conventional refrigerants used in vapor-compression equipment contribute to global climate change when released to the atmosphere. This Building Technologies Office report (1) identifies alternatives to vapor-compression technology in residential and commercial HVAC applications; (2) characterizes these technologies based on their technical energy savings potential, development status, non-energy benefits, and other factors affecting end-user acceptance and their ability to compete with conventional vapor-compression systems; and (3) makes specific research, development, and deployment (RD&D) recommendations to support further development of these technologies, should DOE choose to support non-vapor-compression technology further.
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent terrain models. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields and developed the decompression algorithm for modern GPU architectures, already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured at up to 91%).
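A much-simplified sketch of the underlying idea, not the published 2D-RBUC codec: residuals are grouped into small 2D blocks, each block records the minimum bit width needed for its values, and every block can then be decoded independently, which is what makes SIMD-parallel decompression straightforward. Actual bit-packing of the values is omitted for clarity.

```python
# Simplified illustration of block-wise variable-width residual coding.
import numpy as np

BLOCK = 4  # 4x4 residual blocks

def zigzag(v):
    """Map signed residuals to unsigned integers before bit-width analysis."""
    return np.where(v >= 0, 2 * v, -2 * v - 1)

def unzigzag(u):
    return np.where(u % 2 == 0, u // 2, -(u + 1) // 2)

def encode(residuals):
    """Split a 2D residual grid into blocks of (bit width, unsigned values)."""
    h, w = residuals.shape
    out = []
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            u = zigzag(residuals[i:i + BLOCK, j:j + BLOCK])
            bits = int(u.max()).bit_length()  # width a real codec would pack with
            out.append((bits, u))
    return out

def decode(blocks, shape):
    """Blocks are independent, so this loop could run SIMD-parallel."""
    h, w = shape
    res = np.zeros(shape, dtype=np.int64)
    it = iter(blocks)
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            _, u = next(it)
            res[i:i + BLOCK, j:j + BLOCK] = unzigzag(u)
    return res

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    residuals = rng.integers(-5, 6, (8, 8))
    blocks = encode(residuals)
    print([b for b, _ in blocks])                             # bit widths per block
    print(np.array_equal(decode(blocks, (8, 8)), residuals))  # True (lossless)
```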
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper, we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for constructing the true KL basis with low memory consumption is proposed, and our recently proposed scheme for optimally allocating losses among KL functions during compression is used. Compared with standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
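A minimal sketch of the decorrelation step (the paper's fast low-memory KL construction and its optimal loss-allocation scheme are not reproduced): project the color planes onto the eigenvectors of their covariance matrix, compress the decorrelated planes instead of the originals, and invert the transform after decoding.

```python
# Minimal KL (PCA) decorrelation of color planes for a multispectral image.
import numpy as np

def kl_forward(planes):
    """planes: (C, H, W) array -> (coefficient planes, basis, mean)."""
    c, h, w = planes.shape
    X = planes.reshape(c, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    cov = np.cov(X - mean)
    # eigenvectors of the covariance matrix form the KL basis
    _, basis = np.linalg.eigh(cov)
    coeffs = basis.T @ (X - mean)
    return coeffs.reshape(c, h, w), basis, mean

def kl_inverse(coeffs, basis, mean):
    c, h, w = coeffs.shape
    X = basis @ coeffs.reshape(c, -1) + mean
    return X.reshape(c, h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cmyk = rng.random((4, 32, 32))
    coeffs, basis, mean = kl_forward(cmyk)
    # The decorrelated planes would now be compressed (e.g. with JPEG)
    # instead of the originals; here we only check invertibility.
    print(np.allclose(kl_inverse(coeffs, basis, mean), cmyk))
```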
SCALCE: boosting sequence compression algorithms using locally consistent encoding.
Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk
2012-12-01
The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, leveraging the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to a 3.34-fold improvement in compression rate and a 1.26-fold improvement in running time. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. fhach@cs.sfu.ca or cenk@cs.sfu.ca. Supplementary data are available at Bioinformatics online.
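To convey the "boosting" idea without reproducing Locally Consistent Parsing, the sketch below uses a much cruder proxy, bucketing reads by their lexicographically smallest k-mer, purely to show that placing similar reads next to each other helps a generic compressor such as gzip; it is an illustration, not SCALCE.

```python
# Toy demonstration: reordering similar reads improves generic compression.
import gzip
import random

def smallest_kmer(read, k=8):
    """Crude stand-in for a locally consistent core: the minimal k-mer."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def compressed_size(reads):
    return len(gzip.compress("\n".join(reads).encode()))

if __name__ == "__main__":
    random.seed(0)
    # simulate overlapping reads drawn from one reference sequence
    ref = "".join(random.choice("ACGT") for _ in range(5000))
    starts = [random.randrange(len(ref) - 100) for _ in range(2000)]
    reads = [ref[s:s + 100] for s in starts]
    reordered = sorted(reads, key=smallest_kmer)
    print("original order :", compressed_size(reads))
    print("k-mer reordered:", compressed_size(reordered))
```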
First-principles molecular dynamics simulations of anorthite (CaAl2Si2O8) glass at high pressure
NASA Astrophysics Data System (ADS)
Ghosh, Dipta B.; Karki, Bijaya B.
2018-06-01
We report a first-principles molecular dynamics study of the equation of state, structural, and elastic properties of CaAl2Si2O8 glass at 300 K as a function of pressure up to 155 GPa. Our results for the ambient pressure glass show that: (1) as in other silicates, Si atoms remain mostly (> 95%) in tetrahedral oxygen coordination; (2) unlike in the anorthite crystal, high-coordination (> 4) Al atoms are present with about 30% abundance; and (3) both non-bridging (8%) and triply coordinated (17%) oxygen are present in significant amounts. To achieve the glass configurations at various pressures, we use two different simulation schedules: cold and hot compression. Cold compression refers to sequential compression at 300 K, whereas hot compression refers to compression at 3000 K followed by isochoric quenching to 300 K. At the initial stages of compression (0-10 GPa), bond distance and coordination increase smoothly in the hot-compressed glass, whereas in cold compression Si (and, to some extent, Al) displays mainly topological changes in this pressure interval, without significant change in average bond distance or coordination. Further increase in pressure results in gradual increases in mean coordination, with Si-O (Al-O) coordination eventually reaching and remaining at 6 (6.5) at the highest compression. Similarly, the ambient pressure Ca-O coordination of 5.9 increases to 9.5 at 155 GPa. The continuous pressure-induced increase in the proportion of oxygen triclusters, along with the appearance and increasing abundance of tetrahedral oxygens, results in a mean O-T (T = Si and Al) coordination of > 3 from a value of 2.1 at ambient pressure. Due to the absence of a kinetic barrier, the hot-compressed glasses consistently produce greater densities and higher coordination numbers than the cold-compressed cases. Decompressed glasses show irreversible compaction along with retention of high-coordination species when decompressed from pressures ≥ 10 GPa. The different density retention amounts (12, 17, and 20% when decompressed from 12, 40, and 155 GPa, respectively) signify that the degree of irreversibility depends on the peak pressure reached before decompression. The calculated compressional and shear wave velocities (5 and 3 km/s at 0 GPa) for the cold-compressed case display a sluggish pressure response in the 0-10 GPa interval, as opposed to the smooth increase in the hot-compressed one. Shear velocity saturates rather rapidly at a value of 5 km/s, whereas compressional wave velocity increases continuously, reaching or exceeding 12.5 km/s at 155 GPa. These structural details suggest that the pressure response of the cold-compressed glasses is not only inherently different in the 0-10 GPa interval, but also that their density, coordination, and wave velocity values are consistently lower than those of the hot-compressed glasses. Hot-compressed glasses may, therefore, be the better analog in the study of high-pressure silicate melts.
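The two simulation schedules can be summarized schematically as below; `run_md` is a hypothetical placeholder for the first-principles MD engine (not reproduced here), and the per-volume hot-equilibrate-then-quench loop is one plausible reading of the hot-compression protocol described above.

```python
# Schematic only: `run_md` is a hypothetical stand-in for an NVT
# first-principles MD run and returns the final configuration.
def run_md(config, temperature_k, volume, steps=5000):
    # placeholder: a real plane-wave DFT-MD engine would go here
    return {"structure": config, "T": temperature_k, "V": volume}

def cold_compression(start_config, volumes):
    """Sequential compression: every volume is equilibrated at 300 K."""
    config = start_config
    for v in volumes:
        config = run_md(config, 300, v)
    return config

def hot_compression(start_config, volumes):
    """Equilibrate each volume at 3000 K, then quench isochorically to 300 K
    (one plausible reading of the protocol in the abstract)."""
    config = start_config
    for v in volumes:
        config = run_md(config, 3000, v)   # hot equilibration
        config = run_md(config, 300, v)    # isochoric quench
    return config
```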