Sample records for alvar variable compression

  1. Alvar soils and ecology in the boreal forest and taiga regions of Canada.

    NASA Astrophysics Data System (ADS)

    Ford, D.

    2012-04-01

    Alvars have been defined as "...a biological association based on a limestone plain with thin or no soil and, as a result, sparse vegetation. Trees and bushes are stunted or absent ... may include prairie spp." (Wikipedia). They were first described in southern Sweden, Estonia, the karst pavements of Yorkshire (UK) and the Burren (Eire). In North America alvars have been recognised and reported only in the Mixed Forest (deciduous/coniferous) Zone around the Great Lakes. An essential feature of the hydrologic controls on vegetation growth on natural alvars is that these terrains were glaciated in the last (Wisconsinan/Würm) ice age: the upper beds of any pre-existing epikarst were stripped away by glacier scour and there has been insufficient time for post-glacial epikarst to achieve the depths and densities required to support the deep rooting needed for mature forest cover. However, in the sites noted above, the alvars have been created, at least in part, by deforestation, overgrazing, burning to create browse, etc. and thus should not be considered wholly natural phenomena. There are extensive natural alvars in the Boreal Forest and Taiga ecozones in Canada. Their nature and variety will be illustrated with examples from cold temperate maritime climate settings in northern Newfoundland and the Gulf of St Lawrence and cold temperate continental to sub-arctic climates in northern Manitoba and the Northwest Territories.

  2. Does team lifting increase the variability in peak lumbar compression in ironworkers?

    PubMed

    Faber, Gert; Visser, Steven; van der Molen, Henk F; Kuijer, P Paul F M; Hoozemans, Marco J M; Van Dieën, Jaap H; Frings-Dresen, Monique H W

    2012-01-01

    Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared single-worker lifts (25-kg, iron bar) to two-worker lifts (50-kg, two iron bars) and to four-worker lifts (100-kg, iron lattice). Inverse dynamics was used to calculate peak lumbar compression forces. To assess the variability in peak lumbar loading, all three lifting tasks were performed six times. Results showed that the variability in peak lumbar loading was somewhat higher in the team lifts compared to the single-worker lifts. However, despite this increased variability, team lifts did not result in larger maximum peak lumbar compression forces. Therefore, it was concluded that, from a biomechanical point of view, team lifting does not result in an additional risk for low back complaints in ironworkers.

  3. Subgrid-scale effects in compressible variable-density decaying turbulence

    DOE PAGES

    GS, Sidharth; Candler, Graham V.

    2018-05-08

    Many turbulent flows are characterized by complex scale interactions and vorticity generation caused by compressibility and variable-density effects. In the large-eddy simulation of variable-density flows, these processes manifest themselves as subgrid-scale (SGS) terms that interact with the resolved-scale flow. This paper studies the effect of the variable-density SGS terms and quantifies their relative importance. We consider the SGS terms appearing in the density-weighted Favre-filtered equations and in the unweighted Reynolds-filtered equations. The conventional form of the Reynolds-filtered momentum equation is complicated by a temporal SGS term; therefore, we derive a new form of the Reynolds-filtered governing equations that does not contain this term and has only double-correlation SGS terms. The new form of the filtered equations has terms that represent the SGS mass flux, pressure-gradient acceleration and velocity-dilatation correlation. To evaluate the dynamical significance of the variable-density SGS effects, we carry out direct numerical simulations of compressible decaying turbulence at a turbulent Mach number of 0.3. Two different initial thermodynamic conditions are investigated: homentropic and a thermally inhomogeneous gas with regions of differing densities. The simulated flow fields are explicitly filtered to evaluate the SGS terms. The importance of the variable-density SGS terms is quantified relative to the SGS specific stress, which is the only SGS term active in incompressible constant-density turbulence. It is found that while the variable-density SGS terms in the homentropic case are negligible, they are dynamically significant in the thermally inhomogeneous flows. Investigation of the variable-density SGS terms is therefore important, not only to develop variable-density closures but also to improve the understanding of scale interactions in variable-density flows.

  4. Subgrid-scale effects in compressible variable-density decaying turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GS, Sidharth; Candler, Graham V.

    Many turbulent flows are characterized by complex scale interactions and vorticity generation caused by compressibility and variable-density effects. In the large-eddy simulation of variable-density flows, these processes manifest themselves as subgrid-scale (SGS) terms that interact with the resolved-scale flow. This paper studies the effect of the variable-density SGS terms and quantifies their relative importance. We consider the SGS terms appearing in the density-weighted Favre-filtered equations and in the unweighted Reynolds-filtered equations. The conventional form of the Reynolds-filtered momentum equation is complicated by a temporal SGS term; therefore, we derive a new form of the Reynolds-filtered governing equations that does not contain this term and has only double-correlation SGS terms. The new form of the filtered equations has terms that represent the SGS mass flux, pressure-gradient acceleration and velocity-dilatation correlation. To evaluate the dynamical significance of the variable-density SGS effects, we carry out direct numerical simulations of compressible decaying turbulence at a turbulent Mach number of 0.3. Two different initial thermodynamic conditions are investigated: homentropic and a thermally inhomogeneous gas with regions of differing densities. The simulated flow fields are explicitly filtered to evaluate the SGS terms. The importance of the variable-density SGS terms is quantified relative to the SGS specific stress, which is the only SGS term active in incompressible constant-density turbulence. It is found that while the variable-density SGS terms in the homentropic case are negligible, they are dynamically significant in the thermally inhomogeneous flows. Investigation of the variable-density SGS terms is therefore important, not only to develop variable-density closures but also to improve the understanding of scale interactions in variable-density flows.

  5. Compression based entropy estimation of heart rate variability on multiple time scales.

    PubMed

    Baumert, Mathias; Voss, Andreas; Javorka, Michal

    2013-01-01

    Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
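
    The multiscale procedure described above (coarse-graining, then lossless compression of the quantized series) can be sketched in Python. This is a minimal illustration rather than the authors' implementation: zlib's DEFLATE (an LZ77 derivative) stands in for the Lempel-Ziv compressor, and the 8-bit quantization step and function names are assumptions.

```python
import zlib

def coarse_grain(rr, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    return [sum(rr[i:i + scale]) / scale
            for i in range(0, len(rr) - scale + 1, scale)]

def compression_entropy(rr, scale=1):
    """Ratio of compressed size to original size of the quantized,
    coarse-grained RR series; smaller values mean a more regular series."""
    cg = coarse_grain(rr, scale)
    # Quantize to 8-bit symbols so the series becomes a byte string.
    lo, hi = min(cg), max(cg)
    span = (hi - lo) or 1.0
    data = bytes(int(255 * (x - lo) / span) for x in cg)
    return len(zlib.compress(data, 9)) / len(data)
```

    A shuffled surrogate series should compress worse (higher ratio) than the original structured series, mirroring the study's comparison.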

  6. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.
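
    The core geometric idea, a variable-density spiral per slice encoding plus a random per-interleave rotation, can be sketched as follows. This is a toy illustration under assumed names and parameters; the gradient-waveform delays and the actual density design of VDR-SoS are not modeled.

```python
import math
import random

def variable_density_spiral(n_pts, turns, density_pow, rot=0.0):
    """One spiral interleave in normalized k-space: radius grows as
    t**density_pow (density_pow > 1 samples the center more densely),
    rotated by the angle `rot`."""
    pts = []
    for i in range(n_pts):
        t = i / (n_pts - 1)
        r = t ** density_pow
        theta = 2 * math.pi * turns * t + rot
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def randomized_stack(n_slices, n_pts=256, turns=8, seed=0):
    """Stack of spirals: each slice encoding gets its own density exponent
    and a random rotation, reducing coherence of the 3D sampling pattern."""
    rng = random.Random(seed)
    return [variable_density_spiral(
                n_pts, turns,
                density_pow=1.0 + s / max(n_slices - 1, 1),
                rot=rng.uniform(0, 2 * math.pi))
            for s in range(n_slices)]
```

    Rotating an interleave changes only the angular positions, not the radial density profile, which is why the per-slice randomization stays compatible with the designed variable density.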

  7. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
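
    The vector quantization core of such a scheme, mapping each block to its nearest codeword and reconstructing from the transmitted indices, can be sketched as follows. The LFD analysis, quadtree partitioning, and modified K-means training are omitted here, and all names are illustrative.

```python
def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword
    (nearest in squared-error distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(blk, codebook[k]))
            for blk in blocks]

def vq_decode(indices, codebook):
    """Reconstruction: replace each transmitted index by its codeword."""
    return [codebook[k] for k in indices]
```

    Compression comes from transmitting only the index of each block; the variable-block-size idea keeps large blocks in smooth regions and small blocks where the LFD signals high local complexity.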

  8. Influence of acquisition frame-rate and video compression techniques on pulse-rate variability estimation from vPPG signal.

    PubMed

    Cerina, Luca; Iozzia, Luca; Mainardi, Luca

    2017-11-14

    In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effect on PRV parameter estimation of different video acquisition frame-rates, from 60 frames-per-second (fps) down to 7.5 fps, and of different video compression techniques using both lossless and lossy codecs. Video recordings were acquired through an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for region of interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence to a pulsatile signal. The frame-rate degradation was simulated on video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; the FFV1 codec was used for lossless compression and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics and local peak displacements, which degrades performance across all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) indexes in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. Such a degradation can be partially mitigated by up-sampling the measured

  9. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  10. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area varied widely both between Asian women [relative standard deviation (RSD)≥21.0%] and within them (p<0.0001). The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm and MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). A force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  11. Variable valve timing in a homogenous charge compression ignition engine

    DOEpatents

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogeneous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam-actuated intake and/or exhaust valve.

  12. Working characteristics of variable intake valve in compressed air engine.

    PubMed

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine.

  13. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    PubMed Central

    Yu, Qihui; Shi, Yan; Cai, Maolin

    2014-01-01

    A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, a performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine. PMID:25379536

  14. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  15. Effect of Variable Compression Ratio on Performance of a Diesel Engine Fueled with Karanja Biodiesel and its Blends

    NASA Astrophysics Data System (ADS)

    Mishra, Rahul Kumar; Soota, Tarun; Singh, Ranjeet

    2017-08-01

    Rapid exploration and lavish consumption of underground petroleum resources have led to the scarcity of fossil fuels; moreover, the toxic emissions from such fuels are pernicious and have increased health hazards around the world. The aim was therefore to find an alternative fuel that meets the requirements currently met by petroleum fuels. Biodiesel is a clean, renewable and bio-degradable fuel with several advantages, among the most important of which are its eco-friendliness and its better knocking characteristics than diesel fuel. In this work the performance of Karanja oil was analyzed on a four-stroke, single-cylinder, water-cooled, variable compression ratio diesel engine. The fuel used was 5%-25% karanja oil methyl ester by volume in diesel. The results thus obtained are compared with standard diesel fuel. Several parameters, i.e. brake thermal efficiency, brake specific fuel consumption, and exhaust gas temperature, are determined at all operating conditions and at compression ratios of 17 and 17.5.

  16. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    ...the Ziv-Lempel-Welch compression algorithm (ZLW) [5][4] was used for experiments and for software development. Additional thanks to Roger Boss, Bill... vol. 17, no. 6 (June 4) and with the minimum number of maps. [5] J. Ziv and A. Lempel, Compression of Individual Sequences via Variable-Rate Coding... transient and should be discarded. 2.5 Collage Theorem... 3.2 Deterministic Algorithm for IFS Attractor. For fast image compression the best...

  17. Performance and exhaust emission characteristics of variable compression ratio diesel engine fuelled with esters of crude rice bran oil.

    PubMed

    Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu

    2016-01-01

    As a substitute to petroleum-derived diesel, biodiesel has high potential as a renewable and environment friendly energy source. For petroleum importing countries the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be good and viable feedstock for biodiesel production. A two step esterification is carried out for higher free fatty acid crude rice bran oil. Blends of 10, 20 and 40 % by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratio 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined. Cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in 18.6 % decrease in brake specific fuel consumption and 14.66 % increase in brake thermal efficiency on an average. Cylinder pressure increases by 15 % when compression ratio is increased. Carbon monoxide emission decreased by 22.27 %, hydrocarbon decreased by 38.4 %, carbon dioxide increased by 17.43 % and oxides of nitrogen as NOx emission increased by 22.76 % on an average when compression ratio is increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel with increase in compression ratio.

  18. Variable Thermal-Force Bending of a Three-Layer Bar with a Compressible Filler

    NASA Astrophysics Data System (ADS)

    Starovoitov, E. I.; Leonenko, D. V.

    2017-11-01

    Deformation of a three-layer elastoplastic bar with a compressible filler in a temperature field is considered. To describe the kinematics of a pack asymmetric across its thickness, the hypothesis of a broken line is accepted, according to which the Bernoulli hypothesis is true in the thin bearing layers, and the Timoshenko hypothesis is valid for a filler compressible across its thickness, with a linear approximation of displacements across the layer thickness. The work of the filler in the tangential direction is taken into account. The physical stress-strain relations correspond to the theory of small elastoplastic deformations. Temperature variations are calculated from a formula obtained by averaging the thermophysical properties of the layer materials across the bar thickness. Using the variational method, a system of differential equilibrium equations is derived. On the boundary, the kinematic conditions of simply supported ends of the bar are assumed. The solution of the boundary problem is reduced to the search for four functions, namely, the deflections and longitudinal displacements of the median surfaces of the bearing layers. An analytical solution is derived by the method of elastic solutions with the use of the Moskvitin theorem on variable loadings. Its numerical analysis is performed for the cases of continuous and local loads.

  19. A burst compression and expansion technique for variable-rate users in satellite-switched TDMA networks

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1990-01-01

    A burst compression and expansion technique is described for asynchronously interconnecting variable-data-rate users with cost-efficient ground terminals in a satellite-switched, time-division-multiple-access (SS/TDMA) network. Compression and expansion buffers in each ground terminal convert between lower-rate, asynchronous, continuous-user data streams and higher-rate TDMA bursts synchronized with the satellite-switched timing. The technique described uses a first-in, first-out (FIFO) memory approach which enables the use of inexpensive clock sources by both the users and the ground terminals and obviates the need for elaborate user clock synchronization processes. A continuous range of data rates from kilobits per second to rates approaching the modulator burst rate (hundreds of megabits per second) can be accommodated. The technique was developed for use in the NASA Lewis Research Center System Integration, Test, and Evaluation (SITE) facility. Some key features of the technique have also been implemented in the ground terminals developed at NASA Lewis for use in on-orbit evaluation of the Advanced Communications Technology Satellite (ACTS) high burst rate (HBR) system.
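
    The FIFO buffering idea can be sketched as follows: a hypothetical compression buffer accumulates the continuous low-rate user stream and releases fixed-length bursts, with the matching expansion step at the receiver. Class and function names are assumptions, not taken from the SITE implementation.

```python
from collections import deque

class BurstBuffer:
    """FIFO compression buffer: accumulates a continuous low-rate sample
    stream and emits a fixed-length high-rate burst each time it fills."""
    def __init__(self, burst_len):
        self.fifo = deque()
        self.burst_len = burst_len

    def push(self, sample):
        """Add one sample; return a complete burst when available, else None."""
        self.fifo.append(sample)
        if len(self.fifo) >= self.burst_len:
            return [self.fifo.popleft() for _ in range(self.burst_len)]
        return None

def expand(bursts):
    """Expansion buffer at the receiver: flatten bursts back into a
    continuous stream at the user's lower rate."""
    return [s for burst in bursts for s in burst]
```

    Because the FIFO absorbs the difference between the user clock and the burst timing, neither side needs a synchronized clock, which is the point of the technique described above.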

  20. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
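
    The two ingredients named in the abstract, sample-to-sample prediction and variable-length coding of the residuals, can be illustrated with a short sketch. A plain Rice code stands in here for the paper's concatenated-code Basic Compressor, so this shows the principle rather than the actual coder; all function names are illustrative.

```python
def residuals(samples):
    """Sample-to-sample prediction: keep the first value, then transmit
    only the successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def zigzag(d):
    """Map signed residuals to nonnegative ints: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(values, k):
    """Rice code: quotient v >> k in unary (ones plus a terminating zero),
    remainder in k fixed bits; small residuals get short codewords."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0] + [(r >> i) & 1 for i in reversed(range(k))]
    return bits
```

    An adaptive coder in the spirit of the abstract would pick the parameter k (or a different code entirely) per block of pixels according to the local statistics.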

  1. Dynamic control of a homogeneous charge compression ignition engine

    DOEpatents

    Duffy, Kevin P [Metamora, IL; Mehresh, Parag [Peoria, IL; Schuh, David [Peoria, IL; Kieser, Andrew J [Morton, IL; Hergart, Carl-Anders [Peoria, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Liechty, Michael P [Chillicothe, IL

    2008-06-03

    A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

  2. Interfraction Liver Shape Variability and Impact on GTV Position During Liver Stereotactic Radiotherapy Using Abdominal Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eccles, Cynthia L., E-mail: cynthia.eccles@rob.ox.ac.uk; Dawson, Laura A.; Moseley, Joanne L.

    2011-07-01

    Purpose: For patients receiving liver stereotactic body radiotherapy (SBRT), abdominal compression can reduce organ motion, and daily image guidance can reduce setup error. The reproducibility of liver shape under compression may impact treatment delivery accuracy. The purpose of this study was to measure the interfractional variability in liver shape under compression, after best-fit rigid liver-to-liver registration from kilovoltage (kV) cone beam computed tomography (CBCT) scans to planning computed tomography (CT) scans and its impact on gross tumor volume (GTV) position. Methods and Materials: Evaluable patients were treated in a Research Ethics Board-approved SBRT six-fraction study with abdominal compression. Kilovoltage CBCT scans were acquired before treatment and reconstructed as respiratory sorted CBCT scans offline. Manual rigid liver-to-liver registrations were performed from exhale-phase CBCT scans to exhale planning CT scans. Each CBCT liver was contoured, exported, and compared with the planning CT scan for spatial differences, by use of an in-house-developed finite-element-model-based deformable registration (MORFEUS). Results: We evaluated 83 CBCT scans from 16 patients with 30 GTVs. The mean volume of liver that deformed by greater than 3 mm was 21.7%. Excluding 1 outlier, the maximum volume that deformed by greater than 3 mm was 36.3% in a single patient. Over all patients, the absolute maximum deformations in the left-right (LR), anterior-posterior (AP), and superior-inferior directions were 10.5 mm (SD, 2.2), 12.9 mm (SD, 3.6), and 5.6 mm (SD, 2.7), respectively. The absolute mean predicted impact of liver volume displacements on GTV by use of center of mass displacements was 0.09 mm (SD, 0.13), 0.13 mm (SD, 0.18), and 0.08 mm (SD, 0.07) in the left-right, anterior-posterior, and superior-inferior directions, respectively. Conclusions: Interfraction liver deformations in patients undergoing SBRT under abdominal compression after rigid

  3. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
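
The core of the conditional-replenishment scheme described above is a change detector that decides which parts of the image must be retransmitted. The sketch below illustrates that step; the 4x4 block size, the threshold value, and the use of nested lists as frames are illustrative assumptions, not details taken from the report.

```python
# Illustrative sketch of conditional replenishment: only blocks whose mean
# absolute difference from the previous frame exceeds a threshold are
# "transmitted" (block size and threshold are assumed, not from the study).

def changed_blocks(prev, curr, block=4, threshold=8.0):
    """Return indices of blocks whose mean absolute change exceeds threshold.

    prev, curr: 2-D lists (rows of pixel values) of equal size.
    """
    rows, cols = len(prev), len(prev[0])
    changed = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            diff = 0.0
            n = 0
            for i in range(r, min(r + block, rows)):
                for j in range(c, min(c + block, cols)):
                    diff += abs(curr[i][j] - prev[i][j])
                    n += 1
            if diff / n > threshold:
                changed.append((r, c))
    return changed

# A static frame with one moving 4x4 patch: only that block is retransmitted.
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
for i in range(4):
    for j in range(4):
        curr[i][j] = 100  # bright patch appears in the top-left block

print(changed_blocks(prev, curr))  # -> [(0, 0)]
```

When the change fraction exceeds the channel budget (the "one fourth of the image" case above), a real system must fall back to field or frame repeating, as the simulation did.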

  4. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H, Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of the prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  5. Effect of shock waves on the statistics and scaling in compressible isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Jianchun; Wan, Minping; Chen, Song; Xie, Chenyue; Chen, Shiyi

    2018-04-01

    The statistics and scaling of compressible isotropic turbulence in the presence of large-scale shock waves are investigated by using numerical simulations at turbulent Mach numbers Mt ranging from 0.30 to 0.65. The spectra of the compressible velocity component, density, pressure, and temperature exhibit a k^-2 scaling at different turbulent Mach numbers. The scaling exponents for structure functions of the compressible velocity component and thermodynamic variables are close to 1 at high orders n ≥ 3. The probability density functions of increments of the compressible velocity component and thermodynamic variables exhibit a power-law region with the exponent -2. Models for the conditional average of increments of the compressible velocity component and thermodynamic variables are developed based on the ideal shock relations and are verified by numerical simulations. The overall statistics of the compressible velocity component and thermodynamic variables are similar to one another at different turbulent Mach numbers. It is shown that the effect of shock waves on the compressible velocity spectrum and kinetic energy transfer is different from that of acoustic waves.

  6. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

    Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  7. Variable percolation threshold of composites with fiber fillers under compression

    NASA Astrophysics Data System (ADS)

    Lin, Chuan; Wang, Hongtao; Yang, Wei

    2010-07-01

    The piezoresistant effect in conducting fiber-filled composites has been studied by a continuum percolation model. Simulation was performed by a Monte Carlo method that took into account both the deformation-induced fiber bending and rotation. The percolation threshold was found to rise with the compression strain, which explains the observed positive piezoresistive coefficients in such composites. The simulations unveiled the effect of the microstructure evolution during deformation. The fibers are found to align perpendicular to the compression direction. As a fiber is bent, its effective length for making a conductive network is shortened. Both effects contribute to a larger percolation threshold and imply a positive piezoresistive coefficient according to the universal power law.
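
The connectivity question underlying such a continuum percolation model can be illustrated with a minimal stick-percolation sketch: fibers are straight segments, and the sample "conducts" when intersecting fibers form a chain spanning it from one wall to the other. This is only a toy version under simplifying assumptions (no fiber bending or compression-induced rotation, a proper-crossing intersection test, and a small union-find), not the paper's Monte Carlo model.

```python
# Toy continuum (stick) percolation: fibers are line segments, and the system
# "percolates" when a connected chain of intersecting fibers spans the box
# from the left wall to the right wall.

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """Proper-crossing test for segments p1p2 and p3p4 (collinear cases ignored)."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def percolates(fibers, width):
    """fibers: list of ((x1, y1), (x2, y2)); width: box extent in x."""
    n = len(fibers)
    parent = list(range(n + 2))      # extra nodes: n = left wall, n+1 = right wall

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (a, b) in enumerate(fibers):
        if min(a[0], b[0]) <= 0:
            union(i, n)              # fiber touches the left wall
        if max(a[0], b[0]) >= width:
            union(i, n + 1)          # fiber touches the right wall
        for j in range(i):
            c, d = fibers[j]
            if segments_intersect(a, b, c, d):
                union(i, j)
    return find(n) == find(n + 1)

# Two crossing fibers bridge a box of width 2; either one alone does not.
chain = [((0.0, 0.0), (1.2, 1.2)), ((0.8, 1.4), (2.0, 0.2))]
print(percolates(chain, 2.0), percolates(chain[:1], 2.0))
```

A Monte Carlo threshold estimate then amounts to sampling random fiber configurations at increasing densities and recording the density at which `percolates` starts returning True; compression would enter through the deformed fiber geometry.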

  8. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  9. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
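
The fixed-rate idea, i.e., every block costs the same number of bits regardless of content, can be illustrated with a toy block-floating-point quantizer. This is a simplification for illustration only: the scheme described above additionally uses a lifted orthogonal transform and embedded bit-plane coding, neither of which is reproduced here.

```python
import math

# Toy block-floating-point quantizer illustrating the fixed-rate idea: each
# block of values shares one exponent, and every value is stored with the same
# fixed number of bits, so all blocks cost the same space (enabling random
# access at block granularity). Real zfp adds a decorrelating transform and
# truncatable embedded coding on top of this.

def compress_block(values, bits=12):
    """Quantize a block to a shared exponent plus fixed-width integers."""
    top = max(abs(v) for v in values) or 1.0
    e = math.frexp(top)[1]                  # shared block exponent
    scale = 2.0 ** (bits - 1 - e)
    q = [int(round(v * scale)) for v in values]
    return e, q

def decompress_block(e, q, bits=12):
    scale = 2.0 ** (bits - 1 - e)
    return [v / scale for v in q]

block = [3.14159, -2.71828, 0.00057, 1.41421]
e, q = compress_block(block)
restored = decompress_block(e, q)
err = max(abs(a - b) for a, b in zip(block, restored))
# Worst-case error is half a quantization step: 2^(e - bits).
print(err < 2.0 ** (e - 12))
```

The bound printed at the end shows the defining trade-off: the error is controlled by the block's largest magnitude and the per-block bit budget, not by each value individually.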

  10. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
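
As a point of reference for the bits-per-base figures quoted above, plain 2-bits-per-base packing, the naive floor that repeat-aware schemes aim to beat, can be sketched as follows; the paper's variable bit codes for exact and reverse repeats are not reproduced here.

```python
# Baseline sketch: packing A/C/G/T into 2 bits per base. Repeat-aware schemes
# such as DNABIT Compress improve on this 2 bits/base floor by assigning
# shorter codes to repeated fragments (not shown here).

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack a DNA string into bytes, 4 bases per byte (length kept separately)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(seq[i:i + 4]))   # left-align a final short group
        out.append(byte)
    return bytes(out)

def unpack(data, n):
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(byte >> shift) & 3])
    return "".join(seq[:n])

seq = "ACGTACGTTTGA"
packed = pack(seq)
print(len(packed), unpack(packed, len(seq)) == seq)   # -> 3 True
```

Twelve bases fit in three bytes (2.0 bits/base), so the 1.58 bits/base reported above reflects the additional redundancy the repeat coding removes.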

  11. Combustion engine variable compression ratio apparatus and method

    DOEpatents

    Lawrence, Keith E [Peoria, IL]; Strawbridge, Bryan E [Dunlap, IL]; Dutart, Charles H [Washington, IL]

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  12. Direct compression of chitosan: process and formulation factors to improve powder flow and tablet performance.

    PubMed

    Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H

    2013-06-01

    Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application for the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve the manufacturing of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, which included bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. Moisture content of the chitosan powder, particle size and the inclusion of glidants had a pronounced effect on its flowability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin in the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables, as well as by applying a double fill compression process.

  13. Development and validation of a turbulent-mix model for variable-density and compressible flows.

    PubMed

    Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  14. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  15. Wave energy devices with compressible volumes.

    PubMed

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  16. Wave energy devices with compressible volumes

    PubMed Central

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-01-01

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s. PMID:25484609

  17. Is There Evidence that Runners can Benefit from Wearing Compression Clothing?

    PubMed

    Engel, Florian Azad; Holmberg, Hans-Christer; Sperlich, Billy

    2016-12-01

    Runners at various levels of performance and specializing in different events (from 800 m to marathons) wear compression socks, sleeves, shorts, and/or tights in an attempt to improve their performance and facilitate recovery. Recently, a number of publications reporting contradictory results with regard to the influence of compression garments in this context have appeared. To assess original research on the effects of compression clothing (socks, calf sleeves, shorts, and tights) on running performance and recovery. A computerized search of the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science was performed in September of 2015, and the relevant articles published in peer-reviewed journals were thus identified and rated using the Physiotherapy Evidence Database (PEDro) Scale. Studies examining effects on physiological, psychological, and/or biomechanical parameters during or after running were included, and means and measures of variability for the outcomes were employed to calculate Hedges' g effect size and associated 95 % confidence intervals for comparison of experimental (compression) and control (non-compression) trials. Compression garments exerted no statistically significant mean effects on running performance (times for a (half) marathon, 15-km trail running, 5- and 10-km runs, and 400-m sprint), maximal and submaximal oxygen uptake, blood lactate concentrations, blood gas kinetics, cardiac parameters (including heart rate, cardiac output, cardiac index, and stroke volume), body and perceived temperature, or the performance of strength-related tasks after running. Small positive effect sizes were calculated for the time to exhaustion (in incremental or step tests), running economy (including biomechanical variables), clearance of blood lactate, perceived exertion, maximal voluntary isometric contraction and peak leg muscle power immediately after running, and markers of muscle damage and inflammation. The body core temperature was moderately
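
The effect-size computation used in the review can be made concrete. The sketch below implements Hedges' g with the standard small-sample correction; the running-time numbers fed to it are invented for illustration and are not data from the review.

```python
import math

# Hedges' g: a bias-corrected standardized mean difference between two groups,
# as used to compare compression and non-compression trials. The pooled-SD and
# J-correction formulas are the standard ones; the sample data are made up.

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized mean difference between two groups."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled                # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)    # small-sample correction factor
    return j * d

# Hypothetical 10-km times (minutes): compression trials vs. control trials.
g = hedges_g(42.0, 3.0, 12, 43.0, 3.0, 12)
print(round(g, 3))  # -> -0.322 (a small negative effect: faster with compression)
```

By the usual rule of thumb, |g| around 0.2 is a small effect, which matches the "small positive effect sizes" language of the findings above.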

  18. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are the most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy C-means clustering algorithm performs better than the K-means algorithm.
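
The two-stage cluster-then-regress idea can be sketched compactly. The toy below uses a 1-D k-means and per-cluster ordinary least squares in place of the paper's fuzzy C-means and full concrete mix-design features; the synthetic data and the cluster count of two are assumptions for illustration.

```python
# Sketch of cluster regression: cluster the samples first, then fit a separate
# regression inside each cluster, so each regime gets its own line.

def kmeans_1d(xs, k=2, iters=20):
    """A tiny 1-D k-means (centers seeded at the data extremes)."""
    centers = [min(xs), max(xs)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def fit_line(pairs):
    """Ordinary least squares y = a + b*x; returns (a, b)."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return sy / n - b * sx / n, b

def cluster_regression(pairs, k=2):
    centers = kmeans_1d([x for x, _ in pairs], k)
    models = []
    for c in centers:
        members = [p for p in pairs
                   if abs(p[0] - c) == min(abs(p[0] - cc) for cc in centers)]
        models.append((c, fit_line(members)))
    return models

# Two regimes: the response grows fast for low x, slowly for high x.
data = [(x, 2.0 * x) for x in (1, 2, 3)] + [(x, 0.5 * x + 15) for x in (10, 11, 12)]
for center, (a, b) in cluster_regression(data):
    print(round(center, 2), round(a, 2), round(b, 2))
```

A single global line would split the difference between the two regimes; the per-cluster fits recover each regime's slope exactly, which is the error reduction the abstract describes.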

  19. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are the most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy C-means clustering algorithm performs better than the K-means algorithm. PMID:25374939

  20. Premixed autoignition in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  1. Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth Thomas

    A method of using fluid dynamics models for the generation of models that are usable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies, as a function of the sensor location, the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed-loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.

  2. Compression debarking of wood chips.

    Treesearch

    Rodger A. Arola; John R. Erickson

    1973-01-01

    Presents results from 2 years of testing a single-pass compression process for debarking wood chips of several species. The most significant variable was season of cut. Depending on species, approximately 70% of the bark was removed from wood cut in the growing season, while approximately 45% was removed from wood cut in the dormant season.

  3. Progress with lossy compression of data from the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
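
One concrete form of such a metric test is to quantize a field to an absolute error tolerance, compress it, and then verify both that the tolerance held and that an extreme value survived. The sketch below does this with stdlib zlib standing in for the compressors actually evaluated; the synthetic temperature-like field and the tolerance are invented for illustration.

```python
import struct
import zlib

# Sketch of one evaluation loop for lossy compression of climate fields:
# uniform scalar quantization with a guaranteed absolute error bound, followed
# by a general-purpose byte compressor, followed by checks that the tolerance
# held and that a spike (an "extreme value") was preserved.

def lossy_roundtrip(values, tol):
    """Quantize with max error <= tol (step = 2*tol), then zlib-compress."""
    q = [int(round(v / (2.0 * tol))) for v in values]
    payload = zlib.compress(struct.pack(f"{len(q)}i", *q))
    data = zlib.decompress(payload)
    restored = [i * 2.0 * tol for i in struct.unpack(f"{len(q)}i", data)]
    return restored, len(payload)

# A smooth temperature-like field (K) with one localized spike at index 500.
field = [280.0 + 0.01 * i + (5.0 if i == 500 else 0.0) for i in range(1000)]
tol = 0.05
restored, nbytes = lossy_roundtrip(field, tol)
max_err = max(abs(a - b) for a, b in zip(field, restored))
raw = len(struct.pack("1000d", *field))
print(max_err <= tol + 1e-9, nbytes < raw)   # tolerance held, data shrank
```

Gradient- and extremum-preservation checks of this kind are cheap to automate per variable, which is what makes a multi-method selection pipeline feasible.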

  4. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity, and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress

  5. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio), which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms, to perform the actual compression and uncompression of the FITS images.
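
    The tiling scheme can be sketched in a few lines. This is an illustration of the tile-and-compress idea only, not the FITS binary-table format: each tile is deflated independently and stored as one variable-length row, so tiles can be decompressed individually.

```python
import zlib
import numpy as np

def tile_compress(img, tile=(16, 16)):
    """Split an image into rectangular tiles and deflate each tile
    independently, mimicking the tiled-image convention in which each
    compressed tile occupies one variable-length table row."""
    th, tw = tile
    rows = []
    for r in range(0, img.shape[0], th):
        for c in range(0, img.shape[1], tw):
            block = np.ascontiguousarray(img[r:r+th, c:c+tw])
            rows.append((r, c, block.shape, zlib.compress(block.tobytes())))
    return rows

def tile_decompress(rows, shape, dtype):
    img = np.empty(shape, dtype=dtype)
    for r, c, bshape, payload in rows:
        block = np.frombuffer(zlib.decompress(payload), dtype=dtype).reshape(bshape)
        img[r:r+bshape[0], c:c+bshape[1]] = block
    return img

rng = np.random.default_rng(1)
img = rng.integers(0, 1024, size=(64, 80), dtype=np.int32)
rows = tile_compress(img)
recon = tile_decompress(rows, img.shape, img.dtype)
```

Because the algorithm here (deflate) is lossless, the round trip is exact, matching funpack's behavior for lossless tiles.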

  6. Compression map, functional groups and fossilization: A chemometric approach (Pennsylvanian neuropteroid foliage, Canada)

    USGS Publications Warehouse

    D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria

    2012-01-01

    Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle: the compression map. The objective is to study preservation variability on a larger scale, where observation of the transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared (FTIR) spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, ranging from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized-cuticles, noting that the pinnule midveins are preserved more like fossilized-cuticles. An overall trend from coalified pinnules towards fossilized-cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data, as higher contents of aromatic compounds occur in the visually more opaque upper location of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer a correspondence between transparency/opacity observations and chemical information, which correlates to varying degrees with the fossilization/coalification among pinnules. © 2011 Elsevier B.V.

  7. Dataset on predictive compressive strength model for self-compacting concrete.

    PubMed

    Ofuyatan, O M; Edeki, S O

    2018-04-01

    The determination of compressive strength is affected by many variables such as the water-cement (WC) ratio, the superplasticizer (SP), the aggregate combination, and the binder combination. In this dataset article, 7-, 28-, and 90-day compressive strength models are derived using statistical analysis. The response surface methodology is used to investigate the effect of the parameters (varying percentages of ash, cement, WC, and SP) on the hardened property of compressive strength at 7, 28 and 90 days. The levels of the independent parameters are determined based on preliminary experiments. The experimental values for compressive strength at 7, 28 and 90 days and modulus of elasticity under different treatment conditions are also discussed and presented. These datasets can effectively be used for modelling and prediction in concrete production settings.
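
    A response-surface strength model of the kind described can be sketched with ordinary least squares. Everything below is synthetic and assumed (hypothetical mix-design ranges and a made-up ground-truth relation); it only demonstrates fitting a quadratic surface to strength data, not the article's actual model.

```python
import numpy as np

# Hypothetical mix designs: columns are ash fraction, w/c ratio, SP dosage.
rng = np.random.default_rng(2)
X = rng.uniform([0.0, 0.35, 0.5], [0.3, 0.55, 2.0], size=(40, 3))
# Synthetic 28-day strengths (MPa) from an assumed ground-truth relation.
y = 60 - 40 * X[:, 1] - 10 * X[:, 0] + 3 * X[:, 2] + rng.normal(0, 0.5, 40)

# Quadratic response-surface design matrix: 1, x_i, x_i^2.
A = np.column_stack([np.ones(len(X)), X, X**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

Separate fits at 7, 28, and 90 days would yield the three age-specific models the dataset describes.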

  8. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
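
    The single-pass adaptive idea can be sketched simply. This is a simplification of the paper's algorithm (fixed-size vectors, a plain distance threshold rather than variable-shape entries): each block is coded by its nearest codeword, and a block that matches nothing well enough becomes a new codeword, so no training pass is needed.

```python
import numpy as np

def adaptive_vq(blocks, threshold):
    """Single-pass adaptive vector quantization: code each block by its
    nearest codeword; if none is within `threshold` (RMS distance),
    the block itself joins the codebook."""
    codebook, indices = [], []
    for b in blocks:
        if codebook:
            d = [np.sqrt(np.mean((b - c) ** 2)) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                indices.append(j)
                continue
        codebook.append(b)
        indices.append(len(codebook) - 1)
    return np.array(codebook), indices

rng = np.random.default_rng(3)
# An "image" made of four repeating 4x4 patterns plus mild noise.
patterns = rng.uniform(0, 255, size=(4, 16))
blocks = patterns[rng.integers(0, 4, 500)] + rng.normal(0, 1.0, (500, 16))
codebook, idx = adaptive_vq(blocks, threshold=10.0)
recon = codebook[idx]
rms = np.sqrt(np.mean((recon - blocks) ** 2))
```

The codebook grows only as new content appears, which is what lets such coders work with no prior knowledge of the data.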

  9. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  10. Contamination of hospital compressed air with nitric oxide: unwitting replacement therapy.

    PubMed

    Pinsky, M R; Genc, F; Lee, K H; Delgado, E

    1997-06-01

    Inhaled nitric oxide (NO) at levels between 5 and 80 ppm has been used experimentally to treat a variety of conditions. NO also is a common environmental air pollutant in industrial regions. As compressed hospital air is drawn from the local environment, we speculated that it may contain NO contamination, which, if present, would provide unwitting inhaled NO therapy to all subjects respiring this compressed gas. NO levels were measured twice daily from ambient hospital air and compressed gas sources driving positive pressure ventilation at two adjacent hospitals and compared with NO levels reported daily by local Environmental Protection Agency sources. An NO chemiluminescence analyzer (Sievers 270B; Boulder, Colo) sensitive to ≥2 parts per billion was used to measure NO levels in ambient air and compressed gas. NO levels in ambient air and hospital compressed air covaried from day to day, and absolute levels of NO differed between hospitals, with the difference never exceeding 1.4 ppm (range, 0 to 1.4 ppm; median, 0.07 ppm). The hospital with the highest usage level of compressed air had the highest levels of NO, which approximated ambient levels of NO. NO levels were lowest on weekends in both hospitals. We also documented inadvertent NO contamination in one hospital occurring over 5 days, which corresponded to welding activity near the intake port for fresh gas. This contamination resulted in system-wide NO levels of 5 to 8 ppm. Hospital compressed air contains highly variable levels of NO that tend to covary with ambient NO levels and to be highest when the rate of usage is high enough to preclude natural degradation of NO in 21% oxygen. Assuming that inhaled NO may alter gas exchange, pulmonary hemodynamics, and outcome from acute lung injury, the role of the unwitting, variable NO content of hospital compressed air needs to be evaluated.

  11. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
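
    The core trick of nudging indices that are adjacent in value can be sketched as below. This is a simplified illustration (plain LSB pairing on a stand-in index stream, not the patented key-pair table): each index is moved within the pair (2k, 2k+1) to carry one hidden bit, so it changes by at most one unit.

```python
import numpy as np

def embed_bits(indices, bits):
    """Embed one auxiliary bit per index by choosing between the
    adjacent-in-value pair (2k, 2k+1)."""
    out = indices.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit     # force the LSB to the payload bit
    return out

def extract_bits(indices, n):
    return [int(v & 1) for v in indices[:n]]

rng = np.random.default_rng(4)
indices = rng.integers(0, 256, size=64)      # stand-in quantizer output
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(indices, payload)
```

Because quantized indices already carry one unit of value uncertainty, this perturbation stays within the distortion the lossy codec has introduced anyway.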

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  13. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.
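
    The grammar transform can be illustrated with a toy digram-replacement scheme. This is a Re-Pair-style sketch, not the Yang-Kieffer irreducible grammar transform itself, but it shows the same principle: repeated substructure is factored into context-free rules, and the shortened sequence plus rules is what gets entropy-coded.

```python
from collections import Counter

def repair_grammar(seq, min_count=2):
    """Toy grammar transform: repeatedly replace the most frequent
    adjacent symbol pair with a fresh nonterminal rule."""
    seq = list(seq)
    rules = {}
    next_sym = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        nt = ("N", next_sym)          # a new grammar variable
        next_sym += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):           # left-to-right pair replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Recursively expand a symbol back to terminals (decompression)."""
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return [sym]

text = list("abcabcabcabc")
seq, rules = repair_grammar(text)
restored = [c for s in seq for c in expand(s, rules)]
```

The double-character symbols proposed in the paper serve the same purpose as the tuple nonterminals here: enlarging the usable set of grammar variables.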

  14. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the greater the time savings. In communication, data should be transmitted efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple versus single compression, which will help to identify better compression outputs and to develop compression algorithms.
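
    The single-versus-multiple comparison is easy to reproduce with standard library codecs. The sketch below (illustrative only; the paper's exact techniques and corpus are not specified here) compresses a repetitive text record once with each codec, then chains two codecs, which typically gains little because the first pass leaves near-random output.

```python
import bz2
import lzma
import zlib

# Repetitive ASCII records, a stand-in for lossless "business data".
data = (b"invoice,customer,amount,date\n"
        + b"1001,ACME,250.00,2013-12-01\n" * 400)

single = {
    "zlib": len(zlib.compress(data, 9)),
    "bz2": len(bz2.compress(data, 9)),
    "lzma": len(lzma.compress(data)),
}
# "Multi-compression": feed one compressor's output into another.
double = len(zlib.compress(bz2.compress(data, 9), 9))
```

Comparing `single` against `double` on a given corpus is exactly the kind of experiment the paper describes.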

  15. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
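
    The two ingredients of this approach, arranging a 1-D signal as a 2-D "image" and scoring distortion with the PRD, can be sketched as below. JPEG2000 is not in the standard library, so a uniform quantizer stands in for the lossy codec; the EMG trace is synthetic.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference, the distortion measure
    quoted for the EMG compression results."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

rng = np.random.default_rng(5)
emg = rng.normal(0, 1, 4096)        # stand-in for a 1-D EMG record
# Arrange the 1-D signal as a 2-D matrix before image coding.
img = emg.reshape(64, 64)
# Stand-in lossy codec: uniform quantization instead of JPEG2000.
step = 0.05
recon = np.round(img / step) * step
score = prd(emg, recon.ravel())
```

With a real image codec, the compression factor is set by the codec's rate control and the PRD is computed exactly as above.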

  16. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one and two dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bit-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
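
    A minimal 2-D DPCM predictor makes the redundancy argument concrete. The predictor below (mean of left and upper neighbours; an illustrative choice, not Beaudet's context-dependent rules) turns a smooth image into small residuals that entropy-code far more cheaply than the raw pixels.

```python
import zlib
import numpy as np

def dpcm_2d(img):
    """2-D DPCM residual: predict each pixel from its left and upper
    neighbours and keep only the prediction error."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2
    pred[0, 1:] = img[0, :-1]       # first row: left neighbour only
    pred[1:, 0] = img[:-1, 0]       # first column: upper neighbour only
    return img - pred

rng = np.random.default_rng(6)
# Smooth synthetic "image": a ramp plus small sensor noise.
y, x = np.mgrid[0:128, 0:128]
img = (x + y + rng.integers(0, 3, (128, 128))).astype(np.int32)
raw = zlib.compress(img.tobytes(), 9)
res = zlib.compress(dpcm_2d(img).tobytes(), 9)
```

The residual stream compresses to fewer bytes than the raw pixels, which is the effect the bits-per-pixel figures in the paper quantify.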

  17. Evaluation on Compressive Characteristics of Medical Stents Applied by Mesh Structures

    NASA Astrophysics Data System (ADS)

    Hirayama, Kazuki; He, Jianmei

    2017-11-01

    There are concerns about strength reduction and fatigue fracture due to stress concentration in currently used medical stents. To address these problems, meshed stents, to which mesh structures are applied, are of interest for achieving long life and high strength performance in medical stents. The purpose of this study is to design basic mesh shapes to obtain three-dimensional (3D) meshed stent models for mechanical property evaluation. The influence of the introduced design variables on the compressive characteristics of the meshed stent models is evaluated through finite element analysis using the ANSYS Workbench code. From the analytical results, the compressive stiffness changes periodically with compressive direction, so the mean value over compressive directions is introduced as the average compressive stiffness of meshed stents. Secondly, the compressive flexibility of meshed stents can be improved by increasing the angle in proportion to the arm length of the basic mesh shape. By increasing the number of basic mesh shapes arranged in the stent's circumferential direction, the compressive rigidity of the meshed stent tends to increase. Finally, reducing the mesh line width is found to be effective in improving the compressive flexibility of meshed stents.

  18. On the implicit density based OpenFOAM solver for turbulent compressible flows

    NASA Astrophysics Data System (ADS)

    Fürst, Jiří

    The contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel method (LU-SGS), whereas the turbulence model variables are updated using the implicit Euler method.
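
    The sweeping idea behind LU-SGS can be shown on a plain linear system. This sketch is only the linear-algebra kernel (a symmetric Gauss-Seidel iteration on a generic diagonally dominant matrix), not the flux-Jacobian splitting used in the actual CFD solver.

```python
import numpy as np

def sgs_solve(A, b, sweeps=50):
    """Symmetric Gauss-Seidel: one forward and one backward sweep per
    iteration, the lower/upper sweeping pattern LU-SGS is built on."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):                 # forward (lower) sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        for i in reversed(range(n)):       # backward (upper) sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant test system (guarantees convergence).
rng = np.random.default_rng(7)
A = rng.uniform(-1, 1, (8, 8))
A += np.diag(8.0 + np.abs(A).sum(axis=1))
b = rng.uniform(-1, 1, 8)
x = sgs_solve(A, b)
```

In the implicit solver, each sweep visits cells in (reverse) ordering and inverts only small block diagonals, which is what makes LU-SGS cheap per iteration.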

  19. Electromotive force in strongly compressible magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Yokoi, N.

    2017-12-01

    Variable density fluid turbulence is ubiquitous in geo-fluids, not to mention in astrophysics. Depending on the source of density variation, variable density fluid turbulence may be divided into two categories: the weak compressible (entropy mode) turbulence for slow flow and the strong compressible (acoustic mode) turbulence for fast flow. In the strong compressible turbulence, the pressure fluctuation induces a strong density fluctuation ρ', which is represented by the density variance <ρ'2> (<·> denotes the ensemble average). The turbulent effect on the large-scale magnetic-field B induction is represented by the turbulent electromotive force (EMF) <u'×b'> (u': velocity fluctuation, b': magnetic-field fluctuation). In the usual treatment in dynamo theory, the expression for the EMF has been obtained in the framework of incompressible or weakly compressible turbulence, where only the variation of the mean density <ρ>, if any, is taken into account. We see from the equation of the density fluctuation ρ' that the density variance <ρ'2> is generated by the large mean density variation ∇<ρ> coupled with the turbulent mass flux <ρ'u'>. This means that in the region where the mean density steeply changes, the density variance effect becomes relevant for the magnetic field evolution. This situation is typically the case for phenomena associated with shocks and compositional discontinuities. With the aid of the analytical theory of inhomogeneous compressible magnetohydrodynamic (MHD) turbulence, the expression for the turbulent electromotive force is investigated. It is shown that, among others, an obliqueness (misalignment) between the mean density gradient ∇<ρ> and the mean magnetic field B may contribute to the EMF as <u'×b'> ≈ χ B×∇<ρ>, with the turbulent transport coefficient χ proportional to the density variance (χ ∝ <ρ'2>). This density variance effect is expected to strongly affect the EMF near the interface, and to change the transport there.

  20. Compressibility effects on turbulent mixing

    NASA Astrophysics Data System (ADS)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence with a focus on the fundamental mechanisms that are responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We find that, as in shear layers, mixing is reduced as the Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions, we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  1. Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers.

    PubMed

    Lee, Seong-Cheol; Oh, Joung-Hwan; Cho, Jae-Yeol

    2015-03-27

    In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress-strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures.

  2. Compressive Behavior of Fiber-Reinforced Concrete with End-Hooked Steel Fibers

    PubMed Central

    Lee, Seong-Cheol; Oh, Joung-Hwan; Cho, Jae-Yeol

    2015-01-01

    In this paper, the compressive behavior of fiber-reinforced concrete with end-hooked steel fibers has been investigated through a uniaxial compression test in which the variables were concrete compressive strength, fiber volumetric ratio, and fiber aspect ratio (length to diameter). In order to minimize the effect of specimen size on fiber distribution, 48 cylinder specimens 150 mm in diameter and 300 mm in height were prepared and then subjected to uniaxial compression. From the test results, it was shown that steel fiber-reinforced concrete (SFRC) specimens exhibited ductile behavior after reaching their compressive strength. It was also shown that the strain at the compressive strength generally increased along with an increase in the fiber volumetric ratio and fiber aspect ratio, while the elastic modulus decreased. With consideration for the effect of steel fibers, a model for the stress–strain relationship of SFRC under compression is proposed here. Simple formulae to predict the strain at the compressive strength and the elastic modulus of SFRC were developed as well. The proposed model and formulae will be useful for realistic predictions of the structural behavior of SFRC members or structures. PMID:28788011

  3. The compressed work week as organizational change: behavioral and attitudinal outcomes.

    PubMed

    Ronen, S; Primps, S B

    1981-01-01

    The results from recent studies on the compressed work week have been compiled and categorized in order to provide some basis for generalizing the effects of the work schedule on employee attitudes and behavior. It appears that attitudes toward the compressed week are favorable, with some generalization to job attitudes. Performance outcomes are ambiguous, although there are no reported decreases; fatigue seems to be the only negative aspect of the longer day. An examination of mediating variables suggests more complex relationships between the implementation of the compressed work week and potential outcomes. These relationships are described and directions are indicated for future research.

  4. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  5. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  8. Efficiency at Sorting Cards in Compressed Air

    PubMed Central

    Poulton, E. C.; Catton, M. J.; Carpenter, A.

    1964-01-01

    At a site where compressed air was being used in the construction of a tunnel, 34 men sorted cards twice, once at normal atmospheric pressure and once at 3½, 2½, or 2 atmospheres absolute pressure. An additional six men sorted cards twice at normal atmospheric pressure. When the task was carried out for the first time, all the groups of men performing at raised pressure were found to yield a reliably greater proportion of very slow responses than the group of men performing at normal pressure. There was reliably more variability in timing at 3½ and 2½ atmospheres absolute than at normal pressure. At 3½ atmospheres absolute the average performance was also reliably slower. When the task was carried out for the second time, exposure to 3½ atmospheres absolute pressure had no reliable effect. Thus compressed air affected performance only while the task was being learnt; it had little effect after practice. No reliable differences were found related to age, to length of experience in compressed air, or to the duration of the exposure to compressed air, which was never less than 10 minutes at 3½ atmospheres absolute pressure. PMID:14180485

  9. Time-compressed speech test in the elderly.

    PubMed

    Arceno, Rayana Silva; Scharlach, Renata Coelho

    2017-09-28

    The present study aimed to evaluate the performance of elderly people in the time-compressed speech test according to the variables of ear and order of presentation, and to analyze the types of errors made by the volunteers. This is an observational, descriptive, quantitative, analytical and primary cross-sectional study involving 22 elderly people with normal hearing or mild sensorineural hearing loss between the ages of 60 and 80. The elderly were submitted to the time-compressed speech test with a compression ratio of 60%, using the electromechanical time compression method. A list of 50 disyllables was applied to each ear, with the starting side chosen at random. With regard to test performance, the elderly fell short relative to adults, and there was no statistical difference between the ears. Statistical evidence was found of better performance for the second ear tested. The most frequently mistaken words were those beginning with the phonemes /p/ and /d/. The presence of a consonant cluster in a word also increased the occurrence of mistakes. The elderly perform worse in the auditory closure ability when assessed by the time-compressed speech test compared to adults. This result suggests that elderly people have difficulty recognizing speech when it is pronounced at faster rates. Therefore, strategies must be used to facilitate the communicative process, regardless of the presence of hearing loss.

  10. Application of PDF methods to compressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.
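
    The Lagrangian/Monte Carlo machinery can be illustrated with the simplest velocity SDE. The sketch below is only an Ornstein-Uhlenbeck (simplified Langevin) model for a particle velocity ensemble, not the compressible second-order closure of the paper: particles relax toward a stationary velocity PDF whose variance is set by the model coefficients.

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt = 20000, 0.01          # particle count and time step
T_L, sigma2 = 1.0, 2.0       # relaxation time scale and target variance

u = rng.normal(0, 3.0, n)    # ensemble starts far from equilibrium
for _ in range(1000):        # integrate the SDE to t = 10 (many T_L)
    dW = rng.normal(0, np.sqrt(dt), n)
    # du = -(u / T_L) dt + sqrt(2 sigma2 / T_L) dW  (Euler-Maruyama)
    u += -(u / T_L) * dt + np.sqrt(2.0 * sigma2 / T_L) * dW
var = np.var(u)
```

Moments such as the mean pressure in the paper are likewise computed directly from particle properties, with no separate finite-volume solver.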

  11. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
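
    Of the lossless methods surveyed above, the variable-length bit codes are easy to demonstrate. Below is a minimal Huffman code builder (a sketch, not the commercial implementations compared in the article): frequent symbols get short codes, rare symbols long ones, and no code is a prefix of another.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                   # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tiebreak, {symbol: code-suffix}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

data = b"abracadabra"
codes = huffman_codes(data)
bits = "".join(codes[s] for s in data)
```

For these frequencies the coded stream is 23 bits against 88 bits of raw ASCII; applying such a code to DPCM residuals rather than raw pixels is what made the transform-then-code combination in the article effective.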

  12. Compressible Turbulence

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1997-06-01

    We present a model to treat fully compressible, nonlocal, time-dependent turbulent convection in the presence of large-scale flows and arbitrary density stratification. The problem is of interest, for example, in stellar pulsation problems, especially since accurate helioseismological data are now available, as well as in accretion disks. Owing to the difficulties in formulating an analytical model, it is not surprising that most of the work has gone into numerical simulations. At present, there are three analytical models: one by the author, which leads to a rather complicated set of equations; one by Yoshizawa; and one by Xiong. The latter two use a Reynolds stress model together with phenomenological relations with adjustable parameters whose determination on the basis of terrestrial flows does not guarantee that they may be extrapolated to astrophysical flows. Moreover, all third-order moments representing nonlocality are taken to be of the down gradient form (which in the case of the planetary boundary layer yields incorrect results). In addition, correlations among pressure, temperature, and velocities are often neglected or treated as in the incompressible case. To avoid phenomenological relations, we derive the full set of dynamic, time-dependent, nonlocal equations to describe all mean variables, second- and third-order moments. Closures are carried out at the fourth order following standard procedures in turbulence modeling. The equations are collected in an Appendix. 
Some of the novelties of the treatment are (1) new flux conservation law that includes the large-scale flow, (2) increase of the rate of dissipation of turbulent kinetic energy owing to compressibility and thus (3) a smaller overshooting, and (4) a new source of mean temperature due to compressibility; moreover, contrary to some phenomenological suggestions, the adiabatic temperature gradient depends only on the thermal pressure, while in the equation for the large-scale flow, the physical

  13. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in DNA sequencing rates is significantly outstripping the rate of increase in disk storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
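    The redundancy such algorithms exploit starts with the four-letter alphabet itself: 2 bits per base instead of 8. The sketch below is a baseline illustration of that bound (plain bit-packing, not SeqCompress's statistical model or arithmetic coder):

```python
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
BASE = 'ACGT'

def pack_dna(seq):
    """Pack a DNA string at 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))  # left-align a partial final chunk
        out.append(b)
    return len(seq), bytes(out)

def unpack_dna(n, data):
    """Recover the original string; n marks where the padding starts."""
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 3])
    return ''.join(bases[:n])
```

    A statistical model plus arithmetic coding, as in SeqCompress, then squeezes out the remaining redundancy between neighbouring bases.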

  14. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
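    The central trick, nudging indices that are adjacent in value, can be illustrated with a deliberately simplified parity scheme (hypothetical code for illustration, not the patented method):

```python
def embed_bits(indices, bits):
    """Carry one auxiliary bit per index by adjusting its parity.
    Stepping to an adjacent index stays within the one-unit
    uncertainty that lossy quantization already introduces."""
    out = list(indices)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1  # move to the adjacent index value
    return out

def extract_bits(indices, n):
    """The authorized user recovers the payload from index parities."""
    return [idx % 2 for idx in indices[:n]]
```

    Each index moves by at most one unit, so the perturbation hides inside the quantization noise while remaining trivially recoverable.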

  15. Comprehensive numerical methodology for direct numerical simulations of compressible Rayleigh-Taylor instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reckinger, Scott James; Livescu, Daniel; Vasilyev, Oleg V.

    A comprehensive numerical methodology has been developed that handles the challenges introduced by considering the compressive nature of Rayleigh-Taylor instability (RTI) systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The computational framework is used to simulate two-dimensional single-mode RTI to extreme late-times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.

  16. Continuous direct compression as manufacturing platform for sustained release tablets.

    PubMed

    Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C

    2017-03-15

    This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low-density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. Inline NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and to monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e. feeder fluctuation dampening) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing against tablet quality is critical for Methocel CR. Reformulating with the directly compressible Methocel DC2 as matrix former, however, vastly improved tablet quality. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability driven by a higher compressibility, as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. 
Hence controlling thickness during continuous manufacturing of

  17. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  18. Deconstructed transverse mass variables

    DOE PAGES

    Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; ...

    2015-04-02

    Traditional searches for R-parity conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables, which are designed to preserve information on both the norm and direction of the missing momentum. Here, we demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets, and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed spectra parameter space beyond the region probed by traditional methods. The parameter space can further be expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we discuss how these variables allow for novel searches of single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search seeking to maximize signal separation from the background when the signal has undetected particles in the final state.
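    For orientation, the conventional variable collapses the missing momentum into a single number, whereas the deconstructed approach keeps its norm and direction as separate observables. A minimal sketch of that distinction (generic collider formulas, not the paper's exact variable definitions):

```python
import math

def transverse_mass(pt_lep, phi_lep, met, phi_met):
    """Standard transverse mass of a lepton + missing-momentum system."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(phi_lep - phi_met)))

def deconstructed(met, phi_lep, phi_met):
    """Keep the norm of the missing momentum and its azimuthal angle to
    the lepton as two separate observables instead of one combined cut."""
    dphi = abs(math.remainder(phi_lep - phi_met, 2.0 * math.pi))
    return met, dphi
```

    A lepton and missing momentum that are back to back (dphi = pi) maximise the transverse mass, while the same event carries more discriminating information when met and dphi are cut on separately.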

  19. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
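    The first method can be sketched as follows: the residuals of a simple predictor already form a texture signature, so images can be compared without full decompression (illustrative code using a left-neighbour predictor; the paper's predictors may differ):

```python
def residual_histogram(img):
    """Histogram of prediction residuals (left-neighbour predictor),
    normalised to a probability distribution; this doubles as a
    compact textural descriptor of the image."""
    hist = [0] * 511  # residuals of 8-bit pixels lie in [-255, 255]
    for row in img:
        prev = 0
        for p in row:
            hist[(p - prev) + 255] += 1
            prev = p
    total = sum(hist) or 1
    return [h / total for h in hist]

def l1_distance(h1, h2):
    """Simple dissimilarity measure between two residual histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

    A smooth image concentrates its residuals near zero while a high-contrast texture spreads them out, so the histogram distance separates the two without decoding pixel values.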

  20. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  1. An efficient and extensible approach for compressing phylogenetic trees.

    PubMed

    Matthews, Suzanne J; Williams, Tiffani L

    2011-10-18

    Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.

  2. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that must be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
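    The prediction idea behind such schemes can be sketched with the simplest data-dependent predictor, the previous value, XORing IEEE-754 bit patterns so that no bit of precision is ever lost (an illustrative sketch, not the authors' predictor or entropy coder):

```python
import math
import struct
import zlib

def xor_residuals(values):
    """Predict each double by its predecessor and XOR the IEEE-754 bit
    patterns; on smooth data the high-order bytes of each residual are
    mostly zero, which a back-end entropy coder can exploit."""
    prev, out = 0, bytearray()
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]
        out += struct.pack('<Q', bits ^ prev)
        prev = bits
    return bytes(out)

def undo_residuals(data):
    """Exactly invert the transform: no quantization, no precision loss."""
    prev, out = 0, []
    for i in range(0, len(data), 8):
        bits = struct.unpack('<Q', data[i:i + 8])[0] ^ prev
        out.append(struct.unpack('<d', struct.pack('<Q', bits))[0])
        prev = bits
    return out

# A smooth time series, as produced by many simulations.
series = [math.sin(i / 100.0) for i in range(4096)]
raw = b''.join(struct.pack('<d', v) for v in series)
resid = xor_residuals(series)
```

    The round trip is bit-exact, and on smooth data the residual stream compresses better than the raw doubles (here checked with zlib as a stand-in entropy coder).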

  3. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has been established, alongside the Fourier transform, as a data-processing method in analytical fields. Its main fields of application are related to de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency excursions that carry information about the molecule. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations and other properties of a sample can be analyzed easily. RS is a powerful analytical tool for detection and identification, and many RS databases exist; however, Raman spectral data require large storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra data of some benzene-series compounds. The obtained results show that the energy retained is as high as 99.9% after compression, while the percentage of zero coefficients is 87.50%. It is concluded that the wavelet packet transform is significant for compressing RS data.
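    The thresholding step behind the reported energy-retained and zero-percentage figures can be illustrated with a full-depth orthonormal Haar transform (a simplified sketch on a synthetic signal; the paper applies the wavelet packet transform to real Raman spectra, so the numbers differ):

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar transform."""
    h = len(x) // 2
    avg = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(h)]
    det = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(h)]
    return avg, det

def wavelet_compress(signal, threshold):
    """Full-depth Haar decomposition, then zero the small coefficients.
    Returns (coefficients, energy retained, fraction of zeros)."""
    approx, coeffs = list(signal), []
    while len(approx) > 1:
        approx, det = haar_step(approx)
        coeffs = det + coeffs
    coeffs = approx + coeffs
    kept = [c if abs(c) >= threshold else 0.0 for c in coeffs]
    e_all = sum(c * c for c in coeffs)
    e_kept = sum(c * c for c in kept)
    return kept, e_kept / e_all, kept.count(0.0) / len(kept)

# Smooth synthetic "spectrum": one period of a sine, length a power of two.
sig = [math.sin(2 * math.pi * i / 1024) for i in range(1024)]
_, energy, zeros = wavelet_compress(sig, 0.05)
```

    Because the transform is orthonormal, the energy of the coefficients equals the energy of the signal, so the retained-energy figure directly measures reconstruction fidelity.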

  4. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling as well as the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods will be shown. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.

  6. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  7. ECG compression using non-recursive wavelet transform with quality control

    NASA Astrophysics Data System (ADS)

    Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching

    2016-09-01

    While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, an SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of its many-to-one mapping, this scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) for high compression ratio (CR), the second a quadratic curve fitting for linear distortion control, and the third a fuzzy decision-making step for minimising the data dependency effect and selecting the optimal SQ. Two databases, Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of training data and can facilitate rapid error control.
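    The appeal of single-variable control is that a uniform scalar quantiser bounds every sample's reconstruction error by half its step size, so one variable sets the distortion. A minimal sketch (illustrative only; the paper applies SQ to RRO-NRDPWT coefficients with GA-optimised quantisers):

```python
def sq_encode(signal, step):
    """Uniform scalar quantisation: the step size is the single
    design variable."""
    return [round(x / step) for x in signal]

def sq_decode(codes, step):
    """Reconstruction; the error per sample is at most step / 2."""
    return [c * step for c in codes]
```

    Halving the step size halves the worst-case error (at the cost of larger indices to entropy-code), which is the kind of direct distortion control that many-to-one multilevel quantiser selection lacks.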

  8. A Priori Analysis of a Compressible Flamelet Model using RANS Data for a Dual-Mode Scramjet Combustor

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Drozda, Tomasz G.; McDaniel, James C.; Lacaze, Guilhem; Oefelein, Joseph

    2015-01-01

    In an effort to make large eddy simulation of hydrocarbon-fueled scramjet combustors more computationally accessible using realistic chemical reaction mechanisms, a compressible flamelet/progress variable (FPV) model was proposed that extends current FPV model formulations to high-speed, compressible flows. Development of this model relied on observations garnered from an a priori analysis of the Reynolds-Averaged Navier-Stokes (RANS) data obtained for the Hypersonic International Flight Research and Experimentation (HI-FiRE) dual-mode scramjet combustor. The RANS data were obtained using a reduced chemical mechanism for the combustion of a JP-7 surrogate and were validated using available experimental data. These RANS data were then post-processed to obtain, in an a priori fashion, the scalar fields corresponding to an FPV-based modeling approach. In the current work, in addition to the proposed compressible flamelet model, a standard incompressible FPV model was also considered. Several candidate progress variables were investigated for their ability to recover static temperature and major and minor product species. The effects of pressure and temperature on the tabulated progress variable source term were characterized, and model coupling terms embedded in the Reynolds-averaged Navier-Stokes equations were studied. Finally, results for the novel compressible flamelet/progress variable model were presented to demonstrate the improvement attained by modeling the effects of pressure and flamelet boundary conditions on the combustion.

  9. Particular mechanism for continuously varying the compression ratio for an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Raţiu, S.; Cătălinoiu, R.; Alexa, V.; Miklos, I.; Cioată, V.

    2018-01-01

    Variable compression ratio (VCR) is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. The paper presents a particular mechanism that allows the position of the top dead centre to be changed while the position of the bottom dead centre remains fixed. The kinematics of the mechanism is studied and its trajectories are graphically represented for different positions of operation.

  10. On Fully Developed Channel Flows: Some Solutions and Limitations, and Effects of Compressibility, Variable Properties, and Body Forces

    NASA Technical Reports Server (NTRS)

    Maslen, Stephen H.

    1959-01-01

    An examination of the effects of compressibility, variable properties, and body forces on fully developed laminar flow has indicated several limitations on such streams. In the absence of a pressure gradient, but presence of a body force (e.g., gravity), an exact fully developed gas flow results. For a liquid this follows also for the case of a constant streamwise pressure gradient. These motions are exact in the sense of a Couette flow. In the liquid case two solutions (not a new result) can occur for the same boundary conditions. An approximate analytic solution was found which agrees closely with machine calculations. In the case of approximately exact flows, it turns out that for large temperature variations across the channel the effects of convection (due to, say, a wall temperature gradient) and frictional heating must be negligible. In such a case the energy and momentum equations are separated, and the solutions are readily obtained. If the temperature variations are small, then both convection effects and frictional heating can consistently be considered. This case becomes the constant-property incompressible case (or quasi-incompressible case for free-convection flows) considered by many authors. Finally there is a brief discussion of cases wherein streamwise variations of all quantities are allowed, but only in such a form that the independent variables are separable. For the case where the streamwise velocity varies inversely as the square root of the distance along the channel, a solution is given.

  11. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques can compress all types of data without loss of information, but at a cost in complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different-size data files are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
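    Of the algorithms evaluated, the Lempel-Ziv family is the easier to sketch. The textbook LZW encoder/decoder below builds its dictionary adaptively from the data with no prior model, which is the defining property of a universal scheme (illustrative; a real coder would also pack the output codes at more than eight bits each):

```python
def lzw_encode(data):
    """LZW: emit a code for the longest known prefix, then extend the
    dictionary with that prefix plus the next byte."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b'', []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes):
    """Rebuild the same dictionary on the fly; the w + w[:1] branch
    handles the classic code-not-yet-in-table case."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b''.join(out)
```

    On repetitive input the code stream is much shorter than the input, and decoding needs no side information because both ends grow identical dictionaries.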

  12. Effects of formulation variables and post-compression curing on drug release from a new sustained-release matrix material: polyvinylacetate-povidone.

    PubMed

    Shao, Z J; Farooqi, M I; Diaz, S; Krishna, A K; Muhammad, N A

    2001-01-01

    A new commercially available sustained-release matrix material, Kollidon SR, composed of polyvinylacetate and povidone, was evaluated with respect to its ability to modulate the in vitro release of a highly water-soluble model compound, diphenhydramine HCl. Kollidon SR was found to provide a sustained-release effect for the model compound, with certain formulation and processing variables playing an important role in controlling its release kinetics. Formulation variables affecting the release include the level of the polymeric material in the matrix, excipient level, as well as the nature of the excipients (water soluble vs. water insoluble). Increasing the ratio of a water-insoluble excipient, Emcompress, to Kollidon SR enhanced drug release. The incorporation of a water-soluble excipient, lactose, accelerated its release rate in a more pronounced manner. Stability studies conducted at 40 degrees C/75% RH revealed a slow-down in dissolution rate for the drug-Kollidon SR formulation, as a result of polyvinylacetate relaxation. Further studies demonstrated that a post-compression curing step effectively stabilized the release pattern of formulations containing ≥47% Kollidon SR. The release mechanism of Kollidon-drug and drug-Kollidon-Emcompress formulations appears to be diffusion controlled, while that of the drug-Kollidon-lactose formulation appears to be controlled predominantly by diffusion along with erosion.

  13. Are Compression Stockings an Effective Treatment for Orthostatic Presyncope?

    PubMed Central

    Protheroe, Clare Louise; Dikareva, Anastasia; Menon, Carlo; Claydon, Victoria Elizabeth

    2011-01-01

    Background Syncope, or fainting, affects approximately 6.2% of the population, and is associated with significant comorbidity. Many syncopal events occur secondary to excessive venous pooling and capillary filtration in the lower limbs when upright. As such, a common approach to the management of syncope is the use of compression stockings. However, research confirming their efficacy is lacking. We aimed to investigate the effect of graded calf compression stockings on orthostatic tolerance. Methodology/Principal Findings We evaluated orthostatic tolerance (OT) and haemodynamic control in 15 healthy volunteers wearing graded calf compression stockings compared to two placebo stockings in a randomized, cross-over, double-blind fashion. OT (time to presyncope, min) was determined using combined head-upright tilting and lower body negative pressure applied until presyncope. Throughout testing we continuously monitored beat-to-beat blood pressures, heart rate, stroke volume and cardiac output (finger plethysmography), cerebral and forearm blood flow velocities (Doppler ultrasound) and breath-by-breath end tidal gases. There were no significant differences in OT between compression stocking (26.0±2.3 min) and calf (29.3±2.4 min) or ankle (27.6±3.1 min) placebo conditions. Cardiovascular, cerebral and respiratory responses were similar in all conditions. The efficacy of compression stockings was related to anthropometric parameters, and could be predicted by a model based on the subject's calf circumference and shoe size (r = 0.780, p = 0.004). Conclusions/Significance These data question the use of calf compression stockings for orthostatic intolerance and highlight the need for individualised therapy accounting for anthropometric variables when considering treatment with compression stockings. PMID:22194814

  14. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, or data dependence between, the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.
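The error-multiplication effect described above can be sketched with a toy delta-coded stream (an illustrative example, not any actual spacecraft codec): a single corrupted value in the dependent data throws off every subsequent reconstructed sample until the next re-initialization point.

```python
def delta_decode(deltas):
    """Reconstruct samples from first-order differences (each sample
    depends on all earlier values, as within a compressed block)."""
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out

clean = delta_decode([5, 1, 1, 1, 1])    # [5, 6, 7, 8, 9]
# One corrupted delta (9 instead of 1): every later sample is wrong,
# which is why frequent re-initialization points limit the damage.
corrupt = delta_decode([5, 1, 9, 1, 1])  # [5, 6, 15, 16, 17]
```

This is the motivation for trading compression ratio against the frequency of re-initialization points mentioned in the abstract.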

  15. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
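The pattern-matching core that 2D-PMC extends to image blocks can be sketched in one dimension with a minimal Lempel-Ziv 77 parser (a toy illustration, not the authors' codec; the window size and match threshold are arbitrary choices here):

```python
def lz77_tokens(data, window=16):
    """Greedy LZ77 parse: emit (offset, length) matches against a sliding
    window of previously seen data, or literals when no useful match exists."""
    i, tokens = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 2:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz77_decode(tokens):
    """Replay literals and back-references (overlapping copies allowed)."""
    out = []
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])
    return "".join(out)
```

2D-PMC replaces the 1D window with a 2D search region and accepts approximate matches up to a distortion bound, which is what makes the scheme lossy.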

  16. Confinement and controlling the effective compressive stiffness of carbyne

    NASA Astrophysics Data System (ADS)

    Kocsis, Ashley J.; Aditya Reddy Yedama, Neta; Cranford, Steven W.

    2014-08-01

Carbyne is a one-dimensional chain of carbon atoms, consisting of repeating sp-hybridized groups, thereby representing a minimalist molecular rod or chain. While exhibiting exemplary mechanical properties in tension (a 1D modulus on the order of 313 nN and a strength on the order of 11 nN), its use as a structural component at the molecular scale is limited due to its relative weakness in compression and the immediate onset of buckling under load. To circumvent this effect, here, we probe the effect of confinement to enhance the mechanical behavior of carbyne chains in compression. Through full atomistic molecular dynamics, we characterize the mechanical properties of a free (unconfined) chain and explore the effect of confinement radius (R), free chain length (L) and temperature (T) on the effective compressive stiffness of carbyne chains and demonstrate that the stiffness can be tuned over an order of magnitude (from approximately 0.54 kcal mol^-1 Å^-2 to 46 kcal mol^-1 Å^-2) by geometric control. Confinement may inherently stabilize the chains, potentially providing a platform for the synthesis of extraordinarily long chains (tens of nanometers) with variable compressive response.

  17. An efficient and extensible approach for compressing phylogenetic trees

    PubMed Central

    2011-01-01

Background Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip-compressed Newick files, and without loss of space savings. Conclusions TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. PMID:22165819
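The rotation-immunity TreeZip achieves (and 7zip lacks) comes from encoding tree topology rather than the raw Newick string. A minimal sketch of the idea, using recursive child sorting on nested tuples rather than TreeZip's actual bipartition-based encoding:

```python
def canonical(tree):
    """Return a canonical string for a tree given as nested tuples of
    leaf-name strings. Sorting children at every level means two
    branch-rotated representations of the same tree map to one string,
    so a downstream compressor sees identical input for both."""
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(sorted(canonical(c) for c in tree)) + ")"

# Two rotations of the same topology collapse to one representation:
a = canonical(("A", ("B", "C")))
b = canonical((("C", "B"), "A"))
```

A byte-level compressor such as 7zip applied to raw Newick strings would treat `a`'s and `b`'s source texts as different data, which is the variability the abstract describes.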

  18. Distributed Relaxation Multigrid and Defect Correction Applied to the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Diskin, B.; Brandt, A.

    1999-01-01

The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.

  19. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
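The subband decomposition that WVQ quantizes can be illustrated with one level of the 1D Haar transform, the simplest DWT (a sketch for intuition only; the paper's actual wavelet and VQ parameters are not specified here):

```python
def haar1d(x):
    """One level of the 1D Haar DWT: pairwise averages (low-pass subband)
    and pairwise differences (high-pass subband). Assumes even length."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

def haar1d_inv(avg, det):
    """Exact inverse: each pair is (average + detail, average - detail)."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

avg, det = haar1d([4, 2, 6, 6])  # avg = [3.0, 6.0], det = [1.0, 0.0]
```

Smooth data concentrates energy in the `avg` subband while `det` stays near zero, which is why per-subband quantizer allocation (as in WVQ) pays off.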

  20. Extension of lattice Boltzmann flux solver for simulation of compressible multi-component flows

    NASA Astrophysics Data System (ADS)

    Yang, Li-Ming; Shu, Chang; Yang, Wen-Ming; Wang, Yan

    2018-05-01

The lattice Boltzmann flux solver (LBFS), which was presented by Shu and his coworkers for solving compressible fluid flow problems, is extended to simulate compressible multi-component flows in this work. To solve the two-phase gas-liquid problems, the model equations with stiffened gas equation of state are adopted. In this model, two additional non-conservative equations are introduced to represent the material interfaces, apart from the classical Euler equations. We first convert the interface equations into the full conservative form by applying the mass equation. After that, we calculate the numerical fluxes of the classical Euler equations by the existing LBFS and the numerical fluxes of the interface equations by the passive scalar approach. Once all the numerical fluxes at the cell interface are obtained, the conservative variables at cell centers can be updated by marching the equations in time and the material interfaces can be identified via the distributions of the additional variables. The numerical accuracy and stability of the present scheme are validated by its application to several compressible multi-component fluid flow problems.

  1. Fracture Gap Reduction With Variable-Pitch Headless Screws.

    PubMed

    Roebke, Austin J; Roebke, Logan J; Goyal, Kanu S

    2018-04-01

    Fully threaded, variable-pitch, headless screws are used in many settings in surgery and have been extensively studied in this context, especially in regard to scaphoid fractures. However, it is not well understood how screw parameters such as diameter, length, and pitch variation, as well as technique parameters such as depth of drilling, affect gap closure. Acutrak 2 fully threaded variable-pitch headless screws of various diameters (Standard, Mini, and Micro) and lengths (16-28 mm) were inserted into polyurethane blocks of "normal" and "osteoporotic" bone model densities using a custom jig. Three drilling techniques (drill only through first block, 4 mm into second block, or completely through both blocks) were used. During screw insertion, fluoroscopic images were taken and later analyzed to measure gap reduction. The effect of backing the screw out after compression was evaluated. Drilling at least 4 mm past the fracture site reduces distal fragment push-off compared with drilling only through the proximal fragment. There were no significant differences in gap closure in the normal versus the osteoporotic model. The Micro screw had a smaller gap closure than both the Standard and the Mini screws. After block contact and compression with 2 subsequent full forward turns, backing the screw out by only 1 full turn resulted in gapping between the blocks. Intuitively, fully threaded headless variable-pitch screws can obtain compression between bone fragments only if the initial gap is less than the gap closed. Gap closure may be affected by drilling technique, screw size, and screw length. Fragment compression may be immediately lost if the screw is reversed. We describe characteristics of variable-pitch headless screws that may assist the surgeon in screw choice and method of use. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
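The compression mechanism of a variable-pitch screw reduces to simple arithmetic: each full turn draws the fragments together by the difference between the pitches engaging the two fragments. The figures below are purely illustrative, not Acutrak specifications:

```python
# Hypothetical thread pitches, chosen only to illustrate the mechanism.
leading_pitch_mm = 1.4    # pitch of the thread engaging the far fragment
trailing_pitch_mm = 1.2   # pitch of the thread engaging the near fragment
turns = 5

# Per turn, the far fragment advances by the leading pitch while the
# near fragment advances by the trailing pitch; the gap closes by the
# difference. Reversing the screw reopens the gap by the same amount.
gap_closure_mm = turns * (leading_pitch_mm - trailing_pitch_mm)
```

This also makes the abstract's last observation intuitive: compression is only achieved while the initial gap is smaller than the total closure available, and any back-out immediately gives some of it up.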

  2. Study on the application of the time-compressed speech in children.

    PubMed

    Padilha, Fernanda Yasmin Odila Maestri Miguel; Pinheiro, Maria Madalena Canina

    2017-11-09

    To analyze the performance of children without alteration of central auditory processing in the Time-compressed Speech Test. This is a descriptive, observational, cross-sectional study. Study participants were 22 children aged 7-11 years without central auditory processing disorders. The following instruments were used to assess whether these children presented central auditory processing disorders: Scale of Auditory Behaviors, simplified evaluation of central auditory processing, and Dichotic Test of Digits (binaural integration stage). The Time-compressed Speech Test was applied to the children without auditory changes. The participants presented better performance in the list of monosyllabic words than in the list of disyllabic words, but with no statistically significant difference. No influence on test performance was observed with respect to order of presentation of the lists and the variables gender and ear. Regarding age, difference in performance was observed only in the list of disyllabic words. The mean score of children in the Time-compressed Speech Test was lower than that of adults reported in the national literature. Difference in test performance was observed only with respect to the age variable for the list of disyllabic words. No difference was observed in the order of presentation of the lists or in the type of stimulus.

  3. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
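The XOR-leading-zero optimization rests on the fact that nearby floating-point values share a long prefix of identical bits, so their XOR starts with many zeros that need not be stored. A small sketch of the measurement itself (illustrative only; the paper's shifting-offset search is omitted):

```python
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    """Count leading zero bits in the XOR of two IEEE-754 doubles.
    More leading zeros = a longer shared bit prefix = a shorter
    residual to encode for the second value."""
    ia = struct.unpack(">Q", struct.pack(">d", a))[0]
    ib = struct.unpack(">Q", struct.pack(">d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

# Consecutive, mutually close values share a long prefix;
# distant values diverge almost immediately.
close = xor_leading_zeros(1.0000001, 1.0000002)
far = xor_leading_zeros(1.0, 1000.0)
```

This is why partitioning the data into segments of mutually close values, as the abstract describes, directly improves the compression factor.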

  4. Clinical trials needed to evaluate compression therapy in breast cancer related lymphedema (BCRL). Proposals from an expert group.

    PubMed

    Partsch, H; Stout, N; Forner-Cordero, I; Flour, M; Moffatt, C; Szuba, A; Milic, D; Szolnoky, G; Brorson, H; Abel, M; Schuren, J; Schingale, F; Vignes, S; Piller, N; Döller, W

    2010-10-01

A mainstay of lymphedema management involves the use of compression therapy. Compression therapy application is variable at different levels of disease severity. Evidence is scant to direct clinicians in best practice regarding compression therapy use. Further, compression clinical trials are fragmented and poorly extrapolable to the greater population. An ideal construct for conducting clinical trials in regard to compression therapy will promote parallel global initiatives based on a standard research agenda. The purpose of this article is to review current evidence in practice regarding compression therapy for BCRL management and, based on this evidence, offer an expert consensus recommendation for a research agenda and prescriptive trials. Recommendations herein focus solely on compression interventions. This document represents the proceedings of a session organized by the International Compression Club (ICC) in June 2009 in Ponzano (Veneto, Italy). The purpose of the meeting was to enable a group of experts to discuss the existing evidence for compression treatment in breast cancer related lymphedema (BCRL), concentrating on areas where randomized controlled trials (RCTs) are lacking. The current body of research suggests efficacy of compression interventions in the treatment and management of lymphedema. However, studies to date have failed to adequately address various forms of compression therapy and their optimal application in BCRL. We offer recommendations for standardized compression research trials for prophylaxis of arm lymphedema and for the management of chronic BCRL. Suggestions are also made regarding inclusion and exclusion criteria, measurement methodology, and additional variables of interest for researchers to capture. This document should inform future research trials in compression therapy and serve as a guide to clinical researchers, industry researchers and lymphologists regarding the strengths, weaknesses and shortcomings of the current

  5. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE PAGES

    Li, Yang; Chen, Zhangxing; Xu, Hongyi; ...

    2017-01-02

Compression molded SMC, composed of chopped carbon fiber and resin polymer, balances mechanical performance and manufacturing cost and presents a promising solution for vehicle lightweight strategy. However, the performance of the SMC molded parts highly depends on the compression molding process and local microstructure, which greatly increases the cost for the part level performance testing and elongates the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of the SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated, especially for chopped fiber systems. In this study, SMC plaques are prepared through the compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. The influence of the processing conditions on the fiber orientation of the SMC plaque is also discussed. It is found that Autodesk Moldflow can generally achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies remain between predicted variables and experimental results.

  6. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang; Chen, Zhangxing; Xu, Hongyi

Compression molded SMC, composed of chopped carbon fiber and resin polymer, balances mechanical performance and manufacturing cost and presents a promising solution for vehicle lightweight strategy. However, the performance of the SMC molded parts highly depends on the compression molding process and local microstructure, which greatly increases the cost for the part level performance testing and elongates the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of the SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated, especially for chopped fiber systems. In this study, SMC plaques are prepared through the compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. The influence of the processing conditions on the fiber orientation of the SMC plaque is also discussed. It is found that Autodesk Moldflow can generally achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies remain between predicted variables and experimental results.

  7. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
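Of the lossless coders compared above, Rice coding is the easiest to sketch: a value is split into a unary quotient and a k-bit remainder, so the small DPCM deltas typical of imagery get short codes while large deltas grow only linearly (a textbook sketch, not the ATARS bitstream format):

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code of a nonnegative integer n with parameter k:
    unary-coded quotient (n >> k), a '0' terminator, then the
    k-bit binary remainder. Signed deltas would first be zigzag-mapped."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

code_small = rice_encode(3, 2)   # typical small DPCM delta
code_large = rice_encode(19, 2)  # rarer large delta
```

The noise sensitivity the abstract reports follows directly: noisy imagery produces larger, less predictable deltas, pushing more values into the long-quotient regime.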

  8. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidovits, Seth; Fisch, Nathaniel J.

Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.

  9. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-11-14

Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.
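The viscosity scaling behind sudden dissipation can be made concrete under textbook assumptions (ideal gas, adiabatic 3D compression with γ = 5/3, and a fixed ionization state): T ∝ L^-2, so μ ~ T^(5/2) ∝ L^-5.

```python
def viscosity_growth(L_ratio: float) -> float:
    """Factor by which plasma viscosity grows when each linear dimension
    is compressed by L_ratio, under the illustrative assumptions above
    (gamma = 5/3 adiabatic heating, mu ~ T^(5/2), fixed ionization)."""
    T_ratio = L_ratio ** -2   # adiabatic 3D compression: T scales as L^-2
    return T_ratio ** 2.5     # mu ~ T^(5/2)

growth = viscosity_growth(0.5)  # halving each linear dimension
```

Halving each dimension multiplies viscosity by 32 in this idealization, which is the steep growth that eventually overwhelms the turbulence; increasing ionization during compression weakens this scaling, as the abstract describes.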

  10. Apparatus for measuring tensile and compressive properties of solid materials at cryogenic temperatures

    DOEpatents

    Gonczy, J.D.; Markley, F.W.; McCaw, W.R.; Niemann, R.C.

    1992-04-21

    An apparatus for evaluating the tensile and compressive properties of material samples at very low or cryogenic temperatures employs a stationary frame and a dewar mounted below the frame. A pair of coaxial cylindrical tubes extend downward towards the bottom of the dewar. A compressive or tensile load is generated hydraulically and is transmitted by the inner tube to the material sample. The material sample is located near the bottom of the dewar in a liquid refrigerant bath. The apparatus employs a displacement measuring device, such as a linear variable differential transformer, to measure the deformation of the material sample relative to the amount of compressive or tensile force applied to the sample. 7 figs.

  11. Apparatus for measuring tensile and compressive properties of solid materials at cryogenic temperatures

    DOEpatents

    Gonczy, John D.; Markley, Finley W.; McCaw, William R.; Niemann, Ralph C.

    1992-01-01

    An apparatus for evaluating the tensile and compressive properties of material samples at very low or cryogenic temperatures employs a stationary frame and a dewar mounted below the frame. A pair of coaxial cylindrical tubes extend downward towards the bottom of the dewar. A compressive or tensile load is generated hydraulically and is transmitted by the inner tube to the material sample. The material sample is located near the bottom of the dewar in a liquid refrigerant bath. The apparatus employs a displacement measuring device, such as a linear variable differential transformer, to measure the deformation of the material sample relative to the amount of compressive or tensile force applied to the sample.

  12. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed; the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups; each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for representing the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
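The significance-map stage can be sketched as follows (a simplification: the threshold here is fixed, whereas the paper derives it from a desired energy packing efficiency per coefficient group):

```python
def significance_map_rle(coeffs, threshold):
    """Split coefficients into a binary significance map plus the list of
    significant values, then run-length encode the map as
    [bit, run_length] pairs. Long zero runs are what make this cheap."""
    bitmap = [1 if abs(c) >= threshold else 0 for c in coeffs]
    significant = [c for c in coeffs if abs(c) >= threshold]
    runs = []
    for b in bitmap:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs, significant

runs, sig = significance_map_rle([0.9, 0.02, 0.01, 0.5, 0.0, 0.0], 0.1)
```

The decoder reverses the process: expand the runs back into the bitmap, then place the significant values at the 1-positions and zeros elsewhere.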

  13. Application of a Reynolds stress turbulence model to the compressible shear layer

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Balakrishnan, L.

    1990-01-01

    Theoretically based turbulence models have had success in predicting many features of incompressible, free shear layers. However, attempts to extend these models to the high-speed, compressible shear layer have been less effective. In the present work, the compressible shear layer was studied with a second-order turbulence closure, which initially used only variable density extensions of incompressible models for the Reynolds stress transport equation and the dissipation rate transport equation. The quasi-incompressible closure was unsuccessful; the predicted effect of the convective Mach number on the shear layer growth rate was significantly smaller than that observed in experiments. Having thus confirmed that compressibility effects have to be explicitly considered, a new model for the compressible dissipation was introduced into the closure. This model is based on a low Mach number, asymptotic analysis of the Navier-Stokes equations, and on direct numerical simulation of compressible, isotropic turbulence. The use of the new model for the compressible dissipation led to good agreement of the computed growth rates with the experimental data. Both the computations and the experiments indicate a dramatic reduction in the growth rate when the convective Mach number is increased. Experimental data on the normalized maximum turbulence intensities and shear stress also show a reduction with increasing Mach number.

  14. Compressibility effects in the shear layer over a rectangular cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beresh, Steven J.; Wagner, Justin L.; Casper, Katya M.

    2016-10-26

We studied the influence of compressibility on the shear layer over a rectangular cavity of variable width in a free stream Mach number range of 0.6–2.5 using particle image velocimetry data in the streamwise centre plane. As the Mach number increases, the vertical component of the turbulence intensity diminishes modestly in the widest cavity, but the two narrower cavities show a more substantial drop in all three components as well as the turbulent shear stress. This contrasts with canonical free shear layers, which show significant reductions in only the vertical component and the turbulent shear stress due to compressibility. The vorticity thickness of the cavity shear layer grows rapidly as it initially develops, then transitions to a slower growth rate once its instability saturates. When normalized by their estimated incompressible values, the growth rates prior to saturation display the classic compressibility effect of suppression as the convective Mach number rises, in excellent agreement with comparable free shear layer data. The specific trend of the reduction in growth rate due to compressibility is modified by the cavity width.

  15. Effect of compressive force on PEM fuel cell performance

    NASA Astrophysics Data System (ADS)

    MacDonald, Colin Stephen

Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to widespread PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field toward this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling by allowing for validation of computational schemes. Compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects. The force can manipulate the ability to deliver reactants and the electrochemical functions of the cell by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression, in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) Sealing and surface contact; (2) Pressure drop across the flow channel; (3) Porosity of the GDL. Each factor was analysed independently in order to determine the individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in

  16. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
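
To make the DCT-based approach concrete, here is a minimal, stdlib-only sketch of transform coding on one 8-sample block: an orthonormal DCT-II, a crude lossy step that zeroes small high-frequency coefficients, and the inverse transform. The threshold and sample values are illustrative, not taken from the technique described above.

```python
import math

def dct(block):
    """Orthonormal DCT-II of a short sample block (naive, for illustration)."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)))
    return out

def idct(coeffs):
    """Inverse transform (DCT-III) recovering the samples."""
    N = len(coeffs)
    return [coeffs[0] / math.sqrt(N)
            + sum(math.sqrt(2.0 / N) * coeffs[k]
                  * math.cos(math.pi * (n + 0.5) * k / N)
                  for k in range(1, N))
            for n in range(N)]

# One 8-pixel slice of a hypothetical image row.
samples = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]
coeffs = dct(samples)
# Lossy step: zero small high-frequency coefficients (crude quantization).
kept = [c if abs(c) > 5.0 else 0.0 for c in coeffs]
approx = idct(kept)
```

Round-tripping without the lossy step recovers the samples exactly; with it, the block is approximated from fewer coefficients, which is the essence of DCT compression (JPEG adds quantization tables and entropy coding on top).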

  17. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  18. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design, development and performance of the pulse compression radar altimeter is described. The high resolution breadboard system is designed to operate from an aircraft at 10 Kft above the ocean and to accurately measure altitude, sea wave height and sea reflectivity. The minicomputer controlled Ku band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns or 9.25 cm. A second order altitude tracker, aided by accelerometer inputs is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.

  19. The pointwise estimates of diffusion wave of the compressible micropolar fluids

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Wang, Weike

    2018-09-01

    The pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum, as in the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equations with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use the decomposition of the momenta into a fluid part and an electromagnetic part to study three smaller Green's matrices. A consequence of this decomposition is that we must deal with a new problem: the nonlinear terms contain nonlocal operators. We solve it by using the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive the different pointwise estimates for different unknown variables, such that the estimate of each unknown variable is in agreement with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.

  20. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  1. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Compressed file sizes are reduced by roughly half on average for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
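
A toy version of the foveation filter described above, assuming a simple box blur whose radius grows stepwise with distance from a single foveation centre. The step size and radius cap are invented parameters, not the authors' settings.

```python
def foveate(frame, cx, cy, max_radius=2):
    """Box-blur each pixel with a radius that grows stepwise with distance
    from the foveation centre (cx, cy). The 3-pixel step and radius cap
    are invented parameters."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            r = min(max_radius, int(d // 3))  # blur radius grows with distance
            vals = [frame[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

frame = [[0.0] * 9 for _ in range(9)]
frame[0][8] = 9.0               # a bright pixel far from the fovea
blurred = foveate(frame, 4, 4)  # fovea at the frame centre
```

Pixels near the fovea pass through unchanged while peripheral detail is smoothed away, so a subsequent encoder spends fewer bits on the periphery.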

  2. The present state and future direction of second order closure models for compressible flows

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.; Sarkar, Sutanu; Speziale, Charles G.

    1992-01-01

    The topics are presented in viewgraph form and include: (1) Reynolds stress closure models; (2) Favre averages and governing equations; (3) the model for the deviatoric part of the pressure-strain rate correlation; (4) the SSG pressure-strain correlation model; (5) a compressible turbulent dissipation rate model; (6) variable viscosity effects; (7) near-wall stiffness problems; (8) models of the Reynolds mass and heat flux; and (9) a numerical solution of the compressible turbulent transport equation.

  3. Effect of multilayer high-compression bandaging on ankle range of motion and oxygen cost of walking

    PubMed Central

    Roaldsen, K S; Elfving, B; Stanghelle, J K; Mattsson, E

    2012-01-01

    Objective To evaluate the effects of multilayer high-compression bandaging on ankle range of motion, oxygen consumption and subjective walking ability in healthy subjects. Method A volunteer sample of 22 healthy subjects (10 women and 12 men; aged 67 [63–83] years) were studied. The intervention included treadmill-walking at self-selected speed with and without multilayer high-compression bandaging (Profore®), in random order. The primary outcome variables were ankle range of motion, oxygen consumption and subjective walking ability. Results Total ankle range of motion decreased 4% with compression. No change in oxygen cost of walking was observed. Less than half the subjects reported that walking-shoe comfort or walking distance was negatively affected. Conclusion Ankle range of motion decreased with compression but could probably be counteracted with a regular exercise programme. There were no indications that walking with compression was more exhausting than walking without. Appropriate walking shoes could seem important to secure gait efficiency when using compression garments. PMID:21810941

  4. Knock-Limited Performance of Triptane and 28-R Fuel Blends as Affected by Changes in Compression Ratio and in Engine Operating Variables

    NASA Technical Reports Server (NTRS)

    Brun, Rinaldo J.; Feder, Melvin S.; Fisher, William F.

    1947-01-01

    A knock-limited performance investigation was conducted on blends of triptane and 28-R fuel with a 12-cylinder, V-type, liquid-cooled aircraft engine of 1710-cubic-inch displacement at three compression ratios: 6.65, 7.93, and 9.68. At each compression ratio, the effects of changes in temperature of the inlet air to the auxiliary-stage supercharger and in fuel-air ratio were investigated at engine speeds of 2280 and 3000 rpm. The results show that knock-limited engine performance, as improved by the use of triptane, allowed operation at both take-off and cruising power at a compression ratio of 9.68. At an inlet-air temperature of 60 deg F, an engine speed of 3000 rpm, and a fuel-air ratio of 0.095 (approximately take-off conditions), a knock-limited engine output of 1500 brake horsepower was possible with 100-percent 28-R fuel at a compression ratio of 6.65; 20-percent triptane was required for the same power output at a compression ratio of 7.93, and 75 percent at a compression ratio of 9.68 allowed an output of 1480 brake horsepower. Knock-limited power output was more sensitive to changes in fuel-air ratio as the engine speed was increased from 2280 to 3000 rpm, as the compression ratio was raised from 6.65 to 9.68, or as the inlet-air temperature was raised from 0 deg to 120 deg F.

  5. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  6. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K means classified image
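
The snippet above pairs quadtree coding with Lempel-Ziv coding and compares ratios on classified versus unclassified images. The flavour of such measurements can be reproduced with DEFLATE (which combines LZ77, a Lempel-Ziv variant, with Huffman coding) on structured versus noisy inputs; the test data below are invented.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Original size over DEFLATE-compressed size (DEFLATE = LZ77 + Huffman)."""
    return len(data) / len(zlib.compress(data, 9))

structured = b"0123456789" * 1000  # repetitive, LZ-friendly (like coded map output)
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(4096))  # essentially incompressible

ratio_structured = compression_ratio(structured)
ratio_noisy = compression_ratio(noisy)
```

Highly structured data yields large ratios while noise-like data barely compresses, mirroring the 7:1 versus 1.4:1 contrast reported in the entry.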

  7. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  8. Hidden negative linear compressibility in lithium l-tartrate.

    PubMed

    Yeung, Hamish H-M; Kilmurray, Rebecca; Hobday, Claire L; McKellar, Scott C; Cheetham, Anthony K; Allan, David R; Moggach, Stephen A

    2017-02-01

    By decoupling the mechanical behaviour of building units for the first time in a wine-rack framework containing two different strut types, we show that lithium l-tartrate exhibits NLC with a maximum value, K_max = −21 TPa⁻¹, and an overall NLC capacity, χ_NLC = 5.1%, that are comparable to the most exceptional materials to date. Furthermore, the contributions from molecular strut compression and angle opening interplay to give rise to so-called "hidden" negative linear compressibility, in which NLC is absent at ambient pressure, switched on at 2 GPa and sustained up to the limit of our experiment, 5.5 GPa. Analysis of the changes in crystal structure using variable-pressure synchrotron X-ray diffraction reveals new chemical and geometrical design rules to assist the discovery of other materials with exciting hidden anomalous mechanical properties.
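
For reference, the linear compressibility along an axis ℓ is conventionally defined as below (the standard definition, not quoted from the paper); negative values correspond to NLC, i.e. the axis lengthens under hydrostatic pressure, with units of TPa⁻¹ as in the abstract.

```latex
K_{\ell} = -\frac{1}{\ell}\left(\frac{\partial \ell}{\partial P}\right)_{T}
```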

  9. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  10. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  11. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences, but only expand them in size. On the other hand, CTW (Context Tree Weighting Method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known. One is called palindromes or reverse complements, and the other is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences are available to it. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it with a length and a distance. By using this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
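
The two DNA structures mentioned above, reverse complements (palindromes) and repeats, can be located with a k-mer hash table, loosely mirroring the hashing step the authors describe (their method also uses dynamic programming for approximate matches, which this sketch omits). The sequence and k are invented.

```python
def reverse_complement(s: str) -> str:
    """Reverse complement of a DNA string (A<->T, G<->C, reversed)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[c] for c in reversed(s))

def find_palindrome(seq: str, k: int):
    """Return positions (i, j) such that seq[j:j+k] is the reverse
    complement of seq[i:i+k], found via a k-mer hash table; None if
    no such exact pair exists."""
    seen = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        rc = reverse_complement(kmer)
        if rc in seen:
            return seen[rc], i
        seen[kmer] = i
    return None

seq = "AACCGGATCCGGTT"  # invented sequence containing a reverse-complement pair
```

Once such a pair is found, an encoder can emit (length, distance) instead of the raw symbols, which is how these structures buy compression below two bits per base.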

  12. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down spread of digital and tele-echocardiography. We validated echocardiographic video high-grade compression with the new Motion Pictures Expert Groups (MPEG)-4 algorithms with a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12 to 83 MB to 0.03 to 2.3 MB (1:1051-1:26 reduction ratios). Mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of mean score. Our study supports use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  13. An investigation of the compressive strength of Kevlar 49/epoxy composites

    NASA Technical Reports Server (NTRS)

    Kulkarni, S. V.; Rosen, B. W.; Rice, J. S.

    1975-01-01

    Tests were performed to evaluate the effect of a wide range of variables including matrix properties, interface properties, fiber prestressing, secondary reinforcement, and others on the ultimate compressive strength of Kevlar 49/epoxy composites. Scanning electron microscopy is used to assess the resulting failure surfaces. In addition, a theoretical study is conducted to determine the influence of fiber anisotropy and lack of perfect bond between fiber and matrix on the shear mode microbuckling. The experimental evaluation of the effect of various constituent and process characteristics on the behavior of these unidirectional composites in compression did not reveal any substantial increase in strength. However, theoretical evaluations indicate that the high degree of fiber anisotropy results in a significant drop in the predicted stress level for internal instability. Scanning electron microscope data analysis suggests that internal fiber failure and smooth surface debonding could be responsible for the measured low compressive strengths.

  14. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  15. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  16. Management-oriented analysis of sediment yield time compression

    NASA Astrophysics Data System (ADS)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

    Understanding the inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield often depends heavily on the occurrence of a few large events that produce the majority of the sediment, such as the Mediterranean. This phenomenon is referred to as time compression, and its relevance grows with the increasing magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand substantial analysis in terms of data volume, required data precision, and methods. In order to provide an alternative simplified approach, monthly and yearly time compression was evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27,100 Mg·km⁻²·a⁻¹, and the monthly sediment yield between 0 and ~11,600 Mg·km⁻²·month⁻¹. The catchments' sediment yield was unequally distributed at the inter- and intra-annual scale, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield; (ii) low time compression of rainfall and runoff, but high compression of sediment yield; (iii) low compression of rainfall and high compression of runoff and sediment yield; and (iv) low, medium and high compression of rainfall, runoff and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the latter two were present. This implies that high sediment yields occurred in
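
A minimal index of the time compression discussed above is the share of total yield delivered by the few largest events. The sketch below computes that share for a hypothetical "compressed" year versus a uniform one; the yields are invented, not values from the R-OSMed catchments.

```python
def time_compression(event_yields, top_n=3):
    """Share of total sediment yield delivered by the top_n largest events;
    values near 1 mean a few events dominate the annual budget."""
    total = sum(event_yields)
    if total == 0:
        return 0.0
    return sum(sorted(event_yields, reverse=True)[:top_n]) / total

compressed_year = [950.0, 420.0, 130.0, 5.0, 3.0, 2.0, 1.0]  # Mg km^-2 per event
uniform_year = [215.0] * 7
```

For the compressed year the top three events carry over 99% of the yield, whereas the uniform year gives exactly 3/7, so the index separates the patterns the abstract describes.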

  17. CSAM: Compressed SAM format.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2016-12-15

    Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data, while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format; and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary information: supplementary data are available at Bioinformatics online.

  18. Effects of Local Compression on Peroneal Nerve Function in Humans

    NASA Technical Reports Server (NTRS)

    Hargens, Alan R.; Botte, Michael J.; Swenson, Michael R.; Gelberman, Richard H.; Rhoades, Charles E.; Akeson, Wayne H.

    1993-01-01

    A new apparatus was developed to compress the anterior compartment selectively and reproducibly in humans. Thirty-five normal volunteers were studied to determine short-term thresholds of local tissue pressure that produce significant neuromuscular dysfunction. Local tissue fluid pressure adjacent to the deep peroneal nerve was elevated by the compression apparatus and continuously monitored for 2-3 h by the slit catheter technique. Elevation of tissue fluid pressure to within 35-40 mm Hg of diastolic blood pressure (approx. 40 mm Hg of in situ pressure in our subjects) elicited a consistent progression of neuromuscular deterioration including, in order, (a) gradual loss of sensation, as assessed by Semmes-Weinstein monofilaments, (b) subjective complaints, (c) reduced nerve conduction velocity, (d) decreased action potential amplitude of the extensor digitorum brevis muscle, and (e) motor weakness of muscles within the anterior compartment. Generally, higher intracompartmental pressures caused more rapid deterioration of neuromuscular function. In two subjects, when in situ compression levels were 0 and 30 mm Hg, normal neuromuscular function was maintained for 3 h. Threshold pressures for significant dysfunction were not always the same for each functional parameter studied, and the magnitudes of each functional deficit did not always correlate with compression level. This variable tolerance to elevated pressure emphasizes the need to monitor clinical signs and symptoms carefully in the diagnosis of compartment syndromes. The nature of the present studies was short term; longer term compression of myoneural tissues may result in dysfunction at lower pressure thresholds.

  19. Fatigue life of additively manufactured Ti6Al4V scaffolds under tension-tension, tension-compression and compression-compression fatigue load.

    PubMed

    Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht

    2018-03-21

    Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.
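
The load regimes above are defined by the stress ratio R = σ_min/σ_max. A small helper (assuming σ_max ≠ 0; the stress values below are hypothetical) shows how R = 0.1, R = -1 and R = 10 map onto mean stress and amplitude, and why only the fully reversed case has zero mean stress.

```python
def fatigue_cycle(s_min, s_max):
    """Stress ratio R, mean stress and amplitude of one load cycle
    (assumes s_max != 0)."""
    return {
        "R": s_min / s_max,
        "mean": (s_max + s_min) / 2.0,
        "amplitude": (s_max - s_min) / 2.0,
    }

tension_tension = fatigue_cycle(10.0, 100.0)            # R = 0.1, tensile mean
fully_reversed = fatigue_cycle(-100.0, 100.0)           # R = -1, zero mean
compression_compression = fatigue_cycle(-100.0, -10.0)  # R = 10, compressive mean
```

The abstract's observation that R = 0.1 and R = 10 global loading both shorten life relative to R = -1 follows once strut-level stresses are considered: bending near the nodes turns even globally compressive cycles into locally tensile ones with a nonzero mean tensile stress.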

  20. Modeling Compressibility Effects in High-Speed Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Sarkar, S.

    2004-01-01

    Man has strived to make objects fly faster, first from subsonic to supersonic and then to hypersonic speeds. Spacecraft and high-speed missiles routinely fly at hypersonic Mach numbers, M greater than 5. In defense applications, aircraft reach hypersonic speeds at high altitude and so may civilian aircraft in the future. Hypersonic flight, while presenting opportunities, has formidable challenges that have spurred vigorous research and development, mainly by NASA and the Air Force in the USA. Although NASP, the premier hypersonic concept of the eighties and early nineties, did not lead to flight demonstration, much basic research and technology development was possible. There is renewed interest in supersonic and hypersonic flight with the HyTech program of the Air Force and the Hyper-X program at NASA being examples of current thrusts in the field. At high-subsonic to supersonic speeds, fluid compressibility becomes increasingly important in the turbulent boundary layers and shear layers associated with the flow around aerospace vehicles. Changes in thermodynamic variables: density, temperature and pressure, interact strongly with the underlying vortical, turbulent flow. The ensuing changes to the flow may be qualitative such as shocks which have no incompressible counterpart, or quantitative such as the reduction of skin friction with Mach number, large heat transfer rates due to viscous heating, and the dramatic reduction of fuel/oxidant mixing at high convective Mach number. The peculiarities of compressible turbulence, so-called compressibility effects, have been reviewed by Fernholz and Finley. Predictions of aerodynamic performance in high-speed applications require accurate computational modeling of these "compressibility effects" on turbulence. During the course of the project we have made fundamental advances in modeling the pressure-strain correlation and developed a code to evaluate alternate turbulence models in the compressible shear layer.

  1. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Benton, Nathanael; Burns, Patrick

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  2. Expiratory rib cage compression in mechanically ventilated adults: systematic review with meta-analysis

    PubMed Central

    Borges, Lúcia Faria; Saraiva, Mateus Sasso; Saraiva, Marcos Ariel Sasso; Macagnan, Fabrício Edler; Kessler, Adriana

    2017-01-01

    Objective To review the literature on the effects of expiratory rib cage compression on ventilatory mechanics, airway clearance, and oxygen and hemodynamic indices in mechanically ventilated adults. Methods Systematic review with meta-analysis of randomized clinical trials in the databases MEDLINE (via PubMed), EMBASE, Cochrane CENTRAL, PEDro, and LILACS. Studies on adult patients hospitalized in intensive care units and under mechanical ventilation that analyzed the effects of expiratory rib cage compression with respect to a control group (without expiratory rib cage compression) and evaluated the outcomes static and dynamic compliance, sputum volume, systolic blood pressure, diastolic blood pressure, mean arterial pressure, heart rate, peripheral oxygen saturation, and ratio of arterial oxygen partial pressure to fraction of inspired oxygen were included. Experimental studies with animals and those with incomplete data were excluded. Results The search strategy produced 5,816 studies, of which only three randomized crossover trials were included, totaling 93 patients. With respect to the outcome of heart rate, values were reduced in the expiratory rib cage compression group compared with the control group [-2.81 bpm (95% confidence interval [95%CI]: -4.73 to 0.89; I2: 0%)]. Regarding dynamic compliance, there was no significant difference between groups [-0.58mL/cmH2O (95%CI: -2.98 to 1.82; I2: 1%)]. Regarding the variables systolic blood pressure and diastolic blood pressure, significant differences were found after descriptive evaluation. However, there was no difference between groups regarding the variables secretion volume, static compliance, ratio of arterial oxygen partial pressure to fraction of inspired oxygen, and peripheral oxygen saturation. Conclusion There is a lack of evidence to support the use of expiratory rib cage compression in routine care, given that the literature on this topic offers low methodological quality and is inconclusive.

  3. Subjective evaluation of mobile 3D video content: depth range versus compression artifacts

    NASA Astrophysics Data System (ADS)

    Jumisko-Pyykkö, Satu; Haustola, Tomi; Boev, Atanas; Gotchev, Atanas

    2011-02-01

    Mobile 3D television is a new form of media experience that combines the freedom of mobility with the greater realism of presenting visual scenes in 3D. Achieving this combination is challenging, as a greater viewing experience has to be achieved with the limited resources of the mobile delivery channel, such as limited bandwidth and a power-constrained handheld player. This challenge creates a need for tight optimization of the overall mobile 3DTV system. The presence of depth and compression artifacts in the played 3D video are two major factors that influence the viewer's subjective quality of experience and satisfaction. The primary goal of this study has been to examine the influence of varying depth and compression artifacts on the subjective quality of experience for mobile 3D video content. In addition, the influence of the studied variables on simulator sickness symptoms has been studied, and a vocabulary-based descriptive quality-of-experience evaluation has been conducted for a subset of variables in order to understand the perceptual characteristics in detail. In the experiment, 30 participants evaluated the overall quality of different 3D video contents with varying depth ranges, compressed with varying quantization parameters. The test video content was presented on a portable autostereoscopic LCD display with a horizontal double-density pixel arrangement. The results of the psychometric study indicate that compression artifacts are a dominant factor in determining the quality of experience compared to varying depth range. More specifically, content with strong compression was rejected by the viewers and deemed unacceptable. The results of the descriptive study confirm the dominance of visible spatial artifacts, along with the added value of depth for artifact-free content. The level of visual discomfort was determined to be not offending.

  4. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2018-07-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. 
The heat release from combustion provided thermal compression in the combustor, further

  5. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. 
The heat release from combustion provided thermal compression in the combustor, further

  6. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.

  7. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  8. On the Suitability of Suffix Arrays for Lempel-Ziv Data Compression

    NASA Astrophysics Data System (ADS)

    Ferreira, Artur J.; Oliveira, Arlindo L.; Figueiredo, Mário A. T.

    Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
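
    As an illustration of the dictionary-search idea (a toy sketch, not the authors' algorithm), the following Python code builds a naive suffix array over the already-seen text and narrows the sorted suffix range one character at a time to find the longest previous match for the lookahead buffer. A real encoder would use an O(n log n) construction and binary search within the range.

```python
def suffix_array(text):
    # Toy O(n^2 log n) construction: sort suffix start positions lexicographically.
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, pattern):
    """Longest prefix of `pattern` occurring in `text`; returns (pos, length)."""
    best_pos, best_len = -1, 0
    lo, hi = 0, len(sa)
    for k, ch in enumerate(pattern):
        # Within [lo, hi) all suffixes share pattern[:k]; their k-th characters
        # appear in sorted order, so the suffixes matching ch form one run.
        while lo < hi and (sa[lo] + k >= len(text) or text[sa[lo] + k] < ch):
            lo += 1
        new_hi = lo
        while new_hi < hi and sa[new_hi] + k < len(text) and text[sa[new_hi] + k] == ch:
            new_hi += 1
        if lo == new_hi:          # no suffix continues with ch
            break
        hi = new_hi
        best_pos, best_len = sa[lo], k + 1
    return best_pos, best_len
```

    For `text = "abracadabra"`, searching for `"abrax"` finds the four-character match `"abra"`; an LZ encoder would emit that (position, length) pair instead of the literal characters.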

  9. Bringing light into the dark: effects of compression clothing on performance and recovery.

    PubMed

    Born, Dennis-Peter; Sperlich, Billy; Holmberg, Hans-Christer

    2013-01-01

    To assess original research addressing the effect of the application of compression clothing on sport performance and recovery after exercise, a computer-based literature search was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges' g) and associated 95% confidence intervals for comparisons of experimental (compression) and control trials (noncompression). The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) Scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations, with consideration of the magnitude of the effects and their practical relevance.

  10. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that

  11. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying

  12. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that

  13. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array in which each pixel corresponding to an edge pixel takes the value of the corresponding image pixel, and each pixel not corresponding to an edge pixel takes a weighted average of the values of the surrounding pixels that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the approach selected, and transmitting each of the images, as compressed, together with an indication of the approach selected for the image.
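
    A minimal sketch of the fill step described above (hypothetical code, not from the patent): edge pixels keep their image values, and the remaining pixels are filled by solving Laplace's equation, here with plain Jacobi iteration rather than the patent's multigrid technique.

```python
def fill_from_edges(height, width, edge_pixels, iters=500):
    """Fill a grid from fixed edge pixels by Jacobi iteration of Laplace's
    equation: every non-edge pixel relaxes toward the mean of its neighbors.
    `edge_pixels` maps (row, col) -> fixed value."""
    grid = [[0.0] * width for _ in range(height)]
    for (r, c), v in edge_pixels.items():
        grid[r][c] = v
    for _ in range(iters):
        new = [row[:] for row in grid]
        for r in range(height):
            for c in range(width):
                if (r, c) in edge_pixels:
                    continue  # edge values stay fixed
                nbrs = [grid[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < height and 0 <= cc < width]
                new[r][c] = sum(nbrs) / len(nbrs)
        grid = new
    return grid
```

    The difference array (image minus this smooth fill) is then typically small in magnitude and compresses well, which is the point of the scheme.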

  14. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
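
    For reference, the PSNR metric mentioned above can be computed in a few lines (a generic sketch over flat pixel lists; the study's visual discrimination model is far more elaborate):

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

    The study's finding is precisely that such pixel-wise metrics vary widely at the visually lossless threshold, whereas the VDM's JND metrics stay nearly constant.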

  15. Effects of axial compression and rotation angle on torsional mechanical properties of bovine caudal discs.

    PubMed

    Bezci, Semih E; Klineberg, Eric O; O'Connell, Grace D

    2018-01-01

    The intervertebral disc is a complex joint that acts to support and transfer large multidirectional loads, including combinations of compression, tension, bending, and torsion. Direct comparison of disc torsion mechanics across studies has been difficult, due to differences in loading protocols. In particular, the lack of information on the combined effect of multiple parameters, including axial compressive preload and rotation angle, makes it difficult to discern whether disc torsion mechanics are sensitive to the variables used in the test protocol. Thus, the objective of this study was to evaluate compression-torsion mechanical behavior of healthy discs under a wide range of rotation angles. Bovine caudal discs were tested under a range of compressive preloads (150, 300, 600, and 900N) and rotation angles (± 1, 2, 3, 4, or 5°) applied at a rate of 0.5°/s. Torque-rotation data were used to characterize shape changes in the hysteresis loop and to calculate disc torsion mechanics. Torsional mechanical properties were described using multivariate regression models. The rate of change in torsional mechanical properties with compression depended on the maximum rotation angle applied, indicating a strong interaction between compressive stress and maximum rotation angle. The regression models reported here can be used to predict disc torsion mechanics under axial compression for a given disc geometry, compressive preload, and rotation angle.

  16. Predictor variable resolution governs modeled soil types

    USDA-ARS?s Scientific Manuscript database

    Soil mapping identifies different soil types by compressing a unique suite of spatial patterns and processes across multiple spatial scales. It can be quite difficult to quantify spatial patterns of soil properties with remotely sensed predictor variables. More specifically, matching the right scale...

  17. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
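
    Vector quantization, one half of the strategy described, maps each input vector to its nearest codebook entry so that only small indices need to be stored; a minimal illustration (hypothetical, unrelated to the MPP implementation):

```python
def vq_encode(vectors, codebook):
    """Replace each vector by the index of its nearest codeword (squared error)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]
```

    Decoding simply looks the indices back up in the codebook, which is why VQ is lossy: each vector is approximated by its codeword.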

  18. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.

  19. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
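
    For orientation, plain OMP (the first-phase building block, not the proposed vOMMP, and without the matrix-inversion-free hardware trick) can be sketched with NumPy as follows, where `A` is the sensing matrix, `y` the measurements, and `k` the sparsity:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-fitting all coefficients by least squares at every step."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Column most correlated with the residual joins the support set.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

    The least-squares step is where a pseudo-inverse would normally appear; the paper's MIF technique replaces it with a QR-based update suitable for systolic arrays.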

  20. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to channel errors, which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.

  1. A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless

    DTIC Science & Technology

    1993-12-01

    Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.

  2. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
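
    The principle described above (short codewords for likely symbols, longer ones for rare symbols) is easy to demonstrate; a minimal, illustrative Huffman code builder in Python, unrelated to the hardware described in the paper:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code where frequent symbols get short codewords.
    (Illustrative only; degenerates to empty codewords for one-symbol input.)"""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merging two subtrees prepends one bit to every codeword inside them.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

    For the input "aaaabbc", the frequent symbol "a" receives a 1-bit codeword while "b" and "c" receive 2 bits each, so encoding costs 4·1 + 2·2 + 1·2 = 10 bits versus 14 bits with a fixed 2-bit code.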

  3. A stable penalty method for the compressible Navier-Stokes equations. 1: Open boundary conditions

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.; Gottlieb, D.

    1994-01-01

    The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization and localization at the boundaries based on these variables. The proposed boundary conditions are applied through a penalty procedure, thus ensuring correct behavior of the scheme as the Reynolds number tends to infinity. The versatility of this method is demonstrated for the problem of a compressible flow past a circular cylinder.

  4. Constraining slepton and chargino through compressed top squark search

    NASA Astrophysics Data System (ADS)

    Konar, Partha; Mondal, Tanmoy; Swain, Abhaya Kumar

    2018-04-01

    We examine the compressed mass spectrum with a sub-TeV top squark as the lightest colored sparticle in natural supersymmetry (SUSY). Such spectra are searched for along with an additional hard jet, not only to boost the soft decay products but also to yield enough missing transverse momentum. Several interesting kinematic variables have been proposed to improve probe performance in this difficult region; we concentrate on the relatively clean dileptonic channel of the top squark decaying into the lightest neutralino (χ₁⁰), which is also the lightest supersymmetric particle. In this work, we investigate the merit of these kinematic variables, which are sensitive to the compressed mass region, extending the search by introducing additional states: a chargino and a slepton (sneutrino) with masses between those of the top squark and the χ₁⁰. Enhanced production and the absence of branching suppression can provide a strong limit on the chargino and slepton/sneutrino masses along with the top squark mass. We perform a detailed collider analysis using a simplified SUSY spectrum and find that, with the present LHC data, the top squark mass can already be excluded up to 710 GeV for a χ₁⁰ mass of 640 GeV at a particular mass gap between the different states.

  5. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  6. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each image according to the approach selected, and transmitting each image as compressed, together with an indication of the approach selected for it. 16 figs.
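
    The filling step can be sketched on a toy grid. The patent solves Laplace's equation with a multi-grid technique; the version below uses plain Jacobi relaxation instead, for clarity, and the 8x8 grid, `laplace_fill` helper, and boundary values are all hypothetical:

```python
import numpy as np

def laplace_fill(values, mask, iters=2000):
    """Fill pixels where mask is False by relaxing toward a solution of
    Laplace's equation; pixels where mask is True (the "edge pixels")
    are held fixed as boundary conditions. Plain Jacobi iteration is
    shown here; the patent uses a faster multi-grid solver."""
    filled = np.where(mask, values, values[mask].mean()).astype(float)
    for _ in range(iters):
        # Replace each free pixel by the average of its four neighbours.
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(mask, values, avg)
    return filled

# Toy 8x8 "image": pixel values known only on the left and right columns.
img = np.zeros((8, 8))
img[:, 0], img[:, -1] = 10.0, 50.0
mask = np.zeros((8, 8), dtype=bool)
mask[:, 0] = mask[:, -1] = True
out = laplace_fill(img, mask)
```

    The harmonic fill interpolates smoothly between the fixed columns (here, a linear ramp from 10 to 50), which is why subtracting it from the original image leaves a small-amplitude difference array that compresses well.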

  7. Amnioinfusion for umbilical cord compression in labour.

    PubMed

    Hofmeyr, G J

    2000-01-01

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. The objective of this review was to assess the effects of amnioinfusion on maternal and perinatal outcome for potential or suspected umbilical cord compression or potential amnionitis. The Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Controlled Trials Register were searched. Randomised trials of amnioinfusion compared with no amnioinfusion in women with babies at risk of umbilical cord compression, and in women at risk of intrauterine infection, were included. Eligibility and trial quality were assessed by the reviewer. Twelve studies were included. Transcervical amnioinfusion for potential or suspected umbilical cord compression was associated with the following reductions: fetal heart rate decelerations (relative risk 0.54, 95% confidence interval 0.43 to 0.68); caesarean section for suspected fetal distress (relative risk 0.35, 95% confidence interval 0.24 to 0.52); neonatal hospital stay greater than 3 days (relative risk 0.40, 95% confidence interval 0.26 to 0.62); maternal hospital stay greater than 3 days (relative risk 0.46, 95% confidence interval 0.29 to 0.74). Transabdominal amnioinfusion showed similar results. Transcervical amnioinfusion to prevent infection in women with membranes ruptured for more than 6 hours was associated with a reduction in puerperal infection (relative risk 0.50, 95% confidence interval 0.26 to 0.97). Amnioinfusion appears to reduce the occurrence of variable heart rate decelerations and lower the use of caesarean section. However, the studies were done in settings where fetal distress was not confirmed by fetal blood sampling. The results may therefore only be relevant where caesarean sections are commonly done for abnormal fetal heart rate alone. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  8. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    NASA Astrophysics Data System (ADS)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former relative to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  9. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  10. Tensile and compressive behavior of Borsic/aluminum

    NASA Technical Reports Server (NTRS)

    Herakovich, C. T.; Davis, J. G., Jr.; Viswanathan, C. N.

    1977-01-01

    The results of an experimental investigation of the mechanical behavior of Borsic/aluminum are presented. Composite laminates were tested in tension and compression for monotonically increasing load and also for variable loading cycles in which the maximum load was increased in each successive cycle. It is shown that significant strain-hardening, and corresponding increase in yield stress, is exhibited by the metal matrix laminates. For matrix dominated laminates, the current yield stress is essentially identical to the previous maximum stress, and unloading is essentially linear with large permanent strains after unloading. For laminates with fiber dominated behavior, the yield stress increases with increase in the previous maximum stress, but the increase in yield stress does not keep pace with the previous maximum stress. These fiber dominated laminates exhibit smaller nonlinear strains, reversed nonlinear behavior during unloading, and smaller permanent strains after unloading. Compression results from sandwich beams and flat coupons are shown to differ considerably. Results from beam specimens tend to exhibit higher values for modulus, yield stress, and strength.

  11. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
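
    The core idea of sparsity-exploiting reconstruction can be illustrated with a small synthetic example. This is a generic compressed-sensing sketch, not a CT or MRI pipeline: it assumes a random Gaussian sensing matrix and uses ISTA (iterative soft thresholding) to solve the l1-regularized recovery problem; all dimensions and coefficient values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: 50 coefficients, only 3 nonzero (values chosen arbitrarily).
n, m = 50, 25
x_true = np.zeros(n)
x_true[[4, 17, 33]] = [5.0, -4.0, 3.0]
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                             # m < n linear measurements

# ISTA: iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / (spectral norm)^2
x_hat = np.zeros(n)
for _ in range(5000):
    z = x_hat - step * (A.T @ (A @ x_hat - y))          # gradient step
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
```

    With only 25 measurements of a 50-dimensional signal, the sparse coefficients are recovered almost exactly because the l1 penalty drives the non-support entries to zero. This "fewer measurements than unknowns" regime is exactly the less-data promise the abstract refers to.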

  12. Torsional and axial compressive properties of tibiotarsal bones of red-tailed hawks (Buteo jamaicensis).

    PubMed

    Kerrigan, Shannon M; Kapatkin, Amy S; Garcia, Tanya C; Robinson, Duane A; Guzman, David Sanchez-Migallon; Stover, Susan M

    2018-04-01

    OBJECTIVE To describe the torsional and axial compressive properties of tibiotarsal bones of red-tailed hawks (Buteo jamaicensis). SAMPLE 16 cadaveric tibiotarsal bones from 8 red-tailed hawks. PROCEDURES 1 tibiotarsal bone from each bird was randomly assigned to be tested in torsion, and the contralateral bone was tested in axial compression. Intact bones were monotonically loaded in either torsion (n = 8) or axial compression (8) to failure. Mechanical variables were derived from load-deformation curves. Fracture configurations were described. Effects of sex, limb side, and bone dimensions on mechanical properties were assessed with a mixed-model ANOVA. Correlations between equivalent torsional and compressive properties were determined. RESULTS Limb side and bone dimensions were not associated with any mechanical property. During compression tests, mean ultimate cumulative energy and postyield energy for female bones were significantly greater than those for male bones. All 8 bones developed a spiral diaphyseal fracture and a metaphyseal fissure or fracture during torsional tests. During compression tests, all bones developed a crushed metaphysis and a fissure or comminuted fracture of the diaphysis. Positive correlations were apparent between most yield and ultimate torsional and compressive properties. CONCLUSIONS AND CLINICAL RELEVANCE The torsional and axial compressive properties of tibiotarsal bones described in this study can be used as a reference for investigations into fixation methods for tibiotarsal fractures in red-tailed hawks. Although the comminuted and spiral diaphyseal fractures induced in this study were consistent with those observed in clinical practice, the metaphyseal disruption observed was not and warrants further research.

  13. Creep and cracking of concrete hinges: insight from centric and eccentric compression experiments.

    PubMed

    Schlappal, Thomas; Schweigler, Michael; Gmainer, Susanne; Peyerl, Martin; Pichler, Bernhard

    2017-01-01

    Existing design guidelines for concrete hinges consider bending-induced tensile cracking, but the structural behavior is oversimplified to be time-independent. This is the motivation to study creep and bending-induced tensile cracking of initially monolithic concrete hinges systematically. Material tests on plain concrete specimens and structural tests on marginally reinforced concrete hinges are performed. The experiments characterize material and structural creep under centric compression as well as bending-induced tensile cracking and the interaction between creep and cracking of concrete hinges. As for the latter two aims, three nominally identical concrete hinges are subjected to short-term and to longer-term eccentric compression tests. Obtained material and structural creep functions referring to centric compression are found to be very similar. The structural creep activity under eccentric compression is significantly larger because of the interaction between creep and cracking, i.e. bending-induced cracks progressively open and propagate under sustained eccentric loading. As for concrete hinges in frame-like integral bridge construction, it is concluded (i) that realistic simulation of variable loads requires consideration of the time-dependent behavior studied here and (ii) that permanent compressive normal forces shall be limited to 45% of the ultimate load-carrying capacity, in order to avoid damage of concrete hinges under sustained loading.

  14. Statistical Compression of Wind Speed Data

    NASA Astrophysics Data System (ADS)

    Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.

    2017-12-01

    In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
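
    The "compression by model fitting" idea can be sketched with a toy first-order autoregressive generator (far simpler than the non-Gaussian spatial model the abstract describes; the AR(1) process and its parameters are purely illustrative). Fitting the model turns a long series into a handful of parameters that can regenerate series with matching statistics:

```python
import random
import statistics

random.seed(42)

# Synthetic "daily wind speed" series from a known AR(1) process,
# standing in for one ensemble member of training data.
n, phi, mu, sigma = 5000, 0.7, 8.0, 1.5
x = [mu]
for _ in range(n - 1):
    x.append(mu + phi * (x[-1] - mu) + random.gauss(0.0, sigma))

# "Compression": keep only the fitted parameters. Three numbers encode
# the 5000-sample series' mean, spread, and day-to-day persistence,
# enough to regenerate statistically equivalent series.
mean = statistics.fmean(x)
sd = statistics.pstdev(x)
num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
phi_hat = num / sum((xi - mean) ** 2 for xi in x)
```

    The fitted `(mean, sd, phi_hat)` triple recovers the generating parameters closely; a lossy codec in this spirit stores only such parameters and accepts that individual samples are not reproduced.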

  15. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules.
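
    The weakest-link prediction that the authors test (and find violated for compressive failure) is easy to reproduce in a toy Monte Carlo: if a specimen fails when its weakest element fails, its strength is the minimum of many independent element strengths, so mean strength decays toward zero as size grows. The uniform-on-[0, 1] element strengths below are an arbitrary illustrative choice:

```python
import random

random.seed(1)

def specimen_strength(n_elements):
    """Weakest-link model: the specimen's strength equals the strength
    of its weakest element (n independent draws, uniform on [0, 1],
    arbitrary units)."""
    return min(random.random() for _ in range(n_elements))

def mean_strength(n_elements, trials=5000):
    """Monte Carlo estimate of the mean weakest-link strength."""
    return sum(specimen_strength(n_elements) for _ in range(trials)) / trials

small = mean_strength(10)    # expected value 1/11 for 10 uniform elements
large = mean_strength(1000)  # expected value 1/1001: strength "vanishes"
```

    This vanishing-strength behavior is what extreme value statistics formalizes for tensile loading; the paper's point is that compressive strength instead saturates at a nonzero value at large scales, because interacting defects, not a single weakest one, control failure.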

  16. Control of traumatic wound bleeding by compression with a compact elastic adhesive dressing.

    PubMed

    Naimer, Sody Abby; Tanami, Menachem; Malichi, Avishai; Moryosef, David

    2006-07-01

    Compression dressing has been assumed effective, but never formally examined in the field. A prospective interventional trial examined the efficacy and feasibility of an elastic adhesive dressing compression device in the arena of the traumatic incident. The primary variable examined was the bleeding rate from wounds, compared before and after dressing. Sixty-two consecutive bleeding wounds resulting from penetrating trauma were treated. Bleeding intensity was profuse in 58%, moderate in 23%, and mild in 19%. Full control of bleeding was achieved in 87%, a significantly diminished rate in 11%, and, in 1 case, the technique had no influence on the bleeding rate. The Wilcoxon test comparing bleeding rates before and after the procedure showed a significant difference (Z = -6.9, p < 0.01). No significant complications were observed. Caregivers were highly satisfied in 90% of cases. Elastic adhesive dressing was observed to be an effective and reliable technique, demonstrating a high rate of success without complications.

  17. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times often are slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
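
    The referential idea, storing only the differences between an input and a known reference, can be sketched in a few lines. This toy handles substitutions only (real referential compressors also encode insertions, deletions, and rearrangements), and the sequences and helper names are made up:

```python
def ref_compress(seq, ref):
    """Toy referential encoding: store only (position, base) pairs where
    seq differs from an equal-length reference."""
    return [(i, b) for i, (a, b) in enumerate(zip(ref, seq)) if a != b]

def ref_decompress(diffs, ref):
    """Rebuild the sequence by applying the stored differences to the reference."""
    out = list(ref)
    for i, b in diffs:
        out[i] = b
    return "".join(out)

# Two 16-base sequences differing in just two positions: the "compressed"
# form is 2 diffs instead of 16 bases.
ref = "ACGTACGTACGTACGT"
seq = "ACGTACCTACGTACGA"
d = ref_compress(seq, ref)
```

    Because individual genomes differ from a reference in only a small fraction of positions, this difference list is vastly smaller than the sequence itself, which is the source of the 400:1 ratios quoted above.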

  18. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50 % concentrations, the lubricant being magnesium stearate in a 1 % concentration. Compressibility is evaluated by means of the energy profile of compression process and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.

  19. Compression fractures of the back

    MedlinePlus

    ... treatments. Surgery can include: Balloon kyphoplasty Vertebroplasty Spinal fusion Other surgery may be done to remove bone ... Alternative Names Vertebral compression fractures; Osteoporosis - compression fracture Images Compression fracture References Cosman F, de Beur SJ, ...

  20. Some Effects of Compressibility on the Flow Through Fans and Turbines

    NASA Technical Reports Server (NTRS)

    Perl, W.; Epstein, H. T.

    1946-01-01

    The laws of conservation of mass, momentum, and energy are applied to the compressible flow through a two-dimensional cascade of airfoils. A fundamental relation between the ultimate upstream and downstream flow angles, the inlet Mach number, and the pressure ratio across the cascade is derived. Comparison with the corresponding relation for incompressible flow shows large differences. The fundamental relation reveals two ranges of flow angles and inlet Mach numbers, for which no ideal pressure ratio exists. One of these nonideal operating ranges is analogous to a similar type in incompressible flow. The other is characteristic only of compressible flow. The effect of variable axial-flow area is treated. Some implications of the basic conservation laws in the case of nonideal flow through cascades are discussed.

  1. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  2. A probabilistic mechanical model for prediction of aggregates’ size distribution effect on concrete compressive strength

    NASA Astrophysics Data System (ADS)

    Miled, Karim; Limam, Oualid; Sab, Karam

    2012-06-01

    To predict aggregates' size distribution effect on the concrete compressive strength, a probabilistic mechanical model is proposed. Within this model, a Voronoi tessellation of a set of non-overlapping and rigid spherical aggregates is used to describe the concrete microstructure. Moreover, aggregates' diameters are defined as statistical variables and their size distribution function is identified with the experimental sieve curve. Then, an inter-aggregate failure criterion is proposed to describe the compressive-shear crushing of the hardened cement paste when concrete is subjected to uniaxial compression. Using a homogenization approach based on statistical homogenization and on geometrical simplifications, an analytical formula predicting the concrete compressive strength is obtained. This formula highlights the effects of cement paste strength and aggregates' size distribution and volume fraction on the concrete compressive strength. According to the proposed model, increasing the concrete strength for the same cement paste and the same aggregates' volume fraction is obtained by decreasing both the aggregates' maximum size and the percentage of coarse aggregates. Finally, the validity of the model has been discussed through a comparison with experimental results (15 concrete compressive strengths ranging between 46 and 106 MPa) taken from the literature and showing a good agreement with the model predictions.

  3. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error on the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  4. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining the image quality and cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
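
    Classic fixed-rate block truncation coding, the scheme the paper adapts, can be sketched as follows. Per block, only the mean, the standard deviation, and a 1-bit-per-pixel plane are kept, and the two reconstruction levels are chosen to preserve the block's first two sample moments (the 4x4 sample block is illustrative, and this sketch omits the paper's adaptive variable-rate modification):

```python
import numpy as np

def btc_block(block):
    """Classic BTC of one block: transmit the block mean, the standard
    deviation, and a bit plane (pixel >= mean); reconstruct with two
    levels chosen so the block mean and variance are preserved."""
    m, s = block.mean(), block.std()
    bits = block >= m
    q = bits.sum()                    # number of "high" pixels
    n = block.size
    if q in (0, n):                   # flat block: both levels equal mean
        return np.full_like(block, m, dtype=float)
    lo = m - s * np.sqrt(q / (n - q))         # level for "low" pixels
    hi = m + s * np.sqrt((n - q) / q)         # level for "high" pixels
    return np.where(bits, hi, lo)

block = np.array([[2., 9., 12., 15.],
                  [2., 11., 11., 9.],
                  [3., 3., 12., 15.],
                  [2., 2., 11., 12.]])
recon = btc_block(block)
```

    With 8-bit mean and standard deviation, a 4x4 block costs (8 + 8 + 16)/16 = 2 bit/pixel; the adaptive variable-rate modification described in the abstract trades side information per block to reach roughly 1.6 bit/pixel while preserving local statistics.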

  5. Influence of compression parameters on mechanical behavior of mozzarella cheese.

    PubMed

    Fogaça, Davi Novaes Ladeia; da Silva, William Soares; Rodrigues, Luciano Brito

    2017-10-01

    Studies on the interaction between direction and degree of compression in the Texture Profile Analysis (TPA) of cheeses are limited. For this reason the present study aimed to evaluate the mechanical properties of Mozzarella cheese by TPA at different compression degrees (65, 75, and 85%) and directions (axes X, Y, and Z). Data obtained were compared in order to identify a possible interaction between the two factors. Compression direction did not affect any mechanical variable; that is, the cheese showed isotropic behavior in TPA. Compression degree had a significant influence (p < 0.05) on TPA responses, except for chewiness TPA (N), which remained constant. Data from the texture profile were fitted to models to explain the mechanical behavior according to the compression degree used in the test. The isotropic behavior observed may be a result of differences in the production method of Mozzarella cheese, especially the stretching of the cheese mass. Texture Profile Analysis (TPA) is a technique largely used to assess the mechanical properties of food, particularly cheese. The precise choice of the instrumental test configuration is essential for achieving results that represent the material analyzed. The method of manufacturing is another factor that may directly influence the mechanical properties of food. This can be seen, for instance, in stretched curd cheese, such as Mozzarella. Knowledge of such mechanical properties is highly relevant for food industries due to the mechanical resistance required in piling, pressing, manufacture of packages, and food transport, and to the melting features presented by the food at high temperatures in the preparation of several foods, such as pizzas, snacks, sandwiches, and appetizers. © 2016 Wiley Periodicals, Inc.

  6. Sensitivity Analysis in RIPless Compressed Sensing

    DTIC Science & Technology

    2014-10-01

    The compressive sensing framework finds a wide range of applications in signal processing and analysis. Within this...compressed sensing. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is

  7. Compression set in gas-blown condensation-cured polysiloxane elastomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Mogon; Chinn, Sarah; Maxwell, Robert S.

    2010-12-01

    Accelerated thermal ageing studies on foamed condensation cured polysiloxane materials have been performed in support of life assessment and material replacement programmes. Two different types of filled hydrogen-blown and condensation cured polysiloxane foams were tested; commercial (RTV S5370), and an in-house formulated polysiloxane elastomer (Silfoam). Compression set properties were investigated using Thermomechanical (TMA) studies and compared against two separate longer term ageing trials carried out in air and in dry inert gas atmospheres using compression jigs. Isotherms measured from these studies were assessed using time-temperature (T/t) superposition. Acceleration factors were determined and fitted to Arrhenius kinetics. For both materials, the thermo-mechanical results were found to closely follow the longer term accelerated ageing trials. Comparison of the accelerated ageing data in dry nitrogen atmospheres against field trial results showed the accelerated ageing trends over predict, however the comparison is difficult as the field data suffer from significant component to component variability. Of the long term ageing trials reported here, those carried out in air deviate more significantly from field trials data compared to those carried out in dry nitrogen atmospheres. For field return samples, there is evidence for residual post-curing reactions influencing mechanical performance, which would accelerate compression set. Multiple quantum-NMR studies suggest that compression set is not associated with significant changes in net crosslink density, but that some degree of network rearrangement has occurred due to viscoelastic relaxation as well as bond breaking and forming processes, with possible post-curing reactions at early times.

  8. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    PubMed Central

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and with the use of compression stockings class II. In the case of multi-layer compression, compression ensuring a pressure of 40 mmHg at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes in ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions A systematic compression therapy, applied with an initial pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  9. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt; the pixels of the two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
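
    The key idea, replacing a stored measurement matrix with a short chaotic key, can be sketched as follows. The logistic-map parameters and matrix sizes are illustrative assumptions, and the block splitting and random pixel-exchange steps of the full algorithm are omitted:

```python
import numpy as np

def logistic_sequence(x0, mu, n, burn=100):
    """Iterate the logistic map x -> mu*x*(1-x); (x0, mu) is the key."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

def keyed_circulant_matrix(key, m, n):
    """Build an m x n partial circulant measurement matrix whose first
    row comes from the logistic map, so two scalars stand in for the
    whole matrix."""
    x0, mu = key
    row = 2 * logistic_sequence(x0, mu, n) - 1   # map values to [-1, 1]
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)                   # circulant shifts
    return C[:m] / np.sqrt(m)                    # keep m rows to compress

Phi = keyed_circulant_matrix((0.3579, 3.99), m=32, n=64)  # key: 2 numbers
x = np.zeros(64)
x[[5, 20, 41]] = [1.0, -2.0, 0.5]   # sparse test signal
y = Phi @ x                          # compressed (and keyed) measurements
```

Anyone holding the key (0.3579, 3.99) regenerates the identical matrix, which is exactly what removes the need to transmit or store it.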

  10. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to grasp difference in quality of chest compression accuracy between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method. Participants were progressed 64 people except 6 absentees among 70 people who agreed to participation with completing the CPR curriculum. In the classification of group in participants, the modified chest compression method was called as smartphone group (33 people). The standardized chest compression method was called as traditional group (31 people). The common equipments in both groups were used Manikin for practice and Manikin for evaluation. In the meantime, the smartphone group for application was utilized Android and iOS Operating System (OS) of 2 smartphone products (G, i). The measurement period was conducted from September 25th to 26th, 2012. Data analysis was used SPSS WIN 12.0 program. As a result of research, the proper compression depth (mm) was shown the proper compression depth (p< 0.01) in traditional group (53.77 mm) compared to smartphone group (48.35 mm). Even the proper chest compression (%) was formed suitably (p< 0.05) in traditional group (73.96%) more than smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) had the higher awareness of chest compression accuracy (p< 0.001) than the smartphone group (2.32 points). In the questionnaire that was additionally carried out 1 question only in smartphone group, the modified chest compression method with the use of smartphone had the high negative reason in rescuer for occurrence of hand back pain (48.5%) and unstable posture (21.2%).

  11. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves
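
    The delta encoding that separates these schemes, and the loss propagation it causes, can be sketched as follows. The field names and values are hypothetical; a real RFC 1144 compressor operates on full TCP/IP headers with variable-length delta codes:

```python
FIELDS = ("seq", "ack", "win")       # simplified stand-ins for TCP fields

def delta_encode(headers):
    """Send the first header in full, then only per-field differences
    (the idea behind Van Jacobson / RFC 1144 compression); successive
    headers usually differ by small, cheaply encoded amounts."""
    out = [dict(headers[0])]
    for prev, cur in zip(headers, headers[1:]):
        out.append({f: cur[f] - prev[f] for f in FIELDS})
    return out

def delta_decode(stream):
    """Rebuild each header from the previous one plus its delta.  If one
    delta is lost, every later header is wrong until a full header
    resynchronizes the chain -- the loss propagation discussed above."""
    headers = [dict(stream[0])]
    for delta in stream[1:]:
        prev = headers[-1]
        headers.append({f: prev[f] + delta[f] for f in FIELDS})
    return headers

hdrs = [{"seq": 1000, "ack": 1, "win": 8192},
        {"seq": 2460, "ack": 1, "win": 8192},
        {"seq": 3920, "ack": 1, "win": 8192}]
assert delta_decode(delta_encode(hdrs)) == hdrs
```

SCPS avoids the decoder-state dependency by not delta-encoding at all, which is why it trades loss propagation for larger compressed headers.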

  12. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8 bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the SHA-256 hash of the original image. The results showed that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
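
    The integrity-check idea can be sketched as LSB embedding of an image hash. This is a simplified illustration, not the authors' reversible, JPEG-robust scheme: the RONI location is a made-up choice, and only the first 64 bits of the SHA-256 digest are stored (one bit per pixel of one 8x8 block) to keep the sketch self-contained:

```python
import hashlib
import numpy as np

BLOCK = (slice(0, 8), slice(0, 8))    # hypothetical RONI block location

def embed(img):
    """Embed an integrity watermark in the LSBs of one 8x8 RONI block.
    The hash is computed with that block's LSBs zeroed, so verification
    can reproduce the exact hashed state."""
    wm = img.copy()
    wm[BLOCK] &= 0xFE                                   # clear LSBs first
    digest = hashlib.sha256(wm.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest[:8], dtype=np.uint8))
    wm[BLOCK] |= bits.reshape(8, 8)                     # 64 bits, 1/pixel
    return wm

def verify(wm):
    """True iff the stored LSB bits match the recomputed digest."""
    stored = (wm[BLOCK] & 1).flatten()
    clean = wm.copy()
    clean[BLOCK] &= 0xFE
    digest = hashlib.sha256(clean.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest[:8], dtype=np.uint8))
    return bool(np.array_equal(stored, bits))

img = np.random.default_rng(0).integers(0, 256, (600, 800), dtype=np.uint8)
marked = embed(img)
assert verify(marked)          # intact image authenticates
marked[100, 100] ^= 1
assert not verify(marked)      # any pixel change breaks the hash
```

Surviving JPEG, as the paper's scheme does, additionally requires embedding in features that quantization preserves; plain LSBs as above would not survive lossy recompression.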

  13. Changes in organisation of instep kicking as a function of wearing compression and textured materials.

    PubMed

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham

    2017-04-01

    This study investigated effects of wearing compression garments and textured insoles on modes of movement organisation emerging during performance of lower limb interceptive actions in association football. Participants were six skilled (age = 15.67 ± 0.74 years) and six less-skilled (age = 15.17 ± 1.1 years) football players. All participants performed 20 instep kicks with maximum velocity in four randomly organised insoles and socks conditions, (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d), Compression Socks with Textured Insoles (CSTI). Results showed that, when wearing textured and compression materials (CSSI condition), less-skilled participants displayed significantly greater hip extension and flexion towards the ball contact phase, indicating larger ranges of motion in the kicking limb than in other conditions. Less-skilled participants also demonstrated greater variability in knee-ankle intralimb (angle-angle plots) coordination modes in the CSTI condition. Findings suggested that use of textured and compression materials increased attunement to somatosensory information from lower limb movement, to regulate performance of dynamic interceptive actions like kicking, especially in less-skilled individuals.

  14. Variable word length encoder reduces TV bandwidth requirements

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.

    1965-01-01

    Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.

  15. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
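
    The non-linear reconstruction at the heart of compressed sensing can be illustrated with iterative soft-thresholding (ISTA) on a toy 1-D sparse signal. The sizes, random sampling matrix, and regularization weight below are illustrative assumptions, not an MRI pipeline:

```python
import numpy as np

def ista(Phi, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding: minimize 0.5*||Phi x - y||^2 + lam*||x||_1,
    the sparsity-promoting non-linear reconstruction used in CS."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x - step * Phi.T @ (Phi @ x - y)          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # shrink
    return x

rng = np.random.default_rng(1)
n, m = 128, 48                       # ambient size vs. measurements
x_true = np.zeros(n)
idx = rng.choice(n, 5, replace=False)
x_true[idx] = [1.5, -2.0, 1.0, -1.0, 0.8]            # 5-sparse signal
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))          # incoherent sampling
y = Phi @ x_true                                     # undersampled data
x_hat = ista(Phi, y)                                 # sparse recovery
```

With 48 incoherent measurements of a 5-sparse length-128 signal, the l1 reconstruction recovers the signal closely, which is the "fewer measurements than traditionally required" claim in miniature.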

  16. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) give a better effective radiometric resolution than TLLC for a given channel rate.
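
    The TLLC baseline is simple to state in code: drop a specified number of least significant bits per pixel, then compress the remainder losslessly. Here zlib stands in for the paper's lossless coder, and the image and bit count are hypothetical:

```python
import zlib
import numpy as np

def tllc(data, k):
    """Truncation followed by lossless compression (TLLC): drop the k
    least significant bits of each pixel, then compress losslessly."""
    truncated = (data >> k).astype(np.uint8)
    return zlib.compress(truncated.tobytes(), 9)

def tllc_restore(payload, k, shape):
    """Decompress and shift back; the k dropped bits are gone for good,
    so the reconstruction error is bounded by 2**k - 1."""
    raw = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    return raw.reshape(shape).astype(np.uint16) << k

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # toy 8-bit image
k = 2
payload = tllc(img, k)
restored = tllc_restore(payload, k, img.shape)
```

The paper's point is that a lossy coder spending the same channel rate across the whole dynamic range beats this hard truncation in effective radiometric resolution.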

  17. Dimensional Changes of Tracheids during Drying of Radiata Pine (Pinus radiata D. Don) Compression Woods: A Study Using Variable-Pressure Scanning Electron Microscopy (VP-SEM)

    PubMed Central

    Zhang, Miao; Smith, Bronwen G.; McArdle, Brian H.; Chavan, Ramesh R.; James, Bryony J.

    2018-01-01

    Variable-pressure scanning electron microscopy was used to investigate the dimensional changes in longitudinal, tangential and radial directions, on wetting and drying, of tracheids of opposite wood (OW) and three grades of compression woods (CWs), including severe CW (SCW) and two grades of mild compression wood (MCW) (MCW1 and MCW2) in corewood of radiata pine (Pinus radiata) saplings. The CW was formed on the underside and OW on the upper side of slightly tilted stems. In the longitudinal direction, the shrinkage of SCW tracheids was ~300% greater than that of OW tracheids, with the shrinkage of the MCW1 and MCW2 tracheids being intermediate. Longitudinal swelling was also investigated and hysteresis was demonstrated for the tracheids of all corewood types, with the extent of hysteresis increasing with CW severity. A statistical association was found between longitudinal shrinkage and the content of lignin and galactosyl residues in the cell-wall matrix. The galactosyl residues are present mostly as (1→4)-β-galactans, which are known to have a high capacity for binding water and swell on hydration. The small proportions of (1→3)-β-glucans in the CWs have similar properties. These polysaccharides may play a functional role in the longitudinal shrinking and swelling of CW tracheids. Tangential shrinkage of tracheids was greater than radial shrinkage but both were greatest for OW and least for SCW, with the MCW1 and MCW2 being intermediate. PMID:29495536

  18. Report from the 2013 meeting of the International Compression Club on advances and challenges of compression therapy.

    PubMed

    Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor

    2014-10-01

    The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented. Copyright © 2014 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  19. A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.

    2015-09-01

    Virtually all existing high energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations--particularly RF curvature, 2) rising side acceleration exacerbates space charge-induced distortion of the longitudinal phase space, and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56>0. This approach offers multiple advantages: 1) It is readily achieved in beam lines supporting simple schemes for aberration compensation, 2) Longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp, and 3) Compressors with M56>0 can be configured to avoid spurious over-compression. We will discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.

  20. Effects of number of ply, compression temperature, pressure and time on mechanical properties of prepreg kenaf-polypropylene composites

    NASA Astrophysics Data System (ADS)

    Tomo, H. S. S.; Ujianto, O.; Rizal, R.; Pratama, Y.

    2017-07-01

    A thermoplastic composite material was prepared from polypropylene granules as the matrix, kenaf fiber as the reinforcement, and maleic anhydride grafted polypropylene as the coupling agent. Composite products were produced as sandwich structures using compression molding. This research aimed to observe the influence of number of plies, temperature, pressure, and compression time using a factorial design. Effects of these variables on tensile and flexural strength were analyzed. Experimental results showed that tensile and flexural strength were influenced by degradation, fiber compaction, and matrix-fiber interaction mechanisms. Flexural strength was significantly affected by the number of plies and its interaction with the other process parameters (temperature, pressure, and compression time), but the process parameters had no significant effect on tensile strength. The highest tensile strength (62.0 MPa) was produced at 3 plies, 210 °C, 50 bar, and 3 min compression time (low, high, high, low), while the highest flexural strength (80.3 MPa) was produced at 3 plies, 190 °C, 50 bar, and 3 min compression time (low, low, high, low).

  1. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
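
    Specialized to a single parameter, the score compression is one line. The sketch below uses i.i.d. Gaussian data with known variance, where the score with respect to the mean is a sufficient summary and the MLE is recoverable from it alone; the data and fiducial values are illustrative:

```python
import numpy as np

def score_compress(data, mu, sigma):
    """Compress N Gaussian draws to the score t = d/d(mu) log L evaluated
    at a fiducial mu: the paper's procedure, specialized to one parameter."""
    return np.sum(data - mu) / sigma**2

rng = np.random.default_rng(2)
sigma = 1.0
data = rng.normal(3.0, sigma, 10_000)          # N = 10,000 data points
t = score_compress(data, mu=2.5, sigma=sigma)  # one summary statistic

# The MLE is recovered from the compressed statistic alone, so no Fisher
# information about the mean was lost in going from N numbers to 1:
mu_hat = 2.5 + t * sigma**2 / len(data)
```

For this linear-Gaussian case the compression is exactly lossless, matching the paper's statement that linear compression is recovered as a special case.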

  2. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  3. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  4. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  5. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  6. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, which saves both transferring time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. Given an image and a specified IQ, we will investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ-measurement versus compression-parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we will select the compression method of the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we will select the compression method of the highest compression ratio. Our experiments tested on thermal (long-wave infrared) images (in gray scale) showed very promising results.
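
    The three-step scenario can be sketched with a toy uniform quantizer standing in for JPEG; the codec, sweep range, and 40 dB target below are illustrative assumptions, and a real pipeline would sweep JPEG/JPEG 2000/BPG/TIFF parameters instead:

```python
import numpy as np

def quantize(img, q):
    """Toy lossy codec (hypothetical stand-in for JPEG): uniform
    quantization with step q."""
    return (np.round(img / q) * q).clip(0, 255)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255**2 / mse)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)

# Step 1: sweep the compression parameter and measure IQ for each setting
qs = np.arange(2, 40, 2)
iqs = np.array([psnr(img, quantize(img, q)) for q in qs])

# Step 2: fit a regression model IQ = f(parameter); for a uniform
# quantizer, PSNR is close to linear in log(q)
slope, intercept = np.polyfit(np.log(qs), iqs, 1)

# Step 3: invert the model to hit a user-specified IQ target
target = 40.0                                    # desired PSNR in dB
q_star = float(np.exp((target - intercept) / slope))
achieved = psnr(img, quantize(img, q_star))
```

The same sweep-fit-invert loop run per codec is what lets the method pick, for a fixed IQ, whichever format yields the highest compression ratio.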

  7. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  8. Quantitative analysis of spatial variability of geotechnical parameters

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic parameters of geotechnical engineering design, and they have strong regional characteristics. At the same time, the spatial variability of geotechnical parameters has been recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the correlation coefficients between geotechnical parameters are calculated. A residential district surveyed by the Tianjin Survey Institute was selected as the research object. There are 68 boreholes in this area and 9 mechanical strata. The parameters are water content, natural unit weight, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion and SP index. According to the principle of statistical correlation, the correlation coefficients of the geotechnical parameters are calculated, and from these coefficients the correlation behavior of the geotechnical parameters is obtained.

  9. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  10. Compressibility of the protein-water interface

    NASA Astrophysics Data System (ADS)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
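
    The closed-system fluctuation relation behind this decomposition, and the split of the protein term into self and cross correlations, take the standard statistical-mechanical form (the labels $V_P$ and $V_W$ for protein and hydration-water volumes are our notation, not necessarily the authors'):

```latex
% Isothermal compressibility from equilibrium volume fluctuations
\kappa_T \;=\; \frac{\langle \delta V^2 \rangle}{k_B T\, \langle V \rangle},
\qquad \delta V = V - \langle V \rangle .
% Decomposition of the protein contribution into self and
% protein--water cross correlations:
\langle \delta V_P\, \delta V \rangle
  \;=\; \langle \delta V_P^2 \rangle
  \;+\; \langle \delta V_P\, \delta V_W \rangle .
```

The abstract's central finding is that the cross term exceeds the self term, i.e. more than half of the protein's pressure response comes from correlated protein-water fluctuations.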

  11. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compressions. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study future challenges and directions in the compression of unstructured cosmological particle data were identified.
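
    The quantitative indicators named above (compression rate, run-time, throughput) are straightforward to collect; a minimal harness might look like the following, with mock "particle" data and stdlib codecs standing in for the Blosc/XZ/FPZIP/ZFP toolkits evaluated in the study:

```python
import lzma
import time
import zlib
import numpy as np

def benchmark(name, compress, data):
    """Measure compression ratio, run-time, and throughput for one codec.
    (Lossless codecs here, so there is no reconstruction error to report.)"""
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    return {"codec": name,
            "ratio": len(data) / len(out),
            "seconds": dt,
            "MB_per_s": len(data) / dt / 1e6}

# Mock particle positions: a smooth random walk, so the bytes carry
# some structure for the codecs to exploit (real HACC-style data differs)
rng = np.random.default_rng(4)
particles = np.cumsum(rng.normal(0, 1e-4, (100_000, 3)), axis=0)
raw = particles.astype(np.float32).tobytes()

results = [benchmark("zlib", lambda d: zlib.compress(d, 6), raw),
           benchmark("lzma", lambda d: lzma.compress(d), raw)]
```

For in-situ use, the "seconds" column is the binding constraint: a codec whose run-time exceeds the interval between simulation time steps is disqualified regardless of its ratio.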

  12. Compressibility of the protein-water interface.

    PubMed

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  13. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  14. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  15. Effect of compressibility on the hypervelocity penetration

    NASA Astrophysics Data System (ADS)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., penetration by a more compressible rod into a less compressible target, by a rod into an analogously compressible target, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
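
    For reference, the incompressible hydrodynamic limit against which the compressible results above are compared is the classical density-ratio law for long-rod penetration. A minimal sketch (the material densities are illustrative values, not from the paper):

```python
import math

def hydrodynamic_penetration_ratio(rho_rod, rho_target):
    """Incompressible hydrodynamic limit: P/L = sqrt(rho_rod / rho_target),
    from the Bernoulli balance at the rod/target interface."""
    return math.sqrt(rho_rod / rho_target)

# e.g. tungsten rod (~19300 kg/m^3) into a steel target (~7850 kg/m^3)
print(hydrodynamic_penetration_ratio(19300, 7850))  # ~1.57
```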

  16. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists in fluctuation analysis of the mains frequency induced in electronic circuits of recording devices. Therefore, its effectiveness is strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist in evaluation of statistical features extracted from the MDCT coefficients, as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used for training selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm was developed and applied, based on analysis of inherent parameters of compression. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on the classification performance is discussed, based on the results of the current study.

  17. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM], are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. Efficient VLSI architecture designs are also proposed for the QCAC-CS and -SRBM encoders with reduced area and total power consumption.
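
    The core measurement step can be sketched in a few lines. This is an illustrative fixed-column-weight sparse binary matrix, not the paper's exact QCAC or SRBM construction; the sizes are arbitrary:

```python
# Compressed-sensing measurement y = Phi @ x with a sparse binary matrix.
# Each column has exactly d ones, so in hardware each incoming sample
# only updates d accumulators -- the source of the area/power savings.
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 256, 64, 4            # signal length, measurements, ones per column

phi = np.zeros((m, n), dtype=np.int8)
for col in range(n):
    rows = rng.choice(m, size=d, replace=False)
    phi[rows, col] = 1

x = np.zeros(n)                  # sparse "neural" signal: 8 nonzero samples
x[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)
y = phi @ x                      # m << n compressed measurements
print(y.shape)  # (64,)
```

    Recovery from y would be done off-chip with a standard sparse solver; only the cheap encoding side runs on the implant.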

  18. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
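
    The idea of referential compression can be illustrated in miniature: encode the input as matches against a reference plus literal mismatches. This is a naive greedy sketch for intuition only, not FRESCO's algorithm (real tools use k-mer or suffix indexes instead of the quadratic scan below):

```python
# Toy referential compression: (offset, length, next_char) triples
# against a reference string, greedy longest-match search.
def ref_compress(reference, target, min_match=4):
    out, i = [], 0
    while i < len(target):
        best_off, best_len = -1, 0
        for off in range(len(reference)):   # naive O(n*m) match search
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        if best_len >= min_match:
            nxt = target[i + best_len] if i + best_len < len(target) else ""
            out.append((best_off, best_len, nxt))
            i += best_len + 1
        else:
            out.append((-1, 0, target[i]))  # literal character
            i += 1
    return out

def ref_decompress(reference, triples):
    parts = []
    for off, length, ch in triples:
        if length:
            parts.append(reference[off:off + length])
        parts.append(ch)
    return "".join(parts)

ref = "ACGTACGTTTGACCA"
tgt = "ACGTACGATTGACCA"    # one substitution relative to the reference
code = ref_compress(ref, tgt)
assert ref_decompress(ref, code) == tgt
```

    A 15-base target collapses to two triples here; on highly similar genomes the same principle yields the very high ratios reported above.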

  19. Micromechanics of composite laminate compression failure

    NASA Technical Reports Server (NTRS)

    Guynn, E. Gail; Bradley, Walter L.

    1986-01-01

    The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.

  20. Randomised crossover trial of rate feedback and force during chest compressions for paediatric cardiopulmonary resuscitation.

    PubMed

    Gregson, Rachael Kathleen; Cole, Tim James; Skellett, Sophie; Bagkeris, Emmanouil; Welsby, Denise; Peters, Mark John

    2017-05-01

    To determine the effect of visual feedback on rate of chest compressions, secondarily relating the forces used. Randomised crossover trial. Tertiary teaching hospital. Fifty trained hospital staff. A thin sensor-mat placed over the manikin's chest measured rate and force. Rescuers applied compressions to the same paediatric manikin for two sessions. During one session they received visual feedback comparing their real-time rate with published guidelines. Primary: compression rate. Secondary: compression and residual forces. Rate of chest compressions (compressions per minute; cpm) varied widely (mean (SD) 111 (13), range 89-168), with a fourfold difference in variation during session 1 between those receiving and not receiving feedback (108 (5) vs 120 (20)). The interaction of session by feedback order was highly significant, indicating that this difference in mean rate between sessions was 14 cpm less (95% CI -22 to -5, p=0.002) in those given feedback first compared with those given it second. Compression force (N) varied widely (mean (SD) 306 (94); range 142-769). Those receiving feedback second (as opposed to first) used significantly lower force (adjusted mean difference -80 (95% CI -128 to -32), p=0.002). Mean residual force (18 N, SD 12, range 0-49) was unaffected by the intervention. While visual feedback restricted excessive compression rates to within the prescribed range, applied force remained widely variable. The forces required may differ with growth, but such variation in treating one manikin is alarming. Feedback technologies additionally measuring force (effort) could help to standardise and define effective treatments throughout childhood.

  1. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed by JPEG and wavelet methods to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or image quality became too poor to make a reliable diagnosis.
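
    The objective assessment step is simply an RMS error between the original and the reconstructed image. A minimal sketch (the arrays are synthetic stand-ins for fundus photographs, and the "compressed" image is simulated by adding small noise):

```python
# RMS error between an original image and its lossy reconstruction.
import numpy as np

def rms_error(original, reconstructed):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
# stand-in for a decompressed image: original plus small pixel errors
noisy = np.clip(img.astype(int) + rng.integers(-3, 4, size=img.shape), 0, 255)
print(rms_error(img, noisy))
```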

  2. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
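
    The normal shock whose position the model controls obeys the standard compressible-flow jump relations. A sketch of those textbook formulas (not code from the model itself), for air with gamma = 1.4:

```python
# Normal-shock relations used in quasi-1D mixed-compression inlet analysis.
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and pressure/density ratios across a normal shock."""
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2)
                   / (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)      # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2/rho1
    return M2, p_ratio, rho_ratio

M2, p21, r21 = normal_shock(2.0)
print(f"M2={M2:.3f}, p2/p1={p21:.3f}, rho2/rho1={r21:.3f}")
```

    For M1 = 2.0 these give the tabulated values M2 ≈ 0.577, p2/p1 = 4.5, rho2/rho1 ≈ 2.667.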

  3. Safety and Efficacy of Defibrillator Charging During Ongoing Chest Compressions: A Multicenter Study

    PubMed Central

    Edelson, Dana P.; Robertson-Dick, Brian J.; Yuen, Trevor C.; Eilevstjønn, Joar; Walsh, Deborah; Bareis, Charles J.; Vanden Hoek, Terry L.; Abella, Benjamin S.

    2013-01-01

    BACKGROUND Pauses in chest compressions during cardiopulmonary resuscitation have been shown to correlate with poor outcomes. In an attempt to minimize these pauses, the American Heart Association recommends charging the defibrillator during chest compressions. While simulation work suggests decreased pause times using this technique, little is known about its use in clinical practice. METHODS We conducted a multicenter, retrospective study of defibrillator charging at three US academic teaching hospitals between April 2006 and April 2009. Data were abstracted from CPR-sensing defibrillator transcripts. Pre-shock pauses and total hands-off time preceding the defibrillation attempts were compared among techniques. RESULTS A total of 680 charge-cycles from 244 cardiac arrests were analyzed. The defibrillator was charged during ongoing chest compressions in 448 (65.9%) instances with wide variability across the three sites. Charging during compressions correlated with a decrease in median pre-shock pause [2.6 (IQR 1.9–3.8) vs 13.3 (IQR 8.6–19.5) s; p < 0.001] and total hands-off time in the 30 s preceding defibrillation [10.3 (IQR 6.4–13.8) vs 14.8 (IQR 11.0–19.6) s; p < 0.001]. The improvement in hands-off time was most pronounced when rescuers charged the defibrillator in anticipation of the pause, prior to any rhythm analysis. There was no difference in inappropriate shocks when charging during chest compressions (20.0 vs 20.1%; p=0.97) and there was only one instance noted of inadvertent shock administration during compressions, which went unnoticed by the compressor. CONCLUSIONS Charging during compressions is underutilized in clinical practice. The technique is associated with decreased hands-off time preceding defibrillation, with minimal risk to patients or rescuers. PMID:20807672

  4. A biomechanical evaluation of a cannulated compressive screw for use in fractures of the scaphoid.

    PubMed

    Rankin, G; Kuschner, S H; Orlando, C; McKellop, H; Brien, W W; Sherman, R

    1991-11-01

    The compressive force generated by a 3.5 mm ASIF cannulated cancellous screw with a 5 mm head was compared with that generated by a standard 3.5 mm ASIF screw (6 mm head), a 2.7 mm ASIF screw (5 mm head), and a Herbert screw. The screws were evaluated in the laboratory with the use of a custom-designed load washer (transducer) to measure the maximum compressive force generated by each screw until failure, either by thread stripping or by head migration into the specimen. Testing was done on paired cadaver scaphoids. To minimize the variability that occurs with human bone, and because of the cost and difficulty of obtaining human tissue specimens, a study was also done on polyurethane foam simulated bones. The 3.5 mm cannulated screw generated greater compressive forces than the Herbert screw but less compression than the 2.7 mm and 3.5 mm ASIF cortical screws. The 3.5 mm cannulated screw offers more rigid internal fixation for scaphoid fractures than the Herbert screw and gives the added advantage of placement over a guide wire.

  5. Finite element computation of compressible flows with the SUPG formulation

    NASA Technical Reports Server (NTRS)

    Le Beau, G. J.; Tezduyar, T. E.

    1991-01-01

    Finite element computation of compressible Euler equations is presented in the context of the streamline-upwind/Petrov-Galerkin (SUPG) formulation. The SUPG formulation, which is based on adding stabilizing terms to the Galerkin formulation, is further supplemented with a shock capturing operator which addresses the difficulty in maintaining a satisfactory solution near discontinuities in the solution field. The shock capturing operator, which has been derived from work done in entropy variables for a similar operator, is shown to lead to an appropriate level of additional stabilization near shocks, without resulting in excessive numerical diffusion. An implicit treatment of the impermeable wall boundary condition is also presented. This treatment of the no-penetration condition offers increased stability for large Courant numbers, and accelerated convergence of the computations for both implicit and explicit applications. Several examples are presented to demonstrate the ability of this method to solve the equations governing compressible fluid flow.
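
    The SUPG idea of adding streamline-weighted residual terms to the Galerkin form can be sketched for a scalar advection model problem (notation standard but simplified; the paper works with the full Euler system and adds the shock-capturing operator on top of this):

```latex
\int_{\Omega} w_h \left(\partial_t u_h + \mathbf{a}\cdot\nabla u_h\right)\, d\Omega
\;+\; \sum_{e} \int_{\Omega^{e}} \tau\, (\mathbf{a}\cdot\nabla w_h)
\left(\partial_t u_h + \mathbf{a}\cdot\nabla u_h\right) d\Omega \;=\; 0,
\qquad \tau = \frac{h^{e}}{2\,\lVert \mathbf{a} \rVert}
```

    The second sum is the stabilizing term: it weights the element residual in the streamline direction, adding diffusion only along the flow.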

  6. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    NASA Astrophysics Data System (ADS)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and its direct influence on diagnostic credibility. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. The subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  7. Self-similar regimes of turbulence in weakly coupled plasmas under compression

    NASA Astrophysics Data System (ADS)

    Viciconte, Giovanni; Gréa, Benoît-Joseph; Godeferd, Fabien S.

    2018-02-01

    Turbulence in weakly coupled plasmas under compression can experience a sudden dissipation of kinetic energy due to the abrupt growth of the viscosity coefficient governed by the temperature increase. We investigate in detail this phenomenon by considering a turbulent velocity field obeying the incompressible Navier-Stokes equations with a source term resulting from the mean velocity. The system can be simplified by a nonlinear change of variable, and then solved using both highly resolved direct numerical simulations and a spectral model based on the eddy-damped quasinormal Markovian closure. The model allows us to explore a wide range of initial Reynolds and compression numbers, beyond the reach of simulations, and thus permits us to evidence the presence of a nonlinear cascade phase. We find self-similarity of intermediate regimes as well as of the final decay of turbulence, and we demonstrate the importance of initial distribution of energy at large scales. This effect can explain the global sensitivity of the flow dynamics to initial conditions, which we also illustrate with simulations of compressed homogeneous isotropic turbulence and of imploding spherical turbulent layers relevant to inertial confinement fusion.

  8. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

    Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.

  9. Video compression via log polar mapping

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.

    1990-09-01

    A three-stage process for compressing real-time color imagery by factors in the range of 1600-to-1 is proposed for 'remote driving'. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM, which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
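
    Stage 1 can be sketched directly: sample the image on a grid that is logarithmic in radius and uniform in angle, so resolution falls off away from the gaze center. This is an illustrative nearest-neighbor version (grid sizes are arbitrary, not the system's actual parameters):

```python
# Log-polar resampling of a square image around its center.
import math
import numpy as np

def log_polar_sample(img, n_rings=32, n_wedges=64, r_min=1.0):
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1
    out = np.zeros((n_rings, n_wedges), dtype=img.dtype)
    for i in range(n_rings):
        # radii spaced exponentially: constant ratio between successive rings
        r = r_min * (r_max / r_min) ** (i / (n_rings - 1))
        for j in range(n_wedges):
            theta = 2 * math.pi * j / n_wedges
            y = int(round(cy + r * math.sin(theta)))
            x = int(round(cx + r * math.cos(theta)))
            out[i, j] = img[y, x]
    return out

img = np.arange(128 * 128, dtype=np.float64).reshape(128, 128)
lp = log_polar_sample(img)
print(lp.shape)  # (32, 64)
```

    Here a 128x128 input is reduced to 32x64 samples, an 8-to-1 reduction; the ratio grows with input resolution since the log-polar grid size stays fixed.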

  10. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    PubMed

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the distinctness after treatment among hot herbal compress, hot compress, and topical diclofenac. The registrants were equally divided into groups and received the different treatments: hot herbal compress, hot compress, and topical diclofenac, with the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were, respectively, used to establish the level of pain intensity and quality of life. In addition, cervical range of motion and pressure pain threshold were also examined to identify the motional effects. All treatments showed a significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups showed superior results compared with the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise to be an efficacious treatment parallel to hot compress and topical diclofenac.

  11. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  12. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  13. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = alpha U_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
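
    The convection-diffusion model problem used for the stability analysis is easy to exercise numerically. A sketch with a simple explicit upwind/central scheme on a periodic domain (for illustration only; the paper's schemes are higher-order):

```python
# Explicit solve of u_t + a u_x = alpha u_xx (upwind convection,
# central diffusion), with dt chosen inside both stability limits.
import numpy as np

a, alpha = 1.0, 0.01
nx = 200
dx = 1.0 / nx
dt = 0.4 * min(dx / a, dx**2 / (2 * alpha))   # CFL and diffusion limits

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)             # Gaussian pulse, periodic domain

for _ in range(200):
    um, up = np.roll(u, 1), np.roll(u, -1)
    u = u - a * dt / dx * (u - um) + alpha * dt / dx**2 * (up - 2 * u + um)

print(float(u.max()))
```

    Violating either limit in the `dt` line makes the solution blow up, which is precisely the kind of asymptotic-stability behavior the paper catalogs for its higher-order operators.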

  14. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160
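
    The quantitative case for "why compression" starts from the alphabet: four DNA bases need only 2 bits each, so plain bit-packing already gives roughly 4:1 over one-byte-per-base text. A minimal sketch of this baseline (not one of the specialized tools the review compares):

```python
# 2-bit packing of a DNA string into bytes and back.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    # prepend a marker bit so leading 'A's (encoded as 0) survive round-trip
    bits |= 1 << (2 * len(seq))
    return bits.to_bytes((2 * len(seq) + 8) // 8, "big")

def unpack(data):
    bits = int.from_bytes(data, "big")
    seq = []
    while bits > 1:               # stop at the marker bit
        seq.append(BASE[bits & 3])
        bits >>= 2
    return "".join(reversed(seq))

s = "GATTACA" * 10
packed = pack(s)
assert unpack(packed) == s
print(len(s), "->", len(packed), "bytes")
```

    Specialized tools do much better than 4:1 by also modeling repeats, quality scores, and read redundancy, which is what the review surveys.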

  15. Effect of interfragmentary gap on compression force in a headless compression screw used for scaphoid fixation.

    PubMed

    Tan, E S; Mat Jais, I S; Abdul Rahim, S; Tay, S C

    2018-01-01

    We investigated the effect of an interfragmentary gap on the final compression force using the Acutrak 2 Mini headless compression screw (length 26 mm) (Acumed, Hillsboro, OR, USA). Two blocks of solid rigid polyurethane foam in a custom jig were separated by spacers of varying thickness (1.0, 1.5, 2.0 and 2.5 mm) to simulate an interfragmentary gap. The spacers were removed before full insertion of the screw and the compression force was measured when the screw was buried 2 mm below the surface of the upper block. Gaps of 1.5 mm and 2.0 mm resulted in significantly decreased compression forces, whereas there was no significant decrease in compression force with a gap of 1 mm. An interfragmentary gap of 2.5 mm did not result in any contact between blocks. We conclude that an increased interfragmentary gap leads to decreased compression force with this screw, which may have implications for fracture healing.

  16. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because the application involves voluminous medical image data and image streams generated at interactive frame rates, deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained, showing that a substantial compression ratio is achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
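The 30 dB acceptability floor quoted above is stated in terms of PSNR, which is straightforward to compute. A minimal sketch follows; the synthetic 8-bit test image and the noise level are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images or volumes."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff**2)
    if mse == 0:
        return float('inf')   # identical data: lossless
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
# Simulate the pixel error a lossy codec might introduce
noisy = np.clip(img + rng.normal(0.0, 8.0, img.shape), 0, 255)
quality_db = psnr(img, noisy)
```

With an error standard deviation of 8 grey levels, the result lands near the paper's 30 dB threshold, which gives an intuition for how much distortion that bound permits.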

  17. Impact of Various Compression Ratio on the Compression Ignition Engine with Diesel and Jatropha Biodiesel

    NASA Astrophysics Data System (ADS)

    Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.

    2017-03-01

    The present experimental investigation evaluates the effects of using blends of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both the diesel and the biodiesel fuel blend were injected at 23º BTDC into the combustion chamber. The experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; a 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5 and 15.5. The main objective is to obtain minimum specific fuel consumption, better efficiency and lower emissions across the different compression ratios. The results show an increase in efficiency at full load when compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that thermal efficiency increases as the blend ratio increases. The biodiesel blend performs close to diesel, but emissions are reduced in all B20MEOJBA blends compared to diesel. Thus this work focuses on the best compression ratio and the suitability of biodiesel blends as an alternate fuel in diesel engines.

  18. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data containing uncertainty receive increasing research interest. Such uncertain DBMSs (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  19. Variable delivery, fixed displacement pump

    DOEpatents

    Sommars, Mark F.

    2001-01-01

    A variable delivery, fixed displacement pump comprises a plurality of pistons reciprocated within corresponding cylinders in a cylinder block. The pistons are reciprocated by rotation of a fixed angle swash plate connected to the pistons. The pistons and cylinders cooperate to define a plurality of fluid compression chambers, each having a delivery outlet. A vent port is provided from each fluid compression chamber to vent fluid therefrom during at least a portion of the reciprocal stroke of the piston. Each piston and cylinder combination cooperates to close the associated vent port during another portion of the reciprocal stroke so that fluid is then pumped through the associated delivery outlet. The delivery rate of the pump is varied by adjusting the axial position of the swash plate relative to the cylinder block, which varies the duration of the piston stroke during which the vent port is closed.

  20. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on an embedded watermark, but also to control the embedding process so that the watermark survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits a human visual system (HVS) model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  1. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in the spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique.

  2. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
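The isentropic-flow equations mentioned first in the list above are compact enough to illustrate directly. Below is a minimal sketch of the standard total-to-static relations for an ideal gas; the function name and defaults are ours, and the toolbox itself is a MATLAB implementation, not this code.

```python
def isentropic_ratios(mach, gamma=1.4):
    """Return (T0/T, p0/p, rho0/rho) for isentropic flow of an ideal gas
    at a given Mach number, with ratio of specific heats gamma."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach**2          # T0/T
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))           # p0/p
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))           # rho0/rho
    return t_ratio, p_ratio, rho_ratio

# At Mach 1 in air (gamma = 1.4), p/p0 is the critical ratio, about 0.5283
t, p, r = isentropic_ratios(1.0)
```

Recovering the well-known critical pressure ratio at Mach 1 is a quick check that such relations are implemented correctly.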

  3. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music.

  4. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis presents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules.
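The core recovery step the abstract refers to, solving an ℓ1-optimization problem to reconstruct a sparse signal from undersampled measurements, can be sketched compactly. The following uses plain ISTA (iterative shrinkage-thresholding) on a synthetic sparse vector; this is a generic illustration of the principle, not the thesis's solver, and all names and parameters are ours.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (gradient step + soft threshold)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random Gaussian sensing matrix
y = A @ x_true                                   # undersampled measurements
x_hat = ista(A, y, lam=0.01, n_iter=2000)
```

Although only 60 of 200 coefficients are measured, the 5-sparse signal is recovered accurately, which is exactly the undersampling regime the thesis exploits for spectra and Hessians.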

  5. Subjective and objective assessment of patients' compression therapy skills as a predicator of ulcer recurrence.

    PubMed

    Mościcka, Paulina; Szewczyk, Maria T; Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna

    2016-07-01

    To verify whether the subjectively and objectively assessed patient's skills in applying compression therapy constitute a predictive factor of venous ulcer recurrence. Systematic application of compression therapy by the patient is the core of prophylaxis for recurrent ulcers; therefore, patient education constitutes a significant element of care. However, it remains controversial whether all individuals benefit equally from education. A retrospective analysis. The study included medical records of patients with venous ulcers (n = 351) treated between 2001 and 2011 at the Clinic for Chronic Wounds at Bydgoszcz Clinical Hospital. We compared two groups of patients, (1) with at least one episode of recurrent ulcer during the five-year observation period, and (2) without recurrences throughout the analysed period, in terms of their theoretical knowledge of and skills in compression therapy recorded at baseline and after one month. Very good self-assessment of a patient's compression therapy skills and weak assessment of these skills by a nurse proved significant risk factors for recurrence of the ulcers on univariate analysis. The significance of these variables as independent risk factors for recurrent ulcers was also confirmed on multivariate analysis, which also took into account other clinical parameters. Building up proper compression therapy skills among patients should be the key element of a properly constructed nurse-based prophylactic program, as it is the most significant modifiable risk factor for recurrent ulcers. Although the development of compression skills is undeniably important, other factors should also be considered, e.g. surgical correction of superficial reflux. Instruction on compression therapy should be conducted by properly trained nursing personnel, with both content and psychological training, and should contain practical instruction with guided exercises.

  6. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is difficult. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid. All of this affects the measurement accuracy of liquid mass gauging, especially for the method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and to study the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonics should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy and that the measurement accuracy increases as the compression frequency climbs at low fill levels; however, lower compression frequencies are the better choice for high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by an external disturbance at the liquid's natural frequency. The measurement accuracy remains acceptable under small-amplitude vibration.

  7. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust, real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers are all based on compressed Haar-like features, and how to compress other, richer high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and precision.
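The compression step described above, projecting a high-dimensional feature vector through a random Gaussian measurement matrix, can be sketched in a few lines. For simplicity this sketch uses a dense Gaussian matrix rather than the sparse one in the paper, and the "feature vectors" are random stand-ins, not actual block-difference features.

```python
import numpy as np

rng = np.random.default_rng(42)
d_high, d_low = 10_000, 128
# Random Gaussian measurement matrix (dense here; the paper uses a sparse one)
R = rng.normal(0.0, 1.0 / np.sqrt(d_low), (d_low, d_high))

# Stand-ins for two high-dimensional block-difference feature vectors
f1 = rng.normal(size=d_high)
f2 = rng.normal(size=d_high)

c1, c2 = R @ f1, R @ f2          # compressed 128-dimensional features
# Pairwise distances are approximately preserved (Johnson-Lindenstrauss),
# which is what lets a classifier operate on the compressed features.
ratio = np.linalg.norm(c1 - c2) / np.linalg.norm(f1 - f2)
```

The distance ratio staying near 1 despite a roughly 78-fold dimensionality reduction is the property compressive trackers rely on.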

  8. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    PubMed

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

    Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle is the fraction of each compression cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following the guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase of average chest compression depth following the increase of the rate of chest compression was observed only with the normal down-stroke pattern (p=0.004). Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  9. Effects from equation of state and rheology in dissipative heating in compressible mantle convection

    NASA Technical Reports Server (NTRS)

    Yuen, David A.; Quareni, Francesca; Hong, H.-J.

    1987-01-01

    The effects of compressibility on mantle convection are considered, incorporating the effects of equations of state and rheology in the dissipative heating term of the energy equation. The ways in which compression may raise the interior mantle temperature are explicitly demonstrated, and it is shown how this effect can be used to constrain some of the intrinsic parameters associated with the equation of state in the mantle. It is concluded that the coupling between variable viscosity and equation of state in dissipative heating is potentially an important mechanism in mantle convection. These findings emphasize that rheology, equation of state, and radiogenic heating are all linked to each other by nonlinear thermomechanical couplings.

  10. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
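The idea summarized above, transform compression applied across all available dimensions at once, can be illustrated compactly. In this sketch a 3D FFT stands in for whatever multidimensional transform the paper uses, and the smooth synthetic volume is a stand-in for PET/MR data; only the principle (keep the largest transform coefficients, discard the rest) is taken from the abstract.

```python
import numpy as np

# Synthetic smooth 32x32x32 "tomographic" volume built from a few
# bin-aligned sinusoids, so its 3D spectrum is genuinely sparse.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
volume = (np.sin(2 * np.pi * 3 * x / 32) * np.cos(2 * np.pi * 2 * y / 32)
          + 0.5 * np.sin(2 * np.pi * 1 * z / 32))

coeffs = np.fft.fftn(volume)                   # transform in all 3 dimensions
keep = 0.02                                    # retain the top 2% of coefficients
thresh = np.quantile(np.abs(coeffs), 1 - keep)
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0)
recon = np.real(np.fft.ifftn(compressed))

rel_err = np.linalg.norm(recon - volume) / np.linalg.norm(volume)
```

Because the redundancy is exploited in every dimension simultaneously, a small fraction of coefficients reconstructs the volume almost exactly; slicing the volume and compressing each 2D slice independently would need far more coefficients for the same fidelity.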

  11. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  12. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  13. Damage development under compression-compression fatigue loading in a stitched uniwoven graphite/epoxy composite material

    NASA Technical Reports Server (NTRS)

    Vandermey, Nancy E.; Morris, Don H.; Masters, John E.

    1991-01-01

    Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. Performance of unnotched specimens having stitch rows at either 0 degree or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. Zero degree and 90 degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, both in static compression and fatigue cycling effects.

  14. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  15. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10(-2) Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
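The interframe idea described above can be sketched concisely: predict each frame's coordinates linearly from the two previous frames (a constant-velocity extrapolation), then quantize and store only the small prediction residuals. This is an illustrative reconstruction of the general technique, not the paper's implementation; the step size and all names are ours. The encoder tracks the decoder's reconstructed state so that quantization error cannot accumulate.

```python
import numpy as np

def compress(frames, step=1e-2):
    """Quantize linear-predictor residuals of a (n_frames, n_atoms, 3) array."""
    q = np.empty(frames.shape, dtype=np.int32)
    rec = np.empty_like(frames)          # decoder-side reconstruction, mirrored here
    for t in range(len(frames)):
        if t == 0:
            pred = np.zeros_like(frames[0])
        elif t == 1:
            pred = rec[0]
        else:
            pred = 2 * rec[t - 1] - rec[t - 2]   # constant-velocity prediction
        q[t] = np.round((frames[t] - pred) / step).astype(np.int32)
        rec[t] = pred + q[t] * step              # what the decoder will see
    return q

def decompress(q, step=1e-2):
    r = q.astype(np.float64) * step
    out = np.empty_like(r)
    out[0] = r[0]
    out[1] = out[0] + r[1]
    for t in range(2, len(r)):
        out[t] = 2 * out[t - 1] - out[t - 2] + r[t]
    return out

rng = np.random.default_rng(3)
# Smooth fake trajectory: 50 frames, 20 atoms, small random-walk motion
traj = np.cumsum(rng.normal(0.0, 0.01, (50, 20, 3)), axis=0)
q = compress(traj)
rec = decompress(q)
max_err = np.max(np.abs(rec - traj))
```

The residuals are small integers that a generic entropy coder compresses far better than raw coordinates, and the closed-loop design bounds the reconstruction error by half the quantization step, mirroring the paper's accuracy-versus-size trade-off.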

  16. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    PubMed

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. MHD simulation of plasma compression experiments

    NASA Astrophysics Data System (ADS)

    Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter

    2017-10-01

    General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.

  18. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.

  19. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized for...

  20. Compressing with dominant hand improves quality of manual chest compressions for rescuers who performed suboptimal CPR in manikins.

    PubMed

    Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin

    2015-07-01

    The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth of greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and the proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were markedly higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to represent the information in the image data more efficiently. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented using source encoding concepts. Various forms of the CCA are defined, and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
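
    A toy illustration of the spatially local clustering step: a tiny 1-D k-means over one block of made-up pixel values, whose cluster indices form that block's feature map (the real CCA clusters multispectral vectors and then source-codes the map).

```python
def cluster_block(pixels, k=2, iters=10):
    """Tiny 1-D k-means: returns (centers, labels) for one spatially
    local block of pixel values. Labels are the block's feature map."""
    centers = sorted(pixels)[:: max(1, len(pixels) // k)][:k]
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = [min(range(len(centers)), key=lambda c: abs(p - centers[c]))
                  for p in pixels]
        # move each center to the mean of its members
        for c in range(len(centers)):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

block = [10, 12, 11, 200, 198, 13, 202, 9]   # made-up pixel block
centers, feature_map = cluster_block(block, k=2)
```

    The two cluster centers act as the block's spectral features, and the short label sequence is what a source coder would then compress.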

  2. Study on the influence of supplying compressed air channels and evicting channels on pneumatical oscillation systems for vibromooshing

    NASA Astrophysics Data System (ADS)

    Glăvan, D. O.; Radu, I.; Babanatsas, T.; Babanatis Merce, R. M.; Kiss, I.; Gaspar, M. C.

    2018-01-01

    The paper presents a pneumatic system with two oscillating masses. The system is composed of a cylinder (framework) with mass m1, which has a piston with mass m2 inside. The cylinder (framework system) has one supplying channel for compressed air and one evicting channel for each work chamber (left and right of the piston). Functionality of the piston position comparatively with the cylinder (framework) is possible through the supplying or evicting of compressed air. The variable force that keeps the movement depends on variation of the pressure that is changing depending on the piston position according to the cylinder (framework) and to the section form that is supplying and evicting channels with compressed air. The paper presents the physical model/pattern, the mathematical model/pattern (differential equations) and numerical solution of the differential equations in hypothesis with the section form of supplying and evicting channels with compressed air is rectangular (variation linear) or circular (variation nonlinear).

  3. Comparison of three portable instruments to measure compression pressure.

    PubMed

    Partsch, H; Mosti, G

    2010-10-01

    Measurement of the interface pressure between the skin and a compression device has gained practical importance, not only for characterizing the efficacy of different compression products in physiological and clinical studies but also for the training of medical staff. A newly developed portable pneumatic pressure transducer (Picopress®) was compared with two established systems (Kikuhime® and SIGaT tester®) by measuring linearity, variability and accuracy on a cylindrical model using a stepwise-inflated sphygmomanometer as the reference. In addition, the variation coefficients were measured by applying the transducers repeatedly under a blood pressure cuff on the distal lower leg of a healthy human subject with stepwise inflation. In the pressure range between 10 and 80 mmHg, all three devices showed a linear association with the sphygmomanometer values (Pearson r > 0.99). The best reproducibility (variation coefficients between 1.05% and 7.4%) and the highest degree of accuracy, as demonstrated by Bland-Altman plots, were achieved with the Picopress® transducer. Repeated measurements of pressure in a human leg revealed average variation coefficients for the three devices of 4.17% (Kikuhime®), 8.52% (SIGaT®) and 2.79% (Picopress®). The results suggest that the Picopress® transducer, which also allows dynamic pressure tracing in connection with a software program and which may be left under a bandage for several days, is a reliable instrument for measuring the pressure under a compression device.
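
    The variation coefficient used to compare the three devices is just the relative standard deviation of repeated readings. A minimal sketch with hypothetical readings (the numbers are not from the study):

```python
from statistics import mean, stdev

def variation_coefficient(readings):
    """Coefficient of variation in percent: 100 * SD / mean, the
    reproducibility metric used to compare repeated pressure readings."""
    return 100.0 * stdev(readings) / mean(readings)

# Hypothetical repeated readings (mmHg) under the same cuff pressure
picopress_readings = [40.1, 39.5, 40.4, 39.8, 40.2]
cv = variation_coefficient(picopress_readings)
```

    A lower coefficient means the transducer reports nearly the same pressure each time it is reapplied.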

  4. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
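
    A toy numeric sketch of the two-direction measurement and the cycle-shift re-encryption, with random Gaussian measurement matrices and a plain logistic map standing in for the paper's hyper-chaotic system (all sizes and constants here are illustrative):

```python
import random

random.seed(42)

def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Toy 8x8 "image", measured in both directions: Y = Phi1 . X . Phi2^T,
# giving 2:1 compression along rows and columns simultaneously
X = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
Phi1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
Phi2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
Y = matmul(matmul(Phi1, X), [list(r) for r in zip(*Phi2)])

# Re-encryption: cyclic row shifts driven by a keyed chaotic sequence
x, shifts = 0.37, []          # 0.37 plays the role of a secret key
for _ in range(4):
    x = 3.99 * x * (1 - x)    # logistic map in its chaotic regime
    shifts.append(int(x * 1000) % 4)
C = [row[-s:] + row[:-s] if s else list(row) for row, s in zip(Y, shifts)]
```

    A receiver holding the key regenerates the same shift sequence and unrolls the rows before running compressive-sensing reconstruction.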

  5. Determination of friction coefficient in unconfined compression of brain tissue.

    PubMed

    Rashid, Badar; Destrade, Michel; Gilchrist, Michael D

    2012-10-01

    Unconfined compression tests are more convenient to perform on cylindrical samples of brain tissue than tensile tests in order to estimate mechanical properties of the brain tissue because they allow homogeneous deformations. The reliability of these tests depends significantly on the amount of friction generated at the specimen/platen interface. Thus, there is a crucial need to find an approximate value of the friction coefficient in order to predict a possible overestimation of stresses during unconfined compression tests. In this study, a combined experimental-computational approach was adopted to estimate the dynamic friction coefficient μ of porcine brain matter against metal platens in compressive tests. Cylindrical samples of porcine brain tissue were tested up to 30% strain at variable strain rates, both under bonded and lubricated conditions in the same controlled environment. It was established that μ was equal to 0.09±0.03, 0.18±0.04, 0.18±0.04 and 0.20±0.02 at strain rates of 1, 30, 60 and 90/s, respectively. Additional tests were also performed to analyze brain tissue under lubricated and bonded conditions, with and without initial contact of the top platen with the brain tissue, with different specimen aspect ratios and with different lubricants (Phosphate Buffer Saline (PBS), Polytetrafluoroethylene (PTFE) and Silicone). The test conditions (lubricant used, biological tissue, loading velocity) adopted in this study were similar to the studies conducted by other research groups. This study will help to understand the amount of friction generated during unconfined compression of brain tissue for strain rates of up to 90/s. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. MP3 compression of Doppler ultrasound signals.

    PubMed

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology.
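
    The quoted compression ratios follow directly from the bit rates, assuming 16-bit stereo PCM at 44.1 kHz (1411.2 kbps, which the abstract rounds to 1400 kbps):

```python
def compression_ratio(fs_hz, bits, channels, target_kbps):
    """Ratio of the uncompressed PCM bit rate to an MP3 target bit rate."""
    uncompressed_kbps = fs_hz * bits * channels / 1000.0
    return uncompressed_kbps / target_kbps

# 44.1 kHz, 16-bit, stereo PCM is 1411.2 kbps; the three MP3 grades give
# roughly the 11:1, 22:1 and 44:1 ratios reported in the abstract
ratios = {kbps: compression_ratio(44100, 16, 2, kbps) for kbps in (128, 64, 32)}
```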

  7. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
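
    The core idea, compressing each column's contiguous data separately instead of gzipping the interleaved file, can be demonstrated with a toy two-column table (this sketch uses zlib and made-up data, not the FITS convention itself):

```python
import struct
import zlib

# Toy "binary table": 1000 rows of (int32 id, float32 flux)
rows = [(i, 0.1 * (i % 50)) for i in range(1000)]
whole = b"".join(struct.pack(">if", i, f) for i, f in rows)

# gzip-style compression of the row-interleaved file
interleaved = len(zlib.compress(whole, 9))

# Column-oriented approach: rearrange into one contiguous block per
# column, then compress each column separately
ids = b"".join(struct.pack(">i", i) for i, _ in rows)
fluxes = b"".join(struct.pack(">f", f) for _, f in rows)
by_column = len(zlib.compress(ids, 9)) + len(zlib.compress(fluxes, 9))
```

    Grouping each column's bytes lets the compressor exploit the strong within-column redundancy that interleaved rows break up.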

  8. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...

  9. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  10. A simple and efficient algorithm operating with linear time for MCEEG data compression.

    PubMed

    Titus, Geevarghese; Sudhakar, M S

    2017-09-01

    The popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can (a) achieve low latency in coding the signal, (b) reduce hardware and software dependencies, (c) quantify system anomalies, and (d) effectively reconstruct the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with average encoding and decoding times per sample of 0.3 ms and 0.04 ms, respectively. The performance of the algorithm is comparable with recent methods such as fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain-computer interfacing systems.
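
    A minimal sketch of SPC's front end, splitting normalized samples into an integer path and a fractional path, together with the PRD fidelity metric quoted above (the scale factor and sample values are made up, not the paper's):

```python
import math

def split_normalized(samples, scale=100):
    """Normalize to [0, 1], scale, and split each value into an integer
    part and a fractional remainder, mirroring SPC's two parallel paths."""
    lo, hi = min(samples), max(samples)
    scaled = [(s - lo) / (hi - lo) * scale for s in samples]
    ints = [int(v) for v in scaled]                 # spatial-domain encoder path
    fracs = [round(v - int(v), 4) for v in scaled]  # "pseudo integer" path
    return ints, fracs, lo, hi

def reconstruct(ints, fracs, lo, hi, scale=100):
    """Invert the split: recombine the two paths and undo normalization."""
    return [(i + f) / scale * (hi - lo) + lo for i, f in zip(ints, fracs)]

def prd(orig, rec):
    """Percentage root mean square deviation, the fidelity metric above."""
    num = sum((o - r) ** 2 for o, r in zip(orig, rec))
    return 100.0 * math.sqrt(num / sum(o ** 2 for o in orig))

eeg = [12.5, -3.0, 7.25, 0.0, 5.125]   # made-up samples from one channel
ints, fracs, lo, hi = split_normalized(eeg)
restored = reconstruct(ints, fracs, lo, hi)
```

    Here the only loss is the rounding of the fractional path, so the PRD is far below the 2.72 average the paper reports for real coding.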

  11. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  12. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.
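
    For reference, the quantities being reconstructed are related by standard linear elastodynamics (textbook relations, not taken from the paper): the volumetric strain is the divergence of the measured displacement field, and the P-wave modulus combines the bulk and shear moduli.

```latex
% volumetric strain from the measured displacement field u
\epsilon_v = \nabla \cdot \mathbf{u}
% dynamic P-wave modulus, relating bulk modulus K, shear modulus \mu,
% density \rho and compression-wave speed c_p
M = K + \tfrac{4}{3}\mu = \rho\, c_p^2
```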

  13. Increased tissue oxygenation explains the attenuation of hyperemia upon repetitive pneumatic compression of the lower leg.

    PubMed

    Messere, Alessandro; Ceravolo, Gianluca; Franco, Walter; Maffiodo, Daniela; Ferraresi, Carlo; Roatta, Silvestro

    2017-12-01

    The rapid hyperemia evoked by muscle compression is short-lived and was recently shown to undergo a rapid decrease even despite continuing mechanical stimulation. The present study aims at investigating the mechanisms underlying this attenuation, which include local metabolic mechanisms, desensitization of mechanosensitive pathways, and reduced efficacy of the muscle pump. In 10 healthy subjects, short sequences of mechanical compressions (n = 3-6; 150 mmHg) of the lower leg were delivered at different interstimulus intervals (ranging from 20 to 160 s) through a customized pneumatic device. Hemodynamic monitoring included near-infrared spectroscopy, detecting tissue oxygenation and blood volume in calf muscles, and simultaneous echo-Doppler measurement of arterial (superficial femoral artery) and venous (femoral vein) blood flow. The results indicate that 1) a long-lasting (>100 s) increase in local tissue oxygenation follows compression-induced hyperemia, 2) compression-induced hyperemia exhibits different patterns of attenuation depending on the interstimulus interval, 3) the amplitude of the hyperemia is not correlated with the amount of blood volume displaced by the compression, and 4) the extent of attenuation negatively correlates with tissue oxygenation (r = -0.78, P < 0.05). Increased tissue oxygenation appears to be the key factor for the attenuation of hyperemia upon repetitive compressive stimulation. Tissue oxygenation monitoring is suggested as a useful complement to medical treatments aimed at improving local circulation by repetitive tissue compression. NEW & NOTEWORTHY This study shows that 1) the hyperemia induced by muscle compression produces a long-lasting increase in tissue oxygenation, 2) the hyperemia produced by subsequent muscle compressions exhibits different patterns of attenuation at different interstimulus intervals, and 3) the extent of attenuation of the compression-induced hyperemia is proportional to the level of

  14. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    PubMed

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in the direct compression of tablets. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. The inter-particulate void space is high for this excipient owing to the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiment (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA®64 and two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. This combination, compressed with the predicted parameters based on the statistical modeling (a main compression force between 5 and 15 kN, a pre-compression force between 2 and 3 kN, a feeder speed fixed at 25 rpm and a compression speed of 45-49 rpm), produced tablets with hardness ranging between 19 and 21 kp, with no friability, capping, or lamination issues.

  15. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.

  16. Lossless compression of otoneurological eye movement signals.

    PubMed

    Tossavainen, Timo; Juhola, Martti

    2002-12-01

    We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression has not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz with 13-bit amplitude resolution, could be losslessly compressed with a compression ratio of about 2.7.
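
    The predictor-plus-entropy-coder model can be sketched as follows, with a synthetic smooth trace and a generic second-order predictor (not the exact predictors benchmarked in the paper); the zeroth-order entropy of the residuals estimates what an ideal entropy coder would spend per sample:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Empirical zeroth-order entropy (bits/symbol) of the residuals."""
    counts, n = Counter(symbols), len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Made-up smooth "eye movement" trace with 13-bit amplitudes
signal = [int(2000 + 1500 * math.sin(i / 40)) for i in range(2000)]

# Second-order linear predictor: pred = 2*x[n-1] - x[n-2];
# the decorrelated residuals are what gets entropy-coded
residuals = signal[:2] + [signal[i] - (2 * signal[i - 1] - signal[i - 2])
                          for i in range(2, len(signal))]
ratio = 13 / entropy_bits(residuals)
```

    The synthetic trace is smoother than real recordings, so the estimated ratio comes out higher than the 2.7 measured on laboratory signals.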

  17. Baroreflex Coupling Assessed by Cross-Compression Entropy

    PubMed Central

    Schumann, Andy; Schulz, Steffen; Voss, Andreas; Scharbrodt, Susann; Baumert, Mathias; Bär, Karl-Jürgen

    2017-01-01

    Estimating interactions between physiological systems is an important challenge in modern biomedical research. Here, we explore a new concept for quantifying information common in two time series by cross-compressibility. Cross-compression entropy (CCE) exploits the ZIP data compression algorithm extended to bivariate data analysis. First, time series are transformed into symbol vectors. Symbols of the target time series are coded by the symbols of the source series. Uncoupled and linearly coupled surrogates were derived from cardiovascular recordings of 36 healthy controls obtained during rest to demonstrate suitability of this method for assessing physiological coupling. CCE at rest was compared to that of isometric handgrip exercise. Finally, spontaneous baroreflex interaction assessed by CCEBRS was compared between 21 patients suffering from acute schizophrenia and 21 matched controls. The CCEBRS of original time series was significantly higher than in uncoupled surrogates in 89% of the subjects and higher than in linearly coupled surrogates in 47% of the subjects. Handgrip exercise led to sympathetic activation and vagal inhibition accompanied by reduced baroreflex sensitivity. CCEBRS decreased from 0.553 ± 0.030 at rest to 0.514 ± 0.035 during exercise (p < 0.001). In acute schizophrenia, heart rate, and blood pressure were elevated. Heart rate variability indicated a change of sympathovagal balance. The CCEBRS of patients with schizophrenia was reduced compared to healthy controls (0.546 ± 0.042 vs. 0.507 ± 0.046, p < 0.01) and revealed a decrease of blood pressure influence on heart rate in patients with schizophrenia. Our results indicate that CCE is suitable for the investigation of linear and non-linear coupling in cardiovascular time series. CCE can quantify causal interactions in short, noisy and non-stationary physiological time series. PMID:28539889
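
    The ZIP-based coupling idea can be sketched with zlib: symbolize both series, then measure how many extra compressed bytes the target costs once the compressor has already seen the source. This is a sketch of the cross-compression concept, with made-up signals, not the paper's exact CCE estimator:

```python
import math
import random
import zlib

def symbolize(series, n_bins=4):
    """Map a time series onto a small symbol alphabet (equal-width bins)."""
    lo, hi = min(series), max(series)
    w = (hi - lo) / n_bins
    return bytes(min(n_bins - 1, int((v - lo) / w)) for v in series)

def cce(source, target):
    """ZIP-style conditional compression: extra bytes the target costs
    after the compressor has seen the source, normalized by the target's
    own cost. Lower values indicate more shared information."""
    c_s = len(zlib.compress(source, 9))
    c_t = len(zlib.compress(target, 9))
    return (len(zlib.compress(source + target, 9)) - c_s) / c_t

random.seed(1)
src = [math.sin(i / 10) for i in range(500)]
coupled = [v + random.gauss(0, 0.05) for v in src]        # noisy echo of src
independent = [random.gauss(0, 1.0) for _ in range(500)]  # no coupling

s = symbolize(src)
cce_coupled = cce(s, symbolize(coupled))
cce_indep = cce(s, symbolize(independent))
```

    The coupled pair scores well below the independent pair, which is the contrast the surrogate tests in the study are built around.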

  18. Widefield compressive multiphoton microscopy.

    PubMed

    Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A

    2018-06-15

    A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.

  19. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
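
    A generic sketch of adaptive-filtering prediction across spectral bands, the idea underlying this algorithm (the sign-LMS predictor and toy bands here are illustrative, not the actual FL/CCSDS-123 predictor):

```python
def adaptive_residuals(bands, mu=1e-4):
    """Predict each band from the co-located sample in the previous band
    with a weight adapted sample-by-sample (sign-LMS); the shrinking
    residuals are what an entropy coder would then encode."""
    out = [list(bands[0])]                 # first band has no predecessor
    for prev, cur in zip(bands, bands[1:]):
        w, res = 1.0, []
        for p, c in zip(prev, cur):
            e = c - w * p                  # prediction error
            res.append(round(e))
            w += mu * (1 if e > 0 else -1) * p   # nudge weight toward best gain
        out.append(res)
    return out

# Two correlated toy bands: band 1 is exactly half of band 0
band0 = [100] * 2000
band1 = [50] * 2000
residuals = adaptive_residuals([band0, band1])
```

    The weight converges toward the true inter-band gain, so residuals fall from tens of counts to near zero as the filter adapts.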

  20. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion energy (IFE): microseconds and centimeters, versus nanoseconds and sub-millimeter scales. MPC also greatly reduces the required confinement time relative to magnetic fusion energy (MFE): microseconds versus minutes. Proof of principle can be demonstrated or refuted using high-current pulsed-power-driven compression of magnetized plasmas, using magnetic-pressure-driven implosions of metal shells known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development of transient, rapidly replaceable transmission lines such as those envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  1. SteamTables: An approach of multiple variable sets

    NASA Astrophysics Data System (ADS)

    Verma, Mahendra P.

    2009-10-01

    Using the IAPWS-95 formulation, an ActiveX component SteamTablesIIE in Visual Basic 6.0 is developed to calculate thermodynamic properties of pure water as a function of two independent intensive variables: (1) temperature (T) or pressure (P) and (2) T, P, volume (V), internal energy (U), enthalpy (H), entropy (S) or Gibbs free energy (G). The second variable cannot be the same as variable 1. Additionally, it calculates the properties along the separation boundaries (i.e., sublimation, saturation, critical isochore, ice I melting, ice III to ice VII melting and minimum volume curves) considering the input parameter as T or P for variable 1. SteamTablesIIE is an extension of the ActiveX component SteamTables implemented earlier considering T (190 to 2000 K) and P (3.23×10^-8 to 10000 MPa) as independent variables. It takes into account the following 27 intensive properties: temperature (T), pressure (P), fraction, state, volume (V), density (Den), compressibility factor (Z0), internal energy (U), enthalpy (H), Gibbs free energy (G), Helmholtz free energy (A), entropy (S), heat capacity at constant pressure (Cp), heat capacity at constant volume (Cv), coefficient of thermal expansion (CTE), isothermal compressibility (Ziso), speed of sound (VelS), partial derivative of P with T at constant V (dPdT), partial derivative of T with V at constant P (dTdV), partial derivative of V with P at constant T (dVdP), Joule-Thomson coefficient (JTC), isothermal throttling coefficient (IJTC), viscosity (Vis), thermal conductivity (ThrmCond), surface tension (SurfTen), Prandtl number (PrdNum) and dielectric constant (DielCons).

  2. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
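    The abstract does not spell out how combinatorial coding works, but the classical idea behind such coders (often called enumerative coding) can be sketched: a binary string with k ones among n bits is replaced by its lexicographic rank among all strings with the same n and k, which costs about log2 C(n, k) bits, the same limit arithmetic coding approaches. This is a generic illustration, not the C4 implementation; the function names are ours.

    ```python
    from math import comb

    def enumerative_rank(bits):
        # Rank of this bitstring among all strings of the same length
        # with the same number of ones, in lexicographic order (0 < 1).
        n, k = len(bits), sum(bits)
        rank = 0
        for i, b in enumerate(bits):
            if b == 1:
                # All strings with the same prefix but a 0 here come first:
                # they place the remaining k ones in the n-i-1 later slots.
                rank += comb(n - i - 1, k)
                k -= 1
        return rank

    def enumerative_unrank(n, k, rank):
        # Inverse mapping: recover the bitstring from (n, k, rank).
        bits = []
        for i in range(n):
            c = comb(n - i - 1, k)
            if rank >= c:
                bits.append(1)
                rank -= c
                k -= 1
            else:
                bits.append(0)
        return bits
    ```

    Transmitting (n, k, rank) is lossless, and because rank fits in ceil(log2 C(n, k)) bits, the code length matches the combinatorial entropy of the source, while encoding and decoding use only table-free integer arithmetic.
    
    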

  3. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  4. Use of customised pressure-guided elastic bandages to improve efficacy of compression bandaging for venous ulcers.

    PubMed

    Sermsathanasawadi, Nuttawut; Chatjaturapat, Choedpong; Pianchareonsin, Rattana; Puangpunngam, Nattawut; Wongwanit, Chumpol; Chinsakchai, Khamin; Ruangsetakit, Chanean; Mutirangura, Pramook

    2017-08-01

    Compression bandaging is a major treatment of chronic venous ulcers. Its efficacy depends on the applied pressure, which is dependent on the skill of the individual applying the bandage. To improve the quality of bandaging by reducing the variability in compression bandage interface pressures, we changed elastic bandages into a customised version by marking them with circular ink stamps, applied when the stretch achieves an interface pressure between 35 and 45 mmHg. Repeated applications by 20 residents of the customised bandage and non-marked bandage to one smaller and one larger leg were evaluated by measuring the sub-bandage pressure. The results demonstrated that the target pressure range is more often attained with the customised bandage compared with the non-marked bandage. The customised bandage improved the efficacy of compression bandaging for venous ulcers, with optimal sub-bandage pressure. © 2016 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  5. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed at a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for differences in rehydration and texture. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were slightly harder and chewier.

  6. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
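    The patent's row layout, variable-size compressed data blocks packed into a row alongside per-block tags, can be modeled in a few lines. This is a toy sketch, not the patented hardware: zlib stands in for the unspecified compression logic, and the class and field names are ours.

    ```python
    import zlib

    class CompressedRow:
        """Toy model of one cache row: compressed data blocks of
        non-uniform size packed contiguously, with a tag entry per block
        recording where its compressed bytes live in the row."""

        def __init__(self):
            self.data = bytearray()   # packed compressed blocks
            self.tags = {}            # block address -> (offset, compressed length)

        def store(self, addr, block: bytes):
            comp = zlib.compress(block)          # compression logic on fill
            self.tags[addr] = (len(self.data), len(comp))
            self.data += comp

        def load(self, addr) -> bytes:
            off, length = self.tags[addr]        # tag lookup locates the block
            return zlib.decompress(bytes(self.data[off:off + length]))
    ```

    The tag indirection is what lets blocks of non-uniform compressed size share one row: a lookup reads the tag first, then decompresses only the addressed block.
    
    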

  7. Corneal Staining and Hot Black Tea Compresses.

    PubMed

    Achiron, Asaf; Birger, Yael; Karmona, Lily; Avizemer, Haggay; Bartov, Elisha; Rahamim, Yocheved; Burgansky-Eliash, Zvia

    2017-03-01

    Warm compresses are widely touted as an effective treatment for ocular surface disorders. Black tea compresses are a common household remedy, although there is no evidence in the medical literature proving their effect and their use may lead to harmful side effects. To describe a case in which the application of black tea to an eye with a corneal epithelial defect led to anterior stromal discoloration; evaluate the prevalence of hot tea compress use; and analyze, in vitro, the discoloring effect of tea compresses on a model of a porcine eye. We assessed the prevalence of hot tea compresses in our community and explored the effect of warm tea compresses on the cornea when the corneal epithelium's integrity is disrupted. An in vitro experiment in which warm compresses were applied to 18 fresh porcine eyes was performed. In half the eyes a corneal epithelial defect was created and in the other half the epithelium was intact. Both groups were divided into subgroups of three eyes each and treated experimentally with warm black tea compresses, pure water, or chamomile tea compresses. We also performed a study in patients with a history of tea compress use. Brown discoloration of the anterior stroma appeared only in the porcine corneas that had an epithelial defect and were treated with black tea compresses. No other eyes from any group showed discoloration. Of the patients included in our survey, approximately 50% had applied some sort of tea ingredient as a solid compressor or as the hot liquid. An intact corneal epithelium serves as an effective barrier against tea-stain discoloration. Only when this layer is disrupted does the damage occur. Therefore, direct application of black tea (Camellia sinensis) to a cornea with an epithelial defect should be avoided.

  8. On the Origins of the Intercorrelations Between Solar Wind Variables

    NASA Astrophysics Data System (ADS)

    Borovsky, Joseph E.

    2018-01-01

    It is well known that the time variations of the diverse solar wind variables at 1 AU (e.g., solar wind speed, density, proton temperature, electron temperature, magnetic field strength, specific entropy, heavy-ion charge-state densities, and electron strahl intensity) are highly intercorrelated with each other. In correlation studies of the driving of the Earth's magnetosphere-ionosphere-thermosphere system by the solar wind, these solar wind intercorrelations make determining cause and effect very difficult. In this report analyses of solar wind spacecraft measurements and compressible-fluid computer simulations are used to study the origins of the solar wind intercorrelations. Two causes are found: (1) synchronized changes in the values of the solar wind variables as the plasma types of the solar wind are switched by solar rotation and (2) dynamic interactions (compressions and rarefactions) in the solar wind between the Sun and the Earth. These findings provide an incremental increase in the understanding of how the Sun-Earth system operates.

  9. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to Earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  10. Friction of Compression-ignition Engines

    NASA Technical Reports Server (NTRS)

    Moore, Charles S.; Collins, John H., Jr.

    1936-01-01

    The cost in mean effective pressure of generating air flow in the combustion chambers of single-cylinder compression-ignition engines was determined for the prechamber and the displaced-piston types of combustion chamber. For each type a wide range of air-flow quantities, speeds, and boost pressures was investigated. Supplementary tests were made to determine the effect of lubricating-oil temperature, cooling-water temperature, and compression ratio on the friction mean effective pressure of the single-cylinder test engine. Friction curves are included for two 9-cylinder, radial, compression-ignition aircraft engines. The results indicate that generating the optimum forced air flow increased the motoring losses approximately 5 pounds per square inch mean effective pressure regardless of chamber type or engine speed. With a given type of chamber, the rate of increase in friction mean effective pressure with engine speed is independent of the air-flow speed. The effect of boost pressure on the friction cannot be predicted because the friction was decreased, unchanged, or increased depending on the combustion-chamber type and design details. High compression ratio accounts for approximately 5 pounds per square inch mean effective pressure of the friction of these single-cylinder compression-ignition engines. The single-cylinder test engines used in this investigation had a much higher friction mean effective pressure than conventional aircraft engines or than the 9-cylinder, radial, compression-ignition engines tested so that performance should be compared on an indicated basis.

  11. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced, however all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
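    The quantize-with-dithering step described above can be sketched as follows. This is a minimal illustration of subtractive dithering, not the FITS tiled-image implementation: function names are ours, and in practice the scale would be chosen as a fraction of the measured background noise rather than passed in arbitrarily.

    ```python
    import numpy as np

    def quantize(pixels, scale, seed=0):
        # Convert floating-point pixels to scaled integers. A reproducible
        # uniform dither in [0, 1) is added before rounding, which
        # decorrelates the quantization error from the signal.
        dither = np.random.default_rng(seed).random(pixels.shape)
        return np.round(pixels / scale + dither - 0.5).astype(np.int32)

    def restore(quantized, scale, seed=0):
        # Regenerate the same dither from the seed and subtract it,
        # so the only residual error is the rounding itself.
        dither = np.random.default_rng(seed).random(quantized.shape)
        return (quantized - dither + 0.5) * scale
    ```

    The round-trip error is bounded by half the quantization step, and because the dither randomizes where each value falls within its quantization bin, averaged quantities such as stellar photometry do not acquire the systematic bias that plain rounding would introduce.
    
    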

  12. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  13. Dogmas and controversies in compression therapy: report of an International Compression Club (ICC) meeting, Brussels, May 2011.

    PubMed

    Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz

    2013-10-01

    The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meet regularly and from these meetings have produced a series of eight consensus publications upon topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist when considering compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts upon each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic thereby widening the discussion to the entire membership of the ICC rather than simply those who were attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas' with areas 1 through 7 dedicated to materials and application techniques used to apply compression with the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.

  14. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    Project title: Real-Time Aggressive Image Data Compression. Principal investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and machine intelligence, thereby providing higher data-throughput rates.

  15. Transverse compression of PPTA fibers

    NASA Astrophysics Data System (ADS)

    Singletary, James

    2000-07-01

    Results of single transverse compression testing of PPTA and PIPD fibers, using a novel test device, are presented and discussed. In the tests, short lengths of single fibers are compressed between two parallel, stiff platens. The fiber elastic deformation is analyzed as a Hertzian contact problem. The inelastic deformation is analyzed by elastic-plastic FE simulation and by laser-scanning confocal microscopy of the compressed fibers ex post facto. The results obtained are compared to those in the literature and to the theoretical predictions of PPTA fiber transverse elasticity based on PPTA crystal elasticity.

  16. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tagged Image File Format (TIFF) and to 100, 75, 50, 25 and 10% quality Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images at 25% and 10% JPEG quality, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% JPEG quality would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  17. Data Compression Using the Dictionary Approach Algorithm

    DTIC Science & Technology

    1990-12-01

    Data Compression Using the Dictionary Approach Algorithm. Naval Postgraduate School thesis (AD-A242 539), Monterey, California, November 1991. The LZ77 compression technique is an OPM/L data compression scheme suggested by Ziv and Lempel, and a slightly modified version is examined. Cited references include Witten, I. H., Neal, R. M., and Cleary, J. G., "Arithmetic Coding for Data Compression," Communications of the ACM, June 1987, and Ziv, J. and Lempel, A.

  18. [Efficacy, safety and comfort of compression therapy models in the immediate post-operative period after a greater saphenectomy. A prospective randomised study].

    PubMed

    Collazo Chao, Eliseo; Luque, María Antonia; González-Ripoll, Carmen

    2010-10-01

    There is still controversy about the best compression therapy after a greater saphenectomy. The purpose of this study was to establish whether the use of a controlled compression stocking has the same level of safety and efficacy as a compression bandage in the immediate post-operative period after a greater saphenectomy. A prospective, randomised, open-label study comparing three groups: a) a conventional compression bandage for one week; b) a conventional compression bandage replaced by a controlled tubular compression stocking 5 h after its placement; c) immediate direct use of the controlled tubular compression stocking, was conducted on fifty-five consecutive outpatients with a greater saphenectomy in one of their legs who fulfilled the inclusion criteria. The working hypothesis was that the controlled tubular compression stocking could replace, in terms of efficacy, safety and comfort, the usual controlled compression in the immediate post-operative period after saphenous vein stripping. The analysis variables were pain, control of bleeding, analgesics in the post-operative period, bruising, incapacity during the first week after the operation and comfort level. There were no statistically significant differences between the three types of compression studied as regards safety, efficacy, comfort level, pain or analgesic consumption, but there was a difference in convenience in favour of the stocking. The controlled tubular compression stocking can replace the compression bandage, with more advantages, after greater saphenous vein stripping in outpatients, having the same safety and efficacy. Copyright © 2009 AEC. Published by Elsevier Espana. All rights reserved.

  19. Robust QRS detection for HRV estimation from compressively sensed ECG measurements for remote health-monitoring systems.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2018-03-15

    To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms on the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized on the first-order difference of the signal. Multiple block-divisions of the signals are carried out with various block lengths, and multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratio but also to various artefacts present in ECG signals acquired by using on-body wireless sensors. HRV parameters computed by using the QRS locations estimated from the signals reconstructed with a compression ratio as high as 90% are comparable with that computed by using QRS locations estimated by using the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems by using CS-based low-power wireless on-body biosensors.

  20. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Given the numerous backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: data-stream compression can only realize intra-file compression, while de-duplication eliminates only inter-file redundant data, so neither alone meets the compression-efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data-stream compression technology within each file. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms to particular situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
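    The two-level idea, global de-duplication of identical chunks across users plus stream compression of each unique chunk, can be sketched in a few lines. This is an illustrative reduction, not the paper's system: SHA-256 content digests stand in for the global level, zlib for the block level, and the class and method names are ours.

    ```python
    import hashlib
    import zlib

    class HybridStore:
        """Two-level backup compression: global de-duplication of
        identical chunks across all backups, plus zlib compression
        within each unique chunk."""

        def __init__(self, chunk_size=4096):
            self.chunk_size = chunk_size
            self.blocks = {}  # digest -> compressed chunk (shared globally)

        def backup(self, data: bytes):
            # Returns a "recipe": the list of chunk digests needed to
            # rebuild this file. Chunks already stored cost nothing.
            recipe = []
            for i in range(0, len(data), self.chunk_size):
                chunk = data[i:i + self.chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in self.blocks:            # global level
                    self.blocks[digest] = zlib.compress(chunk)  # block level
                recipe.append(digest)
            return recipe

        def restore(self, recipe) -> bytes:
            return b"".join(zlib.decompress(self.blocks[d]) for d in recipe)
    ```

    A second user backing up the same file adds no new blocks, which is exactly the inter-file redundancy the global level is meant to remove.
    
    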

  1. Comparative performance between compressed and uncompressed airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh

    2008-04-01

    The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image-data compression rate that can be afforded without loss of image quality for war fighters in the loop and without degrading the performance of a near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression, and both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection across different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
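    The RX detector used as the baseline above is, in its common textbook form, a Mahalanobis-distance score of each pixel's spectrum against the background statistics. A minimal global-background sketch (the paper's exact variant, e.g. local windowing, may differ; the function name is ours):

    ```python
    import numpy as np

    def rx_detector(cube):
        # cube: H x W x B multi-spectral image.
        # Scores each pixel by the squared Mahalanobis distance of its
        # spectrum from the global background mean and covariance;
        # large scores flag spectral anomalies (potential targets).
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)  # regularized
        cov_inv = np.linalg.inv(cov)
        d = X - mu
        scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
        return scores.reshape(h, w)
    ```

    Running the same detector on imagery before and after compression, as the study does, then reduces to comparing the two score maps' detection statistics.
    
    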

  2. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak-lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
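    The square-root step relies on a variance-stabilizing transform: for Poisson counts x, the quantity 2√x has roughly unit noise, so quantizing it with a step of at most one discards less information than the shot noise itself. A minimal sketch of that transform step (omitting the algorithm's exact offsets and the subsequent lossless stage; function names are ours):

    ```python
    import numpy as np

    def sqrt_compress(counts, delta=1.0):
        # 2*sqrt(x) has ~unit standard deviation for Poisson x, so a
        # quantization step delta <= 1 stays below the photon shot noise.
        stabilized = 2.0 * np.sqrt(np.maximum(counts, 0.0))
        return np.round(stabilized / delta).astype(np.int32)

    def sqrt_decompress(codes, delta=1.0):
        # Invert the transform; residual error per pixel is at most
        # ~sqrt(x) * delta / 2, i.e. a fraction of the shot noise.
        return (codes * delta / 2.0) ** 2
    ```

    Because bright pixels tolerate proportionally larger absolute errors, the transformed image needs far fewer distinct code values than the raw 16-bit counts, which is where the factor-of-several compression comes from.
    
    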

  3. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
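    The integer-arithmetic, fully reversible property claimed above can be seen in the basic 2x2 building block of an H-transform. This is a sketch of one step only (the full transform recurses on the smoothed coefficients, and the lossy mode quantizes them, neither shown); function names are ours.

    ```python
    def htransform_block(a, b, c, d):
        # One 2x2 H-transform step: total, horizontal difference,
        # vertical difference, diagonal difference. Integer arithmetic only.
        return (a + b + c + d,
                a - b + c - d,
                a + b - c - d,
                a - b - c + d)

    def inverse_block(h0, hx, hy, hd):
        # Each sum below is an exact multiple of 4, so integer division
        # reconstructs the original pixels exactly (lossless mode).
        return ((h0 + hx + hy + hd) // 4,
                (h0 - hx + hy - hd) // 4,
                (h0 + hx - hy - hd) // 4,
                (h0 - hx - hy + hd) // 4)
    ```

    On a nearly flat background the three difference coefficients are close to zero, which is what makes the transformed astronomical image highly compressible by an entropy coder.
    
    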

  4. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exist a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/~rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
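    Reference-based compression in its very simplest form stores only where a target sequence differs from a shared reference. The sketch below handles substitutions only and assumes aligned, equal-length sequences, so it is far simpler than ERGC itself, which must also handle insertions, deletions and rearrangements; function names are ours.

    ```python
    def diff_encode(target, reference):
        # Store the target length plus (position, base) pairs where the
        # target differs from the aligned reference. For closely related
        # genomes this list is tiny compared to the sequence itself.
        subs = [(i, t) for i, (t, r) in enumerate(zip(target, reference))
                if t != r]
        return len(target), subs

    def diff_decode(reference, encoded):
        n, subs = encoded
        seq = list(reference[:n])
        for i, base in subs:
            seq[i] = base        # re-apply each substitution
        return "".join(seq)
    ```

    Since two human genomes differ at roughly 0.1% of positions, this kind of encoding is why reference-based methods reach compression ratios generic tools cannot.
    
    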

  5. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
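    The paper's file format is not specified in the abstract; as a minimal sketch of idea (i) only, within a block of SNPs in strong linkage disequilibrium one reference SNP can be stored in full and every other SNP stored as just the subject indices where its genotype differs. Function names and the genotype encoding (0/1/2 minor-allele counts) are assumptions for illustration.

```python
def compress_block(block):
    """block: list of genotype lists (one list of 0/1/2 calls per SNP).
    Returns the reference SNP plus per-SNP difference lists."""
    reference = block[0]
    diffs = []
    for snp in block[1:]:
        diffs.append([(i, g) for i, (r, g) in enumerate(zip(reference, snp))
                      if g != r])
    return reference, diffs

def decompress_block(reference, diffs):
    """Rebuild every SNP by patching a copy of the reference."""
    block = [list(reference)]
    for d in diffs:
        snp = list(reference)
        for i, g in d:
            snp[i] = g
        block.append(snp)
    return block
```

    In high-LD blocks the difference lists are short, which is where the compression comes from; fast retrieval follows because each SNP can be rebuilt from only its own difference list.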

  6. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared with those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
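    The paper performs its block-adaptive decorrelation in the DCT domain; the simplified spatial-domain sketch below (names and the scalar-gain predictor are assumptions, and the DCT/JPEG stage is omitted) shows the underlying idea: each block of the plane being encoded is predicted from the co-located block of an already-printed plane by a least-squares gain, and only the gain plus the smaller residual would go to the lossy coder.

```python
def decorrelate_block(printed, current):
    """Least-squares scalar prediction of one block of the current plane
    from the co-located block of the already-printed plane.
    Blocks are given as flat lists of pixel values."""
    num = sum(p * c for p, c in zip(printed, current))
    den = sum(p * p for p in printed)
    gain = num / den if den else 0.0
    residual = [c - gain * p for p, c in zip(printed, current)]
    return gain, residual

def recorrelate_block(printed, gain, residual):
    """Inverse step used at decode time: prediction plus residual."""
    return [gain * p + r for p, r in zip(printed, residual)]
```

    Because the residual is orthogonal to the predictor, its energy never exceeds that of the original block, which is what buys the reported rate savings over coding RGB planes independently.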

  7. Simulating compressible-incompressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; van Wachem, Berend

    2017-11-01

    Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.

  8. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.

  9. Exploring compression techniques for ROOT IO

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
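    The DEFLATE size/level trade-off described above can be demonstrated directly with Python's standard zlib module (the payload here is a made-up repetitive record stream, not ROOT data):

```python
import zlib

# Hypothetical highly repetitive event-record payload.
payload = b"event:%d pt=13.7 eta=-2.1 " * 2000

# Compressed size at three DEFLATE levels: higher levels spend more CPU
# at write time to produce smaller files.
sizes = {level: len(zlib.compress(payload, level)) for level in (1, 6, 9)}

# Whatever the level, DEFLATE is lossless: decompression round-trips exactly.
assert zlib.decompress(zlib.compress(payload, 9)) == payload
```

    Read-optimized choices (e.g. LZ4-style codecs) flip this trade-off, accepting larger files for faster decompression, which is the kind of decision the paper quantifies.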

  10. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  11. Homogenous charge compression ignition engine having a cylinder including a high compression space

    DOEpatents

    Agama, Jorge R.; Fiveland, Scott B.; Maloney, Ronald P.; Faletti, James J.; Clarke, John M.

    2003-12-30

    The present invention relates generally to the field of homogeneous charge compression ignition engines. In these engines, fuel is injected upstream or directly into the cylinder when the power piston is relatively close to its bottom dead center position. The fuel mixes with air in the cylinder as the power piston advances to create a relatively lean homogeneous mixture that preferably ignites when the power piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. Thus, the present invention divides the homogeneous charge between a controlled volume higher compression space and a lower compression space to better control the start of ignition.

  12. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September... 1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master's Thesis, Naval... NAVAL POSTGRADUATE SCHOOL, Monterey, California. THESIS: Psychophysical Comparisons in Image Compression Algorithms, by Christopher J. Bodine, March

  13. The effects of compressive preloads on the compression-after-impact strength of carbon/epoxy

    NASA Technical Reports Server (NTRS)

    Nettles, A. T.; Lance, D. G.

    1992-01-01

    A preloading device was used to examine the effects of compressive prestress on the compression-after-impact (CAI) strength of 16-ply, quasi-isotropic carbon epoxy test coupons. T300/934 material was evaluated at preloads from 200 to 4000 lb at impact energies from 1 to 9 joules. IM7/8551-7 material was evaluated at preloads from 4000 to 10,000 lb at impact energies from 4 to 16 joules. Advanced design of experiments methodology was used to design and evaluate the test matrices. The results showed that no statistically significant change in CAI strength could be attributed to the amount of compressive preload applied to the specimen.

  14. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder that uses a combination of Golomb-Rice and unary encoding. All of these components have been heavily optimized for low power and low cost, and the scheme is lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion, eliminating the need for a large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field-programmable gate array (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a lossless solution for wireless capsule endoscopy with an acceptable level of compression. PMID:25375753
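    The abstract names Golomb-Rice and unary encoding for the prediction residuals; a minimal textbook Golomb-Rice coder (a software sketch, not the paper's optimized FPGA design) works as follows: a non-negative residual n with Rice parameter k is sent as the quotient n >> k in unary followed by the low k bits in binary.

```python
def rice_encode(values, k):
    """Golomb-Rice encode a list of non-negative integers as a bit list."""
    bits = []
    for n in values:
        q = n >> k
        bits.extend([1] * q)                                  # unary quotient
        bits.append(0)                                        # terminator
        bits.extend((n >> i) & 1 for i in range(k - 1, -1, -1))  # k-bit remainder
    return bits

def rice_decode(bits, k, count):
    """Decode `count` values from a Golomb-Rice bit list."""
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                                 # read unary part
            q += 1
            pos += 1
        pos += 1                                              # skip terminator
        r = 0
        for _ in range(k):                                    # read remainder
            r = (r << 1) | bits[pos]
            pos += 1
        out.append((q << k) | r)
    return out
```

    With k = 0 the code degenerates to pure unary, which matches the "combination of Golomb-Rice and unary encoding" the compressor switches between depending on residual statistics.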

  15. Semi-discrete Galerkin solution of the compressible boundary-layer equations with viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Day, Brad A.; Meade, Andrew J., Jr.

    1993-01-01

    A semi-discrete Galerkin (SDG) method is under development to model attached, turbulent, and compressible boundary layers for transonic airfoil analysis problems. For the boundary-layer formulation the method models the spatial variable normal to the surface with linear finite elements and the time-like variable with finite differences. A Dorodnitsyn transformed system of equations is used to bound the infinite spatial domain thereby providing high resolution near the wall and permitting the use of a uniform finite element grid which automatically follows boundary-layer growth. The second-order accurate Crank-Nicholson scheme is applied along with a linearization method to take advantage of the parabolic nature of the boundary-layer equations and generate a non-iterative marching routine. The SDG code can be applied to any smoothly-connected airfoil shape without modification and can be coupled to any inviscid flow solver. In this analysis, a direct viscous-inviscid interaction is accomplished between the Euler and boundary-layer codes through the application of a transpiration velocity boundary condition. Results are presented for compressible turbulent flow past RAE 2822 and NACA 0012 airfoils at various freestream Mach numbers, Reynolds numbers, and angles of attack.

  16. An efficient compression scheme for bitmap indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query
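    The abstract does not spell out the WAH word layout; the toy sketch below captures the hybrid idea with symbolic tuples rather than packed 32-bit machine words: the bitmap is cut into 31-bit groups, runs of all-zero or all-one groups collapse into a single fill word, and everything else is stored as a literal group. The encoding names are illustrative only.

```python
WORD = 31  # payload bits per 32-bit WAH word

def wah_encode(bits):
    """Encode a 0/1 list into symbolic ('FILL', bit, run) / ('LIT', group) words."""
    words, i = [], 0
    while i < len(bits):
        group = tuple(bits[i:i + WORD])
        if len(group) == WORD and len(set(group)) == 1:
            fill = group[0]
            run = 1
            while bits[i + run * WORD: i + (run + 1) * WORD] == list(group):
                run += 1                    # extend the run of identical groups
            words.append(("FILL", fill, run))
            i += run * WORD
        else:
            words.append(("LIT", group))    # mixed (or final partial) group
            i += len(group)
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w[0] == "FILL":
            bits.extend([w[1]] * (w[2] * WORD))
        else:
            bits.extend(w[1])
    return bits
```

    Because fills and literals stay aligned to machine words, logical AND/OR can be run directly on the compressed form, run against run, which is the CPU-friendliness the paper exploits.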

  17. Chest compression quality management and return of spontaneous circulation: a matched-pair registry study.

    PubMed

    Lukas, Roman-Patrik; Gräsner, Jan Thorsten; Seewald, Stephan; Lefering, Rolf; Weber, Thomas Peter; Van Aken, Hugo; Fischer, Matthias; Bohn, Andreas

    2012-10-01

    Investigating the effects of any intervention during cardiac arrest remains difficult. The ROSC after cardiac arrest score was introduced to facilitate comparison of rates of return of spontaneous circulation (ROSC) between different ambulance services. To study the influence of chest compression quality management (including training, real-time feedback devices, and debriefing) in comparison with conventional cardiopulmonary resuscitation (CPR), a matched-pair analysis was conducted using data from the German Resuscitation Registry, with the calculated ROSC after cardiac arrest score as the baseline. Matching for independent ROSC after cardiac arrest score variables yielded 319 matched cases from the study period (January 2007-March 2011). The score predicted a 45% ROSC rate for the matched pairs. The observed ROSC increased significantly with chest compression quality management, to 52% (P=0.013; 95% CI, 46-57%). No significant differences were seen in the conventional CPR group (47%; 95% CI, 42-53%). The difference between the observed ROSC rates was not statistically significant. Chest compression quality management leads to significantly higher ROSC rates than those predicted by the prognostic score (ROSC after cardiac arrest score). Matched-pair analysis shows that with conventional CPR, the observed ROSC rate was not significantly different from the predicted rate. Analysis shows a trend toward a higher ROSC rate for chest compression quality management in comparison with conventional CPR. It is unclear whether a single aspect of chest compression quality management or the combination of training, real-time feedback, and debriefing contributed to this result. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 MB card over a 9600 baud connection would need about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
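    The "high-rate assumption" mentioned above refers to the classical high-rate bit-allocation result for transform subbands; a simplified sketch (not the WSQ standard's actual allocation procedure, and ignoring its quantizer tables) assigns each subband the average rate plus half the log2 ratio of its variance to the geometric-mean variance:

```python
import math

def allocate_bits(variances, avg_rate):
    """High-rate bit allocation: b_i = R + 0.5 * log2(var_i / geometric mean),
    with negative allocations clipped to zero."""
    logs = [math.log2(v) for v in variances]
    geo = sum(logs) / len(logs)          # log2 of the geometric-mean variance
    return [max(0.0, avg_rate + 0.5 * (lv - geo)) for lv in logs]
```

    The failure mode the paper targets is visible here: at a low average rate like 0.75 bit/pixel, the clipping sends many low-variance subbands to zero bits, which is why a better-founded allocation is worth pursuing.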

  19. A biological compression model and its applications.

    PubMed

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  20. Wearable EEG via lossless compression.

    PubMed

    Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2016-08-01

    This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on a previously reported algorithm by the authors, exploits the existing temporal correlation between samples at different sampling times, and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.

  1. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we will refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
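    As a simplified illustration of the transform-then-threshold step (using the Haar wavelet in place of the paper's Daubechies-12 filters, one decomposition level, 1-D signals; all names are illustrative):

```python
def haar_forward(x):
    """One level of the Haar transform: pairwise averages and details."""
    avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def compress_signal(x, threshold):
    """Zero out detail coefficients below the noise threshold."""
    avg, det = haar_forward(x)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return avg, det
```

    Coefficients below the threshold are dominated by photon noise, so discarding them approximates the effect of the reference smoothing filter while making the representation sparse and hence compressible.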

  2. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
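    The abstract describes the optimisation only abstractly; the sketch below is a minimal dynamic-programming formulation of the underlying problem (not the authors' network-model algorithm): keep exactly m samples, always including the endpoints, so that the total squared error of linearly interpolating the discarded samples is minimal. Function names are hypothetical.

```python
def interp_cost(x, i, j):
    """Squared error of linearly interpolating x[i+1..j-1] from x[i] and x[j]."""
    cost = 0.0
    for t in range(i + 1, j):
        est = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
        cost += (x[t] - est) ** 2
    return cost

def best_subset_error(x, m):
    """Minimal total interpolation error keeping m of the n samples.
    dp[k][j]: best error keeping k samples, the last of which is x[j]."""
    n = len(x)
    INF = float("inf")
    dp = [[INF] * n for _ in range(m + 1)]
    dp[1][0] = 0.0                       # always keep the first sample
    for k in range(2, m + 1):
        for j in range(1, n):
            for i in range(j):
                if dp[k - 1][i] < INF:
                    c = dp[k - 1][i] + interp_cost(x, i, j)
                    if c < dp[k][j]:
                        dp[k][j] = c
    return dp[m][n - 1]                  # always keep the last sample
```

    Unlike the greedy heuristics of classical time-domain ECG coders, this guarantees the smallest reconstruction error for the chosen number of retained samples, which is the gap the paper quantifies.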

  3. Nonpainful wide-area compression inhibits experimental pain

    PubMed Central

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-01-01

    Abstract Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM. PMID:27152691

  4. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient at compressing biological data, because these generic algorithms do not exploit the inherent properties of sequencing data. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Nonpainful wide-area compression inhibits experimental pain.

    PubMed

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  6. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding were applied to the

  7. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
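    The percent residual difference (PRD) used above to compare the coders is a standard distortion measure; for an original signal x and reconstruction y it can be computed as:

```python
import math

def prd(x, y):
    """Percent residual difference: 100 * sqrt(sum((x-y)^2) / sum(x^2))."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)
```

    A lower PRD at the same compression ratio means a more faithful reconstruction, which is the sense in which MP3 outperformed the wavelet and ADPCM coders here.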

  8. Near-wall modeling of compressible turbulent flow

    NASA Technical Reports Server (NTRS)

    So, Ronald M. C.

    1991-01-01

    A near-wall two-equation model for compressible flows is proposed. The model is formulated by relaxing the assumption of dynamic field similarity between compressible and incompressible flows. A postulate is made to justify the extension of incompressible models to account for compressibility effects. This requires formulating the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilatational part, which is directly affected by these changes. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate.

  9. Estimates of the effective compressive strength

    NASA Astrophysics Data System (ADS)

    Goldstein, R. V.; Osipenko, N. M.

    2017-07-01

    One problem encountered when determining the effective mechanical properties of large-scale objects, whose strength must be calculated for processes of mechanical interaction with other objects, is the possible variability in their local properties, including variability due to the action of external physical factors. Such problems include determining the effective strength of bodies one of whose dimensions (thickness) is significantly less than the others and whose properties and/or composition can vary through the thickness. A method for estimating the effective strength of such bodies is proposed and illustrated with the example of ice cover strength under longitudinal compression, with regard to a partial loss of the ice bearing capacity during deformation. The role of failure localization processes is shown. It is demonstrated that the proposed approach can be used in other problems of fracture mechanics.

  10. Advances in high throughput DNA sequence data compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz

    2016-06-01

    Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
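
    A referential compressor of the kind surveyed stores each read as its coordinates on the reference plus the mismatching positions; a toy sketch (real tools additionally handle quality scores, indels, and entropy coding):

```python
def compress_read(read, reference, offset):
    """Store a read as (offset, length, mismatches) relative to a reference."""
    segment = reference[offset:offset + len(read)]
    mismatches = [(i, b) for i, (a, b) in enumerate(zip(segment, read)) if a != b]
    return (offset, len(read), mismatches)

def decompress_read(record, reference):
    """Rebuild the read by copying the reference segment and patching mismatches."""
    offset, length, mismatches = record
    seq = list(reference[offset:offset + length])
    for i, base in mismatches:
        seq[i] = base
    return "".join(seq)

ref = "ACGTACGTTTGACC"
read = "ACGAACGT"            # differs from ref[0:8] at one position
rec = compress_read(read, ref, 0)
```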

  11. Parallel Tensor Compression for Large-Scale Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
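
    The storage saving of a Tucker representation comes from replacing the full tensor with a small core tensor plus one factor matrix per mode; the bookkeeping for the abstract's five-way example is sketched below (the multilinear ranks are assumed for illustration, not taken from the paper):

```python
from math import prod

def tucker_storage(dims, ranks):
    """Element counts for a dense tensor vs. its Tucker representation
    (core of size prod(ranks) plus one dims[k] x ranks[k] factor per mode)."""
    full = prod(dims)
    core = prod(ranks)
    factors = sum(d * r for d, r in zip(dims, ranks))
    return full, core + factors

# Five-way tensor from the abstract: 512^3 spatial grid x 64 variables x 128 steps.
dims = (512, 512, 512, 64, 128)
ranks = (32, 32, 32, 16, 16)       # illustrative truncation ranks
full, compressed = tucker_storage(dims, ranks)
ratio = full / compressed          # compression ratio before any entropy coding
```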

  12. Compression and fast retrieval of SNP data.

    PubMed

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
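
    The call-rate/minor-allele-frequency coding in the described algorithm is more elaborate, but the baseline it improves on is the standard 2-bits-per-genotype packing, sketched here as an assumption about the generic layout (not the tool's actual file format):

```python
def pack_genotypes(genotypes):
    """Pack genotypes (0, 1, 2 copies of the minor allele; 3 = missing)
    into bytes at 2 bits each, four genotypes per byte."""
    packed = bytearray()
    for i in range(0, len(genotypes), 4):
        byte = 0
        for j, g in enumerate(genotypes[i:i + 4]):
            byte |= (g & 0b11) << (2 * j)
        packed.append(byte)
    return bytes(packed)

def unpack_genotypes(packed, n):
    """Inverse of pack_genotypes; n trims the padding of the final byte."""
    out = []
    for byte in packed:
        for j in range(4):
            out.append((byte >> (2 * j)) & 0b11)
    return out[:n]

genos = [0, 1, 2, 0, 3, 1]
blob = pack_genotypes(genos)
```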

  13. Compressive Behaviour and Energy Absorption of Aluminium Foam Sandwich

    NASA Astrophysics Data System (ADS)

    Endut, N. A.; Hazza, M. H. F. Al; Sidek, A. A.; Adesta, E. T. Y.; Ibrahim, N. A.

    2018-01-01

    Development of materials in automotive industries plays an important role in retaining safety, performance, and cost. Metal foams are one candidate for new automotive materials, since they can absorb energy when deformed and are therefore well suited to crash management. Recently, aluminium foam sandwich (AFS) has been introduced to replace metallic foam, owing to its light weight and high energy absorption. This paper therefore provides reliable data for analyzing the energy absorption behaviour of aluminium foam sandwich, obtained through compression testing. Six compression tests were carried out to analyze the stress-strain relationship in terms of energy absorption behaviour. The effects of the input variables, namely the thicknesses of the aluminium foam core and the aluminium sheets, on energy absorption behaviour were evaluated comprehensively. Stress-strain curves were used to calculate the energy absorption of the aluminium foam sandwich. The results highlight that the energy absorption of the aluminium foam sandwich increases from 12.74 J to 64.42 J as the foam and skin thickness increase.
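
    Energy absorption from a stress-strain curve is the area under the curve multiplied by the specimen volume; a trapezoidal-rule sketch (the curve and volume below are illustrative, not the paper's data):

```python
def absorbed_energy(strain, stress_mpa, volume_mm3):
    """Energy absorbed (J) = area under the stress-strain curve times specimen
    volume. MPa x mm^3 = N.mm = mJ, hence the final 1e-3 factor to joules."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress_mpa[i] + stress_mpa[i - 1]) * (strain[i] - strain[i - 1])
    return area * volume_mm3 * 1e-3

strain = [0.0, 0.1, 0.3, 0.5]
stress = [0.0, 4.0, 5.0, 6.0]       # plateau-like foam response, MPa
energy = absorbed_energy(strain, stress, volume_mm3=1000.0)
```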

  14. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
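
    The predictive lossless schemes compared above share one pattern: predict each sample, then entropy-code the small residuals. A first-order sketch with a Shannon-entropy lower bound (the sample values are illustrative, and real EEG codecs use higher-order or multivariate predictors):

```python
import math
from collections import Counter

def residuals(samples):
    """First-order linear prediction: each sample predicted by its predecessor;
    the transform is lossless because the first sample is kept verbatim."""
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def entropy_bits(symbols):
    """Shannon entropy in bits/symbol: a lower bound for any lossless entropy coder."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

eeg = [100, 102, 101, 103, 104, 103, 105, 104]
res = residuals(eeg)
# Residuals cluster around a few small values, so they cost fewer bits to code.
```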

  15. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
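
    Screening itself is a per-pixel threshold against a periodic halftone mask; the paper performs this threshold in the DCT domain, but the spatial-domain form below shows the operation being transplanted (the image and mask values are illustrative):

```python
def screen_halftone(image, mask):
    """Classical screening: each pixel is thresholded against a tiled periodic
    mask, producing a binary (printable) dot pattern."""
    h, w = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    return [[1 if image[y][x] > mask[y % mh][x % mw] else 0 for x in range(w)]
            for y in range(h)]

gray = [[30, 200], [120, 90]]
bayer = [[64, 192], [128, 0]]       # tiny ordered-dither style mask
dots = screen_halftone(gray, bayer)
```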

  16. Data Compression With Application to Geo-Location

    DTIC Science & Technology

    2010-08-01

    wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of the energy and transmission latency. Traditional MSE and perceptual based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the

  17. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    PubMed

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  18. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
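
    The TCI forward model behind that reconstruction: one compressive measurement is the sum of T high-speed frames, each modulated by its own coded mask. A sketch with assumed small sizes (the paper uses 256×256 frames and learned reconstructions, not shown here):

```python
import random

def tci_measure(frames, masks):
    """Temporal compressive imaging forward model: a single measurement frame is
    the sum over time of each high-speed frame weighted by its binary mask."""
    h, w = len(frames[0]), len(frames[0][0])
    y = [[0.0] * w for _ in range(h)]
    for frame, mask in zip(frames, masks):
        for r in range(h):
            for c in range(w):
                y[r][c] += mask[r][c] * frame[r][c]
    return y

random.seed(0)
T, H, W = 8, 4, 4                  # temporal compression ratio T = 8, as in the paper
frames = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(T)]
masks = [[[random.randint(0, 1) for _ in range(W)] for _ in range(H)] for _ in range(T)]
y = tci_measure(frames, masks)     # one 4x4 compressive frame encoding 8 frames
```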

  19. Compressed gas fuel storage system

    DOEpatents

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  20. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  1. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  2. 30 CFR 57.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  3. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    NASA Astrophysics Data System (ADS)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major area of experimental research in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments, which give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.

  4. Centrifugal Gas Compression Cycle

    NASA Astrophysics Data System (ADS)

    Fultun, Roy

    2002-11-01

    A centrifuged gas of kinetic, elastic hard spheres compresses isothermally and without flow of heat in a process that reverses free expansion. This theorem follows from stated assumptions via a collection of thought experiments, theorems and other supporting results, and it excludes application of the reversible mechanical adiabatic power law in this context. The existence of an isothermal adiabatic centrifugal compression process makes a three-process cycle possible using a fixed sample of the working gas. The three processes are: adiabatic mechanical expansion and cooling against a piston, isothermal adiabatic centrifugal compression back to the original volume, and isochoric temperature rise back to the original temperature due to an influx of heat. This cycle forms the basis for a Thomson perpetuum mobile that induces a loop of energy flow in an isolated system consisting of a heat bath connectable by a thermal path to the working gas, a mechanical extractor of the gas's internal energy, and a device that uses that mechanical energy and dissipates it as heat back into the heat bath. We present a simple experimental procedure to test the assertion that adiabatic centrifugal compression is isothermal. An energy budget for the cycle provides a criterion for breakeven in the conversion of heat to mechanical energy.

  5. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
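
    The core transformation, sequence to binary image, can be sketched as below; CoGI's actual bit layout and its rectangular partition coder are more involved, so the 2-bit code here is an assumption for illustration:

```python
BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def genome_to_bitmap(seq, width):
    """Map each base to 2 bits and lay the bit stream out as rows of a 2-D
    binary image, zero-padding the final row."""
    bits = [b for base in seq for b in BASE_BITS[base]]
    bits += [0] * (-len(bits) % width)
    return [bits[i:i + width] for i in range(0, len(bits), width)]

bitmap = genome_to_bitmap("ACGTTGCA", width=4)   # 8 bases -> 16 bits -> 4x4 image
```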

  6. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  7. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  8. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  9. 30 CFR 56.13020 - Use of compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  10. The Columbia Thyroid Eye Disease-Compressive Optic Neuropathy Formula.

    PubMed

    Callahan, Alison B; Campbell, Ashley A; Oropesa, Susel; Baraban, Aryeh; Kazim, Michael

    2018-06-13

    Diagnosing thyroid eye disease-compressive optic neuropathy (TED-CON) is challenging, particularly in cases lacking a relative afferent pupillary defect. Large case series of TED-CON patients and accessible diagnostic tools are lacking in the current literature. This study aims to create a mathematical formula that accurately predicts the presence or absence of CON based on the most salient clinical measures of optic neuropathy. A retrospective case series compares 108 patients (216 orbits) with either unilateral or bilateral TED-CON and 41 age-matched patients (82 orbits) with noncompressive TED. Utilizing clinical variables assessing optic nerve function and/or risk of compressive disease, and with the aid of generalized linear regression modeling, the authors create a mathematical formula that weighs the relative contribution of each clinical variable in the overall prediction of CON. Data from 213 orbits in 110 patients derived the formula: y = -0.69 + 2.58 × (afferent pupillary defect) - 0.31 × (summed limitation of ductions) - 0.2 × (mean deviation on Humphrey visual field testing) - 0.02 × (% color plates). This accurately predicted the presence of CON (y > 0) versus non-CON (y < 0) in 82% of cases with 83% sensitivity and 81% specificity. When there was no relative afferent pupillary defect, which was the case in 63% of CON orbits, the formula correctly predicted CON in 78% of orbits with 73% sensitivity and 83% specificity. The authors developed a mathematical formula, the Columbia TED-CON Formula (CTD Formula), that can help guide clinicians in accurately diagnosing TED-CON, particularly in the presence of bilateral disease and when no relative afferent pupillary defect is present.
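
    The published coefficients translate directly into a small scoring function; the sign conventions follow the abstract, and the two example cases are illustrative inputs, not patient data:

```python
def ted_con_score(apd, summed_duction_limitation, mean_deviation_db, percent_color_plates):
    """Columbia TED-CON formula from the abstract: y > 0 predicts compressive
    optic neuropathy (CON), y < 0 predicts non-CON."""
    return (-0.69
            + 2.58 * apd                        # relative afferent pupillary defect (0/1)
            - 0.31 * summed_duction_limitation  # summed limitation of ductions
            - 0.20 * mean_deviation_db          # Humphrey visual field mean deviation (dB)
            - 0.02 * percent_color_plates)      # % color plates correctly identified

# Marked optic-nerve dysfunction scores positive; a quiet orbit scores negative.
con = ted_con_score(1, 4, -10.0, 50)
non_con = ted_con_score(0, 0, -1.0, 100)
```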

  11. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU cost of decompression. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables efficient access using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  12. Aerodynamics inside a rapid compression machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Sung, Chih-Jen

    2006-04-15

    The aerodynamics inside a rapid compression machine after the end of compression is investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternating hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in a drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match the experimental observations. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine.

  13. Chaos-Based Simultaneous Compression and Encryption for Hadoop.

    PubMed

    Usama, Muhammad; Zakaria, Nordin

    2017-01-01

    Data compression and encryption are key components of commonly deployed platforms such as Hadoop. Numerous data compression and encryption tools are presently available on such platforms and the tools are characteristically applied in sequence, i.e., compression followed by encryption or encryption followed by compression. This paper focuses on the open-source Hadoop framework and proposes a data storage method that efficiently couples data compression with encryption. A simultaneous compression and encryption scheme is introduced that addresses an important implementation issue of source coding based on Tent Map and Piece-wise Linear Chaotic Map (PWLM), which is the infinite precision of real numbers that result from their long products. The approach proposed here solves the implementation issue by removing fractional components that are generated by the long products of real numbers. Moreover, it incorporates a stealth key that performs a cyclic shift in PWLM without compromising compression capabilities. In addition, the proposed approach implements a masking pseudorandom keystream that enhances encryption quality. The proposed algorithm demonstrated a congruent fit within the Hadoop framework, providing robust encryption security and compression.
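
    A common textbook form of the piece-wise linear chaotic map (PWLM) referenced above, iterated to produce a keystream; the paper's cyclic-shift stealth key and fraction-removal steps are not reproduced here, so this is only the underlying map:

```python
def pwlm(x, p):
    """Piece-wise linear chaotic map on [0, 1] with control parameter 0 < p < 0.5,
    in its symmetric form: the right half mirrors the left half about x = 0.5."""
    if x >= 0.5:
        x = 1.0 - x
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def keystream(seed, p, n):
    """Iterate the map from a secret seed to get a chaotic masking sequence."""
    xs = []
    x = seed
    for _ in range(n):
        x = pwlm(x, p)
        xs.append(x)
    return xs

stream = keystream(0.3, 0.25, 10)
```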

  14. Chaos-Based Simultaneous Compression and Encryption for Hadoop

    PubMed Central

    Zakaria, Nordin

    2017-01-01

    Data compression and encryption are key components of commonly deployed platforms such as Hadoop. Numerous data compression and encryption tools are presently available on such platforms and the tools are characteristically applied in sequence, i.e., compression followed by encryption or encryption followed by compression. This paper focuses on the open-source Hadoop framework and proposes a data storage method that efficiently couples data compression with encryption. A simultaneous compression and encryption scheme is introduced that addresses an important implementation issue of source coding based on Tent Map and Piece-wise Linear Chaotic Map (PWLM), which is the infinite precision of real numbers that result from their long products. The approach proposed here solves the implementation issue by removing fractional components that are generated by the long products of real numbers. Moreover, it incorporates a stealth key that performs a cyclic shift in PWLM without compromising compression capabilities. In addition, the proposed approach implements a masking pseudorandom keystream that enhances encryption quality. The proposed algorithm demonstrated a congruent fit within the Hadoop framework, providing robust encryption security and compression. PMID:28072850

  15. Method for preventing jamming conditions in a compression device

    DOEpatents

    Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.

    2002-06-18

    A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube, and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented generally at a right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.

  16. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.
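
    The O(n) distance transform mentioned above propagates distances in sweeps over the grid; a one-dimensional two-pass sketch of the idea (the paper's volumetric version sweeps each axis, and exact signed Euclidean distances need additional machinery):

```python
def distance_transform_1d(occupied):
    """Two-pass O(n) chamfer-style distance transform along one axis: a forward
    sweep then a backward sweep propagate the distance to the nearest occupied cell."""
    INF = float("inf")
    n = len(occupied)
    dist = [0 if occupied[i] else INF for i in range(n)]
    for i in range(1, n):                     # forward sweep
        dist[i] = min(dist[i], dist[i - 1] + 1)
    for i in range(n - 2, -1, -1):            # backward sweep
        dist[i] = min(dist[i], dist[i + 1] + 1)
    return dist

d = distance_transform_1d([0, 0, 1, 0, 0, 0, 1, 0])
```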

  17. Compressive Properties of Metal Matrix Syntactic Foams in Free and Constrained Compression

    NASA Astrophysics Data System (ADS)

    Orbulov, Imre Norbert; Májlinger, Kornél

    2014-06-01

    Metal matrix syntactic foam (MMSF) blocks were produced by an inert gas-assisted pressure infiltration technique. MMSFs are advanced hollow-sphere-reinforced composite materials with promising applications in the fields of aviation, transport, and automotive engineering, as well as in civil engineering. The produced blocks were investigated in free and constrained compression modes, and besides the characteristic mechanical properties, their deformation mechanisms and failure modes were studied. In the tests, the chemical composition of the matrix material, the size of the reinforcing ceramic hollow spheres, the applied heat treatment, and the compression mode were considered as investigation parameters. The monitored mechanical properties were the compressive strength, the fracture strain, the structural stiffness, the fracture energy, and the overall absorbed energy. These characteristics were strongly influenced by the test parameters. By proper selection of the matrix and the reinforcement and by proper design, the mechanical properties of MMSFs can be effectively tailored to specific applications.

  18. Optimizing management of the condensing heat and cooling of gases compression in oxy block using of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Brzęczek, Mateusz; Bartela, Łukasz

    2013-12-01

    This paper presents the parameters of the reference oxy combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility to recover the heat in the analyzed power plant is discussed. The decision variables and the thermodynamic functions for the optimization algorithm were identified. The principles of operation of genetic algorithm and methodology of conducted calculations are presented. The sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. Optimization of the heat recovery from the air separation unit, flue gas condition and CO2 capture and compression installation using genetic algorithm was designed to replace the low-pressure section of the regenerative water heaters of steam cycle in analyzed unit. The result was to increase the power and efficiency of the entire power plant.

  19. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  20. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.

  1. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general-purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size, and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
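The core idea, approximating a sampled signal by a truncated Chebyshev series and storing only the coefficients, can be sketched in pure Python. The node-sampling and Clenshaw-evaluation details below are standard textbook choices, not necessarily those of the patented method:

```python
import math

def cheb_coeffs(f, n):
    """Approximate f on [-1, 1] by the first n Chebyshev coefficients,
    using samples at the n Chebyshev nodes (discrete orthogonality)."""
    fx = [f(math.cos(math.pi * (k + 0.5) / n)) for k in range(n)]
    coeffs = [(2.0 / n) * sum(fx[k] * math.cos(math.pi * j * (k + 0.5) / n)
                              for k in range(n))
              for j in range(n)]
    coeffs[0] /= 2.0          # the constant term carries half weight
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate the Chebyshev series via the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]
```

Storing, say, eight coefficients in place of hundreds of samples is where the compression comes from; the reconstruction error is governed by how quickly the coefficients decay, which for smooth signals is very fast.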

  2. Compression fractures detection on CT

    NASA Astrophysics Data System (ADS)

    Bar, Amir; Wolf, Lior; Bergman Amitai, Orna; Toledano, Eyal; Elnekave, Eldad

    2017-03-01

    The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary classified using a Convolutional Neural Network (CNN). Finally a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.

  3. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  4. 46 CFR 197.338 - Compressed gas cylinders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... STANDARDS GENERAL PROVISIONS Commercial Diving Operations Equipment § 197.338 Compressed gas cylinders. Each compressed gas cylinder must— (a) Be stored in a ventilated area; (b) Be protected from excessive heat; (c... 46 Shipping 7 2010-10-01 2010-10-01 false Compressed gas cylinders. 197.338 Section 197.338...

  5. Light-weight reference-based compression of FASTQ data.

    PubMed

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed major challenges to data storage, management, and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads, and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
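The run-length idea used for the quality-score stream can be sketched generically; note that LW-FQZip's actual scheme is run-length-limited and incremental, whereas this toy version places no bound on run length:

```python
def rle_encode(s):
    """Run-length encode a quality-score string as (char, run_length) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return [(c, n) for c, n in out]

def rle_decode(pairs):
    """Invert rle_encode exactly (the scheme is lossless)."""
    return "".join(c * n for c, n in pairs)

q = "IIIIIHHH###"
enc = rle_encode(q)          # [('I', 5), ('H', 3), ('#', 3)]
assert rle_decode(enc) == q
```

Quality strings compress well this way because base callers emit long runs of identical scores.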

  6. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  7. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
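The coded-aperture measurement step, multiplying each sub-frame by a binary mask and integrating the results into one camera frame, can be sketched per pixel; the statistical inversion that recovers the sub-frames from this single measurement is the hard part and is omitted here. The frame and mask values are made-up illustrations:

```python
def coded_measurement(frames, masks):
    """Sum mask-weighted sub-frames into a single coded camera frame.
    frames and masks are lists of equal-length pixel vectors; masks are 0/1."""
    npix = len(frames[0])
    return [sum(m[i] * f[i] for f, m in zip(frames, masks))
            for i in range(npix)]

frames = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # three sub-frames, 3 pixels each
masks  = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]   # per-sub-frame binary codes
print(coded_measurement(frames, masks))       # [8, 13, 9]
```

Because each pixel of the readout mixes several sub-frames under known codes, a sparsity-exploiting solver can later separate them, which is what raises the effective frame rate without new detector hardware.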

  8. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
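The DPCM core, transmitting the difference between successive samples rather than the samples themselves, can be sketched in a lossless form; the hardware described above adds a quantizer and multilevel Huffman coding on top of this basic loop:

```python
def dpcm_encode(samples):
    """DPCM: emit each sample as its difference from the previous sample.
    Small, peaked difference distributions compress well under Huffman coding."""
    prev, diffs = 0, []
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Accumulate differences to recover the original samples exactly."""
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

pixels = [100, 102, 101, 105, 110]
assert dpcm_decode(dpcm_encode(pixels)) == pixels
```

With a quantizer inserted in the loop the scheme becomes lossy, which is why a buffer control algorithm is needed to absorb the variable-rate output of the entropy coder.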

  9. Compressed Air/Vacuum Transportation Techniques

    NASA Astrophysics Data System (ADS)

    Guha, Shyamal

    2011-03-01

    A general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a ``c-shaped'' plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: it is virtually accident-free, highly energy-efficient, and pollution-free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

  10. Compression selective solid-state chemistry

    NASA Astrophysics Data System (ADS)

    Hu, Anguang

    Compression selective solid-state chemistry refers to mechanically induced selective reactions of solids under extreme thermomechanical conditions. Advanced quantum solid-state chemistry simulations, based on density functional theory with localized basis functions, were performed to provide remarkable insight into bonding pathways of high-pressure chemical reactions, in full agreement with experiments. These pathways clearly demonstrate reaction mechanisms in unprecedented structural detail, showing not only the chemical identity of reactive intermediates but also how atoms move along the reaction coordinate associated with a specific vibrational mode, directed by induced chemical stress occurring during bond breaking and forming. This indicates that chemical bonds in solids can be broken and formed precisely under compression, as desired. This can be realized through strong coupling of mechanical work to an initiation vibrational mode while all other modes are suppressed under compression, allowing ultrafast reactions to take place isothermally in a few femtoseconds. Thermodynamically, such reactions correspond to an entropy-minimum process on an isotherm where the compression can force the thermal expansion coefficient to zero. Combining an extremely brief reaction process with specific mode selectivity, both statistical laws and the quantum uncertainty principle can be bypassed to precisely break chemical bonds, establishing fundamental principles of compression selective solid-state chemistry. Naturally, this leads to an understanding of the "alchemy" needed to purify, grow, and perfect certain materials, such as emerging novel disruptive energetics.

  11. Compression failure mechanisms of composite structures

    NASA Technical Reports Server (NTRS)

    Hahn, H. T.; Sohi, M.; Moon, S.

    1986-01-01

    An experimental and analytical study was conducted to delineate the compression failure mechanisms of composite structures. The present report summarizes further results on kink band formation in unidirectional composites. In order to assess the compressive strengths and failure modes of the fibers themselves, a fiber bundle was embedded in an epoxy casting and tested in compression. A total of six different fibers were used together with two resins of different stiffnesses. The failure of highly anisotropic fibers such as Kevlar 49 and P-75 graphite was due to kinking of fibrils. However, the remaining fibers--T300 and T700 graphite, E-glass, and alumina--failed by localized microbuckling. Compressive strengths of the latter group of fibers were not fully utilized in their respective composites. In addition, acoustic emission monitoring revealed that fiber-matrix debonding did not occur gradually but suddenly, at final failure. The kink band formation in unidirectional composites under compression was studied analytically and through microscopy. The material combinations selected included seven graphite/epoxy composites, two graphite/thermoplastic resin composites, one Kevlar 49/epoxy composite, and one S-glass/epoxy composite.

  12. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
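The progressive bit-plane coding stage can be illustrated on small non-negative integer coefficients; a real CCSDS coder adds sign handling and entropy coding, which this sketch omits:

```python
def bit_planes(values, nbits=8):
    """Progressive bit-plane scan: most significant planes first, so a
    truncated stream still yields a coarse reconstruction."""
    return [[(v >> b) & 1 for v in values] for b in range(nbits - 1, -1, -1)]

def reconstruct(planes, nbits=8):
    """Rebuild values from however many leading planes were received."""
    vals = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        b = nbits - 1 - i
        for j, bit in enumerate(plane):
            vals[j] |= bit << b
    return vals

vals = [200, 13, 77]
planes = bit_planes(vals)
assert reconstruct(planes) == vals          # all 8 planes: exact
print(reconstruct(planes[:4]))              # top 4 planes only: [192, 0, 64]
```

This plane-by-plane ordering is what lets a user truncate the compressed stream at any point and trade data volume directly against fidelity.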

  13. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time

  14. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
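A minimal Bloom filter, the building block of BARCODE's cascade, can be sketched in pure Python; BARCODE's actual encoding and decoding protocol over read-length subsequences of the reference is considerably more involved:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array.
    Membership tests may yield false positives but never false negatives."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for read in ["ACGTACGT", "TTGACCAA"]:
    bf.add(read)
assert "ACGTACGT" in bf      # stored reads are always found
```

The compression win comes from storing only the bit array rather than the reads; the cascade of filters described above exists precisely to mop up the false positives a single filter admits.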

  15. A zero-error operational video data compression system

    NASA Technical Reports Server (NTRS)

    Kutz, R. L.

    1973-01-01

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.

  16. Internal combustion engine with compressed air collection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P.W.

    1988-08-23

    This patent describes an internal combustion engine comprising cylinders respectively including a pressure port, pistons respectively movable in the cylinders through respective compression strokes, fuel injectors respectively connected to the cylinders and operative to supply, from a fuel source to the respective cylinders, a metered quantity of fuel conveyed by compressed gas in response to fuel injector operation during the compression strokes of the respective cylinders, a storage tank for accumulating and storing compressed gas, means for selectively connecting the pressure ports to the storage tank only during the compression strokes of the respective cylinders, and duct means connecting the storage tank to the fuel injectors for supplying the fuel injectors with compressed gas in response to fuel injector operation.

  17. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737

  18. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  19. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
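The compress-by-thresholding pipeline on which the distortion rate is measured can be illustrated with a one-level Haar transform, i.e., a fixed classic wavelet of exactly the kind the paper's optimized wavelets are meant to outperform; the signal values are made-up illustrations:

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform: averages + details."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def compress(x, threshold):
    """Zero out small detail coefficients, then reconstruct (lossy)."""
    a, d = haar_forward(x)
    d = [c if abs(c) >= threshold else 0.0 for c in d]
    return haar_inverse(a, d)

x = [4.0, 4.1, 8.0, 7.9, 1.0, 1.05, 3.0, 3.0]
y = compress(x, 0.5)
rms = (sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)) ** 0.5  # ≈ 0.0375
```

The RMS error computed at the end plays the role of the distortion rate: a wavelet better matched to the signal concentrates energy in fewer coefficients, so more details can be zeroed at the same distortion.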

  20. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Rice, C Keith; Abdelaziz, Omar

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold-climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was experimentally characterized using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold-climate heat pump are addressed at both the system and component levels.

  1. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
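The quantization step that does the entropy reduction can be sketched as uniform scalar quantization of transformed coefficients; the block values below are made-up illustrations, not JPEG's actual quantization tables:

```python
def quantize(coeffs, q):
    """Uniform scalar quantization: map each transform coefficient to an
    integer index. This lossy step is what reduces the data entropy."""
    return [round(c / q) for c in coeffs]

def dequantize(indices, q):
    """Reconstruct approximate coefficients from the indices."""
    return [i * q for i in indices]

block = [52.6, -12.3, 3.4, 0.7, -0.2]   # hypothetical transformed block
idx = quantize(block, 4.0)               # [13, -3, 1, 0, 0]
rec = dequantize(idx, 4.0)               # [52.0, -12.0, 4.0, 0.0, 0.0]
```

Small coefficients collapse to zero and the surviving indices cluster near zero, which is exactly the skewed distribution the entropy coder then exploits; the quantization distortion analyzed in the paper is the difference between `block` and `rec`.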

  2. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  3. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  4. Ultrahigh Pressure Dynamic Compression

    NASA Astrophysics Data System (ADS)

    Duffy, T. S.

    2017-12-01

    Laser-based dynamic compression provides a new opportunity to study the lattice structure and other properties of geological materials to ultrahigh pressure conditions ranging from 100 - 1000 GPa (1 TPa) and beyond. Such studies have fundamental applications to understanding the Earth's core as well as the interior structure of super-Earths and giant planets. This talk will review recent dynamic compression experiments using high-powered lasers on materials including Fe-Si, MgO, and SiC. Experiments were conducted at the Omega laser (University of Rochester) and the Linac Coherent Light Source (LCLS, Stanford). At Omega, laser drives as large as 2 kJ are applied over 10 ns to samples that are 50 microns thick. At peak compression, the sample is probed with quasi-monochromatic X-rays from a laser-plasma source and diffraction is recorded on image plates. At LCLS, shock waves are driven into the sample using a 40-J laser with a 10-ns pulse. The sample is probed with X-rays from the LCLS free electron laser providing 10^12 photons in a monochromatic pulse near 10 keV energy. Diffraction is recorded using pixel array detectors. By varying the delay between the laser and the X-ray beam, the sample can be probed at various times relative to the shock wave transiting the sample. By controlling the shape and duration of the incident laser pulse, either shock or ramp (shockless) loading can be produced. Ramp compression produces less heating than shock compression, allowing samples to be probed to ultrahigh pressures without melting. Results for iron alloys, oxides, and carbides provide new constraints on equations of state and phase transitions that are relevant to the interior structure of large, extrasolar terrestrial-type planets.

  5. Texture Studies and Compression Behaviour of Apple Flesh

    NASA Astrophysics Data System (ADS)

    James, Bryony; Fonseca, Celia

    Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose-designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.

  6. Experimental Investigation of Piston Heat Transfer in a Light Duty Engine Under Conventional Diesel, Homogeneous Charge Compression Ignition, and Reactivity Controlled Compression Ignition Combustion Regimes

    DTIC Science & Technology

    2014-01-15

    in a Light Duty Engine Under Conventional Diesel, Homogeneous Charge Compression Ignition, and Reactivity Controlled Compression Ignition ... Conventional Diesel (CDC), Homogeneous Charge Compression Ignition (HCCI), and Reactivity Controlled Compression Ignition (RCCI) combustion ... (LTC) regimes, including reactivity controlled compression ignition (RCCI), partially premixed combustion (PPC), and homogenous charge compression

  7. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of [(±θ/±θ)]_6s, where θ, the off-axis angle, ranged from 0° to 90°. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered as an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes was used. It was found that for 0° ≤ θ ≤ 15°, failure was most likely due to fiber compression. For 15° < θ ≤ 35°, failure was most likely due to inplane transverse tension. For 35° < θ ≤ 70°, failure was most likely due to inplane shear. For θ > 70°, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single

  8. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  9. Fuels for high-compression engines

    NASA Technical Reports Server (NTRS)

    Sparrow, Stanwood W

    1926-01-01

    From theoretical considerations one would expect an increase in power and thermal efficiency to result from increasing the compression ratio of an internal combustion engine. In reality it is upon the expansion ratio that the power and thermal efficiency depend, but since in conventional engines this is equal to the compression ratio, it is generally understood that a change in one ratio is accompanied by an equal change in the other. Tests over a wide range of compression ratios (extending to ratios as high as 14.1) have shown that ordinarily an increase in power and thermal efficiency is obtained as expected provided serious detonation or preignition does not result from the increase in ratio.
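
    The dependence of efficiency on the compression (equal to expansion) ratio described above can be illustrated with the air-standard Otto-cycle relation, a textbook idealization rather than a result from this report:

```python
# Ideal (air-standard) Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma).
# This is a standard idealized relation, not data from the 1926 NACA report;
# it shows why raising the compression ratio r raises thermal efficiency.
GAMMA = 1.4  # ratio of specific heats for air

def otto_efficiency(r: float, gamma: float = GAMMA) -> float:
    """Thermal efficiency of the ideal Otto cycle at compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

for r in (4, 7, 10, 14):
    print(f"r = {r:2d}  eta = {otto_efficiency(r):.3f}")
```

    In a real engine, detonation or preignition limits how far the ratio can be raised, which is the report's central concern.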

  10. Rectal perforation by compressed air.

    PubMed

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  11. Comparison of Open-Hole Compression Strength and Compression After Impact Strength on Carbon Fiber/Epoxy Laminates for the Ares I Composite Interstage

    NASA Technical Reports Server (NTRS)

    Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.

    2011-01-01

    Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.

  12. Existence and Stability of Compressible Current-Vortex Sheets in Three-Dimensional Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Wang, Ya-Guang

    2008-03-01

    estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of the Nash-Moser-Hörmander type to deal with the loss of derivative order at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.

  13. The capability of professional- and lay-rescuers to estimate the chest compression-depth target: a short, randomized experiment.

    PubMed

    van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip

    2015-04-01

    In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are hardly achieved. This study aims to investigate the effects of differing target depth instructions on compression depth performances of professional and lay-rescuers. 110 professional-rescuers and 110 lay-rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R on a paper sheet (given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional-rescuers estimated the distance according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional-rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay-rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay-rescuers yielded correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay-rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional-rescuers are able to yield AHA-R and ERC-R targets likewise. In lay-rescuers, AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to appropriately perform chest compressions. For teaching lay-rescuers, the AHA-R with no upper limit of compression depth might be preferable.

  14. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wideband spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required by conventional systems.
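
    The compressive-sensing principle behind such systems, recovering a sparse spectrum from far fewer modulated measurements than spectral channels, can be sketched with a generic random sensing matrix and iterative soft thresholding (ISTA). The matrix, sparsity level, and solver here are illustrative assumptions, not the paper's LC-retarder sensing model:

```python
import numpy as np

# Generic compressed-sensing sketch (illustrative; not the MUSI optics):
# a k-sparse spectrum with n channels is recovered from m << n randomly
# modulated measurements via ISTA.
rng = np.random.default_rng(3)
n, m, k = 128, 48, 4                     # channels, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random spectral modulations
y = A @ x_true                                   # compressive measurements

# ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(5000):
    g = x + A.T @ (y - A @ x) / L        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("recovery error:", float(np.linalg.norm(x - x_true)))
```

    With 48 measurements of a 4-sparse, 128-channel spectrum, the support and amplitudes are recovered to within the small l1 bias.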

  15. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wideband spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required by conventional systems.

  16. Stokes Profile Compression Applied to VSM Data

    NASA Astrophysics Data System (ADS)

    Toussaint, W. A.; Henney, C. J.; Harvey, J. W.

    2012-02-01

    The practical details of applying the Expansion in Hermite Functions (EHF) method to compression of full-disk full-Stokes solar spectroscopic data from the SOLIS/VSM instrument are discussed in this paper. The algorithm developed and discussed here preserves the 630.15 and 630.25 nm Fe I lines, along with the local continuum and telluric lines. This compression greatly reduces the amount of space required to store these data sets while maintaining the quality of the data, allowing these observations to be archived and made publicly available with limited bandwidth. Applying EHF to the full-Stokes profiles and saving the coefficient files with Rice compression reduces the disk space required to store these observations by a factor of 20, while maintaining the quality of the data and with a total compression time only 35% slower than the standard gzip (GNU zip) compression.
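
    The kind of reduction an expansion in Hermite functions exploits can be sketched on a toy Gaussian line profile (synthetic data, not VSM observations): a 200-sample profile collapses to a dozen coefficients.

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Illustrative sketch of expansion in Hermite functions. The line profile and
# term count are assumptions for demonstration, not the EHF implementation.
def hermite_function(n, x):
    """Normalized Hermite function: H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    return hermval(x, coef) * np.exp(-x * x / 2.0) / sqrt(2.0 ** n * factorial(n) * sqrt(pi))

x = np.linspace(-5.0, 5.0, 200)
depth = 0.8 * np.exp(-x ** 2)            # toy line depression below continuum

n_terms = 12
basis = np.column_stack([hermite_function(n, x) for n in range(n_terms)])
coeffs, *_ = np.linalg.lstsq(basis, depth, rcond=None)   # fit coefficients
recon = basis @ coeffs

print("samples:", x.size, "-> coefficients:", n_terms)
print("max abs reconstruction error:", float(np.max(np.abs(recon - depth))))
```

    Storing 12 coefficients instead of 200 samples, then entropy coding the coefficient files (Rice compression in the paper), is where the factor-of-20 saving comes from.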

  17. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
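
    The compress/expand steps can be sketched with a plain FFT standing in for the patented log Gabor transform (an assumed simplification): only phase values and log-magnitude values above a threshold are retained, then the inverse transform reconstructs the time series.

```python
import numpy as np

# Illustrative sketch of threshold-based log-magnitude/phase compression.
# A plain FFT replaces the log Gabor transform of the patent; signal,
# threshold, and noise level are assumptions for demonstration.
rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
signal = (np.sin(2 * np.pi * 32 * t / n)
          + 0.5 * np.sin(2 * np.pi * 112 * t / n)
          + 0.01 * rng.standard_normal(n))

spec = np.fft.rfft(signal)
logmag = np.log(np.abs(spec) + 1e-12)    # logarithmic magnitude
phase = np.angle(spec)

keep = logmag > np.log(10.0)             # transmit only strong components
kept = np.zeros_like(spec)
kept[keep] = np.exp(logmag[keep]) * np.exp(1j * phase[keep])
recon = np.fft.irfft(kept, n=n)          # "expansion" back to time series

print("kept", int(keep.sum()), "of", keep.size, "coefficients")
print("rms error:", float(np.sqrt(np.mean((recon - signal) ** 2))))
```

    Only the two strong spectral components survive the threshold, yet the reconstructed series matches the original to roughly the noise level.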

  18. End-to-end communication test on variable length packet structures utilizing AOS testbed

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Sank, V.; Fong, Wai; Miko, J.; Powers, M.; Folk, John; Conaway, B.; Michael, K.; Yeh, Pen-Shu

    1994-01-01

    This paper describes a communication test which successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. These compressed images were first formatted into variable length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC), via a fiber optic link, where data was then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) was transferred back to the AOST, where the data was captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, and the communication link characteristics and test results.

  19. Machine compliance in compression tests

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro; Ivens, Jan; Lomov, Stepan V.

    2018-05-01

    The compression behavior of a material cannot be determined accurately unless machine compliance is accounted for prior to the measurements. This work discusses machine compliance during a compressibility test with fiberglass fabrics. The thickness variation was measured during loading and unloading cycles with a relaxation stage of 30 minutes between them. The measurements were performed using an indirect technique based on the comparison between the displacement in a free compression cycle (no sample) and the displacement with a sample. For the free test, no machine relaxation was observed during the relaxation stage. Whether relaxation is considered or not, the characteristic curves for a free compression cycle can be overlapped precisely at the majority of the points. For the compression test with a sample, a non-physical thickness decrease of about 30 µm was observed during the relaxation stage, which can be explained by the fabric relaxing more than the machine. Besides the technique normally used, a second technique was applied in which the thickness is held constant during relaxation: the machine displacement without a sample is subtracted from the machine displacement with a sample, with the compliance correction imposed as constant. If imposed as constant, the correction remains constant during the relaxation stage and decreases suddenly after it; if calculated continuously, it decreases gradually during the relaxation stage. Independently of the technique used, the final result remains unchanged. The uncertainty introduced by this imprecision is about ±15 µm.
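
    The indirect compliance correction amounts to subtracting the machine's own deflection, measured in a free run at the same load, from the displacement recorded with the sample. A minimal sketch with illustrative numbers (not the paper's data):

```python
# Sketch of the indirect machine-compliance correction: the free-run (no
# sample) displacement at a given load is the machine's own deflection and
# is subtracted from the displacement recorded with the sample in place.
def corrected_thickness(initial_thickness_mm, disp_with_sample_mm, disp_free_mm):
    """True sample thickness once machine compliance is removed."""
    sample_deformation = disp_with_sample_mm - disp_free_mm
    return initial_thickness_mm - sample_deformation

# Hypothetical example: at the same load, the free run shows 0.12 mm of
# machine deflection, so only 0.73 mm of the 0.85 mm is real deformation.
t = corrected_thickness(initial_thickness_mm=3.00,
                        disp_with_sample_mm=0.85,
                        disp_free_mm=0.12)
print(f"{t:.2f} mm")   # prints "2.27 mm"
```

    Holding this correction constant during the relaxation stage, versus recomputing it continuously, is exactly the choice between the paper's two techniques.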

  20. Cardiovascular causes of airway compression.

    PubMed

    Kussman, Barry D; Geva, Tal; McGowan, Francis X

    2004-01-01

    Compression of the paediatric airway is a relatively common and often unrecognized complication of congenital cardiac and aortic arch anomalies. Airway obstruction may be the result of an anomalous relationship between the tracheobronchial tree and vascular structures (producing a vascular ring) or the result of extrinsic compression caused by dilated pulmonary arteries, left atrial enlargement, massive cardiomegaly, or intraluminal bronchial obstruction. A high index of suspicion of mechanical airway compression should be maintained in infants and children with recurrent respiratory difficulties, stridor, wheezing, dysphagia, or apnoea unexplained by other causes. Prompt diagnosis is required to avoid death and minimize airway damage. In addition to plain chest radiography and echocardiography, diagnostic investigations may consist of barium oesophagography, magnetic resonance imaging (MRI), computed tomography, cardiac catheterization and bronchoscopy. The most important recent advance is MRI, which can produce high quality three-dimensional reconstruction of all anatomic elements allowing for precise anatomic delineation and improved surgical planning. Anaesthetic technique will depend on the type of vascular ring and the presence of any congenital heart disease or intrinsic lesions of the tracheobronchial tree. Vascular rings may be repaired through a conventional posterolateral thoracotomy, or utilizing video-assisted thoracoscopic surgery (VATS) or robotic endoscopic surgery. Persistent airway obstruction following surgical repair may be due to residual compression, secondary airway wall instability (malacia), or intrinsic lesions of the airway. Simultaneous repair of cardiac defects and vascular tracheobronchial compression carries a higher risk of morbidity and mortality.

  1. H.264/AVC Video Compression on Smartphones

    NASA Astrophysics Data System (ADS)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  2. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
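
    The benefit of the second approach, decorrelating the spectral dimension before coding, can be sketched on a toy cube, with a PCA rotation in place of a wavelet transform and zlib as a stand-in codec (all hypothetical choices for illustration):

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)

# Toy hyperspectral cube (synthetic): 16 highly correlated spectral bands.
base = rng.integers(0, 200, size=(64, 64)).astype(float)
cube = np.stack([base * (1.0 + 0.01 * b) + rng.normal(0, 1, base.shape)
                 for b in range(16)])          # shape (bands, y, x)

def packed_size(a):
    """Quantize to int16 and zlib-compress; a crude stand-in codec."""
    return len(zlib.compress(np.round(a).astype(np.int16).tobytes(), 9))

# Approach 1: compress each band independently (ignores spectral redundancy).
size_indep = sum(packed_size(band) for band in cube)

# Approach 2: decorrelate along the spectral axis first (PCA rotation),
# then compress the transformed "bands".
flat = cube.reshape(16, -1)
mean = flat.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
decorr = u.T @ (flat - mean)                   # principal-component bands
size_decorr = sum(packed_size(comp) for comp in decorr)

print("independent bands:", size_indep, "bytes  decorrelated:", size_decorr, "bytes")
```

    After the rotation, almost all the energy sits in the first component and the remaining components code to almost nothing, which is why spectral decorrelation pays off.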

  3. Design, Optimization, and Evaluation of Al-2139 Compression Panel with Integral T-Stiffeners

    NASA Technical Reports Server (NTRS)

    Mulani, Sameer B.; Havens, David; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2012-01-01

    A T-stiffened panel was designed and optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis and design tool named EBF3PanelOpt. The panel was designed for a compression loading configuration, a realistic load case for a typical aircraft skin-stiffened panel. The panel was integrally machined from 2139 aluminum alloy plate and was tested in compression. The panel was loaded beyond buckling and strains and out-of-plane displacements were extracted from 36 strain gages and one linear variable displacement transducer. A digital photogrammetric system was used to obtain full field displacements and strains on the smooth (unstiffened) side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high-fidelity nonlinear finite element analysis.

  4. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the growing use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
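
    The hybrid principle, a lossy layer plus a losslessly coded residual that together reconstruct the image bit-exactly, can be sketched with coarse quantization standing in for the paper's embedded wavelet coder (synthetic data, illustrative codec):

```python
import zlib
import numpy as np

# Sketch of the hybrid lossless idea: "lossy layer + residual = original",
# exactly. Coarse quantization replaces the wavelet coder of the paper.
rng = np.random.default_rng(6)
yy, xx = np.mgrid[0:128, 0:128]
img = (1024 + 6 * yy + 3 * xx + rng.integers(-8, 9, (128, 128))).astype(np.int16)

step = 32
coarse = (img // step) * step            # "lossy" approximation
residual = img - coarse                  # small values in [0, step)

recon = coarse + residual                # bit-exact reconstruction
print("bit-exact:", bool(np.array_equal(recon, img)))

coarse_bytes = len(zlib.compress((img // step).astype(np.int16).tobytes(), 9))
resid_bytes = len(zlib.compress(residual.astype(np.int16).tobytes(), 9))
print("coarse bytes:", coarse_bytes, "residual bytes:", resid_bytes)
```

    Because the residual is computed against the decoded lossy layer, the decoder can always add the two streams back to recover the original exactly, which is what makes the scheme lossless overall.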

  5. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR.
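
    The truncated-SVD idea behind such schemes can be sketched directly. Stacking the R, G, B planes into one real matrix is one plausible way to form C (the paper does not spell out its construction here), and the image is synthetic:

```python
import numpy as np

# Sketch of real-SVD color image compression: keep only the r largest
# singular triplets of a stacked RGB matrix. The stacking choice and the
# synthetic image are assumptions for illustration.
h, w, r = 64, 64, 12

yy, xx = np.mgrid[0:h, 0:w] / h          # smooth coordinates in [0, 1)
img = np.stack([np.sin(3 * yy) * np.cos(2 * xx),   # "R" plane
                np.cos(4 * yy * xx),               # "G" plane
                yy * xx], axis=2)                  # "B" plane

C = img.transpose(2, 0, 1).reshape(3 * h, w)       # stack channels into one real matrix
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_r = (U[:, :r] * s[:r]) @ Vt[:r]                  # rank-r approximation

stored = r * (C.shape[0] + C.shape[1] + 1)         # r triplets (u, sigma, v)
print("values stored:", stored, "of", C.size)
print("max reconstruction error:", float(np.abs(C_r - C).max()))
```

    Smooth, natural-looking channels have rapidly decaying singular values, so a small r reproduces the image closely while storing a fraction of the original values; the compression ratio (CR) is the quantity being traded against PSNR in the paper.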

  6. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR. PMID:28257451

  7. Slow, Fast and Mixed Compressible Modes near the Magnetopause

    NASA Astrophysics Data System (ADS)

    Scudder, J. D.; Maynard, N. C.; Burke, W. J.

    2003-12-01

    We motivate and illustrate a new technique to certify time variations, observed in the spacecraft frame of reference, as compressible slow or fast magnetosonic waves. Like the Walén test for Alfvén waves, our method for identifying compressible modes requires no Galilean transformation. Unlike the Walén test, we use covariance techniques with magnetic field time series to select three special projections of B(t). The projections of magnetic fluctuations are associated with three, usually non-orthogonal, wavevectors that, in principle, contribute to the locally sampled density fluctuations. Wavevector directions k̂(CoV) are derived from eigenvectors of covariance matrices and mean field directions B0. Linear theory for compressible modes indicates that these projections are proportional to the density fluctuations. Regression techniques are then applied to observed density and magnetic field profiles to specify coefficients of proportionality. Signs of the proportionality constants, connecting the three projections of δB and δρ, determine whether the compressional modes are of the fast (+) or slow (-) type. Within a polytropic-closure framework, the proportionality between magnetic and density fluctuations can be computed by relating k̂, the polytropic index γ, and the plasma β. Our certification program validates the direct interpretation of the proportionality constants by comparing their best-fit and error values with the directions of wavevectors required by the dispersion relation, k̂(Disp), inferred from experimental measurements of β and γ. Final certification requires that, for each mode retained in the correlation, the scalar product of the wavevectors determined through covariance and dispersion-relation analyses is approximately unity: k̂(CoV)·k̂(Disp) ≈ 1. This quality check is the compressible-mode analogue of the slope-one test in the Walén test expressed in Elsässer [1950] variables. By products of completed

  8. Adaptive Integration of the Compressed Algorithm of CS and NPC for the ECG Signal Compressed Algorithm in VLSI Implementation

    PubMed Central

    Tseng, Yun-Hua; Lu, Chih-Wen

    2017-01-01

Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results demonstrated notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company’s (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and gate counts of 2.69 K. PMID:28991216
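
As background for the CS side of the pipeline, here is a minimal, self-contained compressed-sensing round trip. It is illustrative only: a generic sparse test vector and orthogonal matching pursuit stand in for the paper's NPC integration and an actual ECG record, and all dimensions are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 128, 80, 5                  # signal length, measurements, sparsity

# K-sparse test vector (a crude stand-in for a sparsified ECG segment)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.uniform(1.0, 3.0, size=K)

# Random Gaussian sensing: y = Phi @ x with M << N measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        sub = Phi[:, idx]
        # Least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
err = float(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With M = 80 measurements for a length-128 signal, the nominal compression ratio is N/M ≈ 1.6 while the K-sparse signal is recovered essentially exactly.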

  9. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
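
The scaled-integer quantization idea can be illustrated in a few lines. In this sketch the image is synthetic, zlib stands in for the Rice coder, and the q = sigma/4 step is an illustrative parameter choice; the point is that discarding sub-noise mantissa bits makes the data compressible again:

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "astronomical" image: smooth sky background plus read noise
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
noise_sigma = 1.0
img = (100.0 + 0.01 * (xx + yy) + noise_sigma * rng.standard_normal((ny, nx)))
img = img.astype(np.float32)

raw = img.tobytes()
gz_float = zlib.compress(raw, 6)      # lossless deflate on the raw floats

# Quantize to scaled integers with a step of sigma/4, discarding the
# sub-noise mantissa bits (step choice is an assumption for illustration)
q = noise_sigma / 4.0
quant = np.round(img / q).astype(np.int32)
gz_int = zlib.compress(quant.tobytes(), 6)

ratio_float = len(raw) / len(gz_float)
ratio_int = len(raw) / len(gz_int)
max_err = float(np.max(np.abs(quant * q - img)))   # bounded by ~q/2
```

The float image barely compresses because its noise-filled mantissa bits are incompressible, while the integer image compresses several-fold with the per-pixel error bounded by half the quantization step.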

  10. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
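
The flag-buffer idea is easy to demonstrate with a toy LZSS codec. The format below (2-byte offset, 1-byte length, 4-byte headers) is an illustrative choice, not the patent's exact encoding; the point is that flag bits are accumulated separately and appended after the token stream, so literals and pointers can be read with plain byte and word accesses:

```python
WINDOW, MIN_LEN, MAX_LEN = 4096, 3, 18

def compress(data: bytes) -> bytes:
    tokens = bytearray()
    flags = []
    i = 0
    while i < len(data):
        # Naive longest-match search in the sliding window
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):
            l = 0
            while (l < MAX_LEN and i + l < len(data)
                   and data[j + l] == data[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= MIN_LEN:
            flags.append(1)                      # pointer: 2-byte offset + length
            tokens += best_off.to_bytes(2, "little")
            tokens.append(best_len)
            i += best_len
        else:
            flags.append(0)                      # literal: raw byte
            tokens.append(data[i])
            i += 1
    # Pack the flag bits and append them AFTER the token stream
    flag_bytes = bytearray((len(flags) + 7) // 8)
    for k, f in enumerate(flags):
        flag_bytes[k // 8] |= f << (k % 8)
    header = len(tokens).to_bytes(4, "little") + len(flags).to_bytes(4, "little")
    return header + bytes(tokens) + bytes(flag_bytes)

def decompress(blob: bytes) -> bytes:
    tok_len = int.from_bytes(blob[0:4], "little")
    n_flags = int.from_bytes(blob[4:8], "little")
    tokens = blob[8:8 + tok_len]
    flag_bytes = blob[8 + tok_len:]
    out = bytearray()
    p = 0
    for k in range(n_flags):
        if (flag_bytes[k // 8] >> (k % 8)) & 1:  # pointer token
            off = int.from_bytes(tokens[p:p + 2], "little")
            length = tokens[p + 2]
            p += 3
            for _ in range(length):              # byte-wise copy handles overlap
                out.append(out[-off])
        else:                                    # literal token
            out.append(tokens[p])
            p += 1
    return bytes(out)

data = b"abracadabra, abracadabra!" * 20
packed = compress(data)
restored = decompress(packed)
```

Because the flags live in their own trailing buffer, the decompressor's inner loop touches literals and pointers with byte/word reads only, which is exactly the speed argument the patent makes.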

  11. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.

  12. Breaking of rod-shaped model material during compression

    NASA Astrophysics Data System (ADS)

    Kulaviak, Lukas; Penkavova, Vera; Ruzicka, Marek; Puncochar, Miroslav; Zamostny, Petr; Grof, Zdenek; Stepanek, Frantisek; Schongut, Marek; Havlica, Jaromir

    2017-06-01

    The breakage of a model anisometric dry granular material caused by uniaxial compression was studied. The bed of uniform rod-like pasta particles (8 mm long, aspect ratio 1:8) was compressed (Gamlen Tablet Press) and their size distribution was measured after each run (Dynamic Image Analysing). The compression dynamics was recorded and the effect of several parameters was tested (rate of compression, volume of granular bed, pressure magnitude and mode of application). Besides the experiments, numerical modelling of the compressed breakable material was performed as well, employing the DEM approach (Discrete Element Method). The comparison between the data and the model looks promising.

  13. Dynamic Time Expansion and Compression Using Nonlinear Waveguides

    DOEpatents

    Findikoglu, Alp T.; Hahn, Sangkoo F.; Jia, Quanxi

    2004-06-22

    Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide and an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.

  14. Dynamic time expansion and compression using nonlinear waveguides

    DOEpatents

    Findikoglu, Alp T [Los Alamos, NM; Hahn, Sangkoo F [Los Alamos, NM; Jia, Quanxi [Los Alamos, NM

    2004-06-22

    Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide and an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.

  15. Sara Lee: Improved Compressed Air System Increases Efficiency and Saves Energy at an Industrial Bakery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This case study was prepared for the Industrial Technologies Program of the U.S. Department of Energy (DOE); it describes the energy and cost savings resulting from improving the compressed air system of a large Sara Lee bakery in Sacramento, California. The compressed air system supports many operations of the bread-making machines, and it had been performing poorly. A specialist from Draw Professional Services, a DOE Allied Partner, evaluated the system, and his suggestions included repairing a controller, fixing leaks, and replacing a compressor with a new one fitted with an energy-saving variable-speed drive. As a result, the bakery has reduced its energy use by 471,000 kilowatt-hours annually and is saving $50,000 per year in operating and maintenance costs.

  16. NRGC: a novel referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but also can be done in a laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting-edge computing technology makes it possible to build genomic sequences from the billions of reads at minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable to compress biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Compression mechanisms in the plasma focus pinch

    NASA Astrophysics Data System (ADS)

    Lee, S.; Saw, S. H.; Ali, Jalil

    2017-03-01

    The compression of the plasma focus pinch is a dynamic process, governed by the electrodynamics of pinch elongation and opposed by the negative rate of change of current dI/dt associated with the current dip. The compressibility of the plasma is influenced by the thermodynamics, primarily the specific heat ratio, with greater compressibility as the specific heat ratio γ reduces with increasing degrees of freedom f of the plasma ensemble due to ionization energy for the higher-Z (atomic number) gases. The most drastic compression occurs when the emitted radiation of a high-Z plasma dominates the dynamics, leading in extreme cases to radiative collapse, which is terminated only when the compressed density is sufficiently high for the inevitable self-absorption of radiation to occur. We discuss the central pinch equation, which contains the basic electrodynamic terms with built-in thermodynamic factors and a dQ/dt term, with Q made up of a Joule heat component and absorption-corrected radiative terms. Deuterium is considered as a thermodynamic reference (fully ionized perfect gas with f = 3) as well as a zero-radiation reference (bremsstrahlung only, with radiation power negligible compared with electrodynamic power). Higher-Z gases are then considered, and regimes of thermodynamic enhancement of compression are systematically identified, as are regimes of radiation enhancement. The code, which incorporates all these effects, is used to compute pinch radius ratios in various gases as a measure of compression. Systematic numerical experiments reveal increasing severity in the radiation enhancement of compression as atomic number increases. The work progresses towards a scaling law for radiative collapse and a generalized specific heat ratio incorporating radiation.

  18. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x-ray registration and differencing which results in more efficient compression. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtaining a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
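
A minimal numerical sketch of the register-then-difference step, under stated assumptions: the data are synthetic, registration is taken as already done, and zlib stands in for the conventional compressor:

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a registered x-ray pair: the subject differs from the
# reference only in a small region (all values are illustrative)
ref = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
subject = ref.copy()
subject[100:120, 100:140] = 200          # the localized "finding"

# Differencing the registered pair leaves a mostly-zero image that a
# conventional compressor handles far better than the raw subject
diff = subject.astype(np.int16) - ref.astype(np.int16)
comp_diff = zlib.compress(diff.tobytes(), 6)
comp_direct = zlib.compress(subject.tobytes(), 6)

# Reconstruction: adding the differenced image back to the reference
# restores the subject exactly
restored = (ref.astype(np.int16) + diff).astype(np.uint8)
```

The int16 difference image doubles the raw byte count yet compresses to a small fraction of the directly compressed subject, which is the effect the disclosed process exploits.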

  19. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
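
The rate-versus-distortion optimization over candidate quantization values can be sketched for a single coefficient. This toy version uses Laplacian samples standing in for the gathered DCT statistics, an entropy estimate for the rate array, and a hypothetical Lagrange multiplier `lam`; it uses a direct search instead of the patent's dynamic programming, but exercises the same distortion and rate arrays:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for gathered DCT statistics: samples of one AC coefficient,
# drawn from a Laplacian (scale chosen arbitrarily for illustration)
coeff = rng.laplace(0.0, 8.0, size=10000)

qs = np.arange(1, 65)                 # candidate quantization values
rate = np.empty(len(qs))              # entropy of quantized symbols, bits
dist = np.empty(len(qs))              # mean squared reconstruction error

for i, q in enumerate(qs):
    sym = np.round(coeff / q)
    dist[i] = float(np.mean((coeff - sym * q) ** 2))
    _, counts = np.unique(sym, return_counts=True)
    p = counts / counts.sum()
    rate[i] = float(-np.sum(p * np.log2(p)))

# Lagrangian trade-off: minimize distortion + lam * rate
lam = 20.0                            # hypothetical rate weight
q_opt = int(qs[np.argmin(dist + lam * rate)])
```

Fine quantizers give low distortion but high rate, coarse ones the reverse; the Lagrangian pick lands at an interior value, the per-coefficient analogue of one entry in a rate-distortion-optimal quantization table.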

  20. Some Results Relevant to Statistical Closures for Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    For weakly compressible turbulent fluctuations there exists a small parameter, the square of the fluctuating Mach number, that allows an investigation using a perturbative treatment. The consequences of such a perturbative analysis in three different subject areas are described: 1) initial conditions in direct numerical simulations, 2) an explanation for the oscillations seen in the compressible pressure in the direct numerical simulations of homogeneous shear, and 3) turbulence closures accounting for the compressibility of velocity fluctuations. Initial conditions consistent with small turbulent Mach number asymptotics are constructed. The importance of consistent initial conditions in the direct numerical simulation of compressible turbulence is dramatically illustrated: spurious oscillations associated with inconsistent initial conditions are avoided, and the fluctuating dilatational field is some two orders of magnitude smaller for a compressible isotropic turbulence. For the isotropic decay it is shown that the choice of initial conditions can change the scaling law for the compressible dissipation. A two-time expansion of the Navier-Stokes equations is used to distinguish compressible acoustic and compressible advective modes. A simple conceptual model for weakly compressible turbulence, a forced linear oscillator, is described. It is shown that the evolution equations for the compressible portions of turbulence can be understood as a forced wave equation with refraction. Acoustic modes of the flow can be amplified by refraction and are able to manifest themselves in large fluctuations of the compressible pressure.

  1. Induced compression wood formation in Douglas fir (Pseudotsuga menziesii) in microgravity

    NASA Technical Reports Server (NTRS)

    Kwon, M.; Bedgar, D. L.; Piastuch, W.; Davin, L. B.; Lewis, N. G.

    2001-01-01

    In the microgravity environment of the Space Shuttle Columbia (Life and Microgravity Mission STS-78), 1-year-old Douglas fir and loblolly pine plants were grown in a NASA plant growth facility. Several plants were harnessed (at 45 degrees) to establish whether compression wood biosynthesis, involving altered cellulose and lignin deposition and cell wall structure, would occur under those conditions of induced mechanical stress. Selected plants were harnessed at day 2 in orbit, with stem sections of specific plants harvested and fixed for subsequent microscopic analyses on days 8, 10 and 15. At the end of the total space mission period (17 days), the remaining healthy harnessed plants and their vertical (upright) controls were harvested and fixed on earth. All harnessed (at 45 degrees) plant specimens, whether grown at 1 g or in microgravity, formed compression wood. Moreover, not only the cambial cells but also the developing tracheid cells underwent significant morphological changes. This indicated that the developing tracheids, from the primary cell wall expansion stage to the fully lignified maturation stage, are involved in the perception and transduction of the stimuli stipulating the need for alteration of cell wall architecture. It is thus apparent that, even in a microgravity environment, woody plants can make appropriate corrections to compensate for stress gradients introduced by mechanical bending, thereby enabling compression wood to be formed. The evolutionary implications of these findings are discussed in terms of "variability" in cell wall biosynthesis.

  2. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
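
The construction described above can be written compactly; the following definitions use our notation (not necessarily the paper's symbols) for the standard density-weighted construction the abstract describes:

```latex
% Fine-grained PDF (FG-PDF) of the composition vector \phi(x,t):
f(\psi;\,x,t) \;=\; \delta\bigl(\psi - \phi(x,t)\bigr)

% APDF: ensemble average of the FG-PDF with mass-density weighting:
\mathcal{P}(\psi;\,x,t) \;=\; \bigl\langle \rho(x,t)\, f(\psi;\,x,t) \bigr\rangle

% Density-weighted (Favre) means then follow exactly:
\bar{\rho}(x,t) \;=\; \int \mathcal{P}(\psi;\,x,t)\,\mathrm{d}\psi ,
\qquad
\widetilde{Q}(x,t) \;=\; \frac{1}{\bar{\rho}(x,t)} \int Q(\psi)\,\mathcal{P}(\psi;\,x,t)\,\mathrm{d}\psi
```

Integrating the density-weighted APDF over sample space recovers the mean density, and moments of it recover the Favre-averaged mean variables exactly, which is the "exact deduction" property the abstract claims.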

  3. Utility of a simple lighting device to improve chest compressions learning.

    PubMed

    González-Calvete, L; Barcala-Furelos, R; Moure-González, J D; Abelairas-Gómez, C; Rodríguez-Núñez, A

    2017-11-01

    The recommendations on cardiopulmonary resuscitation (CPR) emphasize the quality of the manoeuvres, especially chest compressions (CC). Audiovisual feedback devices could improve the quality of the CC during CPR. The aim of this study was to evaluate the usefulness of a simple lighting device as a visual aid during CPR on a mannequin. Twenty-two paediatricians who attended an accredited paediatric CPR course performed, in random order, 2 min of CPR on a mannequin without and with the help of a simple lighting device, which flashes at a frequency of 100 cycles per minute. The following CC variables were analyzed using a validated compression quality meter (CPRmeter®): depth, decompression, rate, CPR time and percentage of compressions. With the lighting device, participants increased average quality (60.23±54.50 vs. 79.24±9.80%; P=.005), percentage in target depth (48.86±42.67 vs. 72.95±20.25%; P=.036) and rate (35.82±37.54 vs. 67.09±31.95%; P=.024). A simple light device that flashes at the recommended frequency improves the quality of CC performed by paediatric residents on a mannequin. The usefulness of this CPR aid system should be assessed in real patients. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.

  4. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  5. Rectal perforation by compressed air

    PubMed Central

    2017-01-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed. PMID:28706893

  6. A near-wall two-equation model for compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Zhang, H. S.; So, R. M. C.; Speziale, C. G.; Lai, Y. G.

    1991-01-01

    A near-wall two-equation turbulence model of the K - epsilon type is developed for the description of high-speed compressible flows. The Favre-averaged equations of motion are solved in conjunction with modeled transport equations for the turbulent kinetic energy and solenoidal dissipation wherein a variable density extension of the asymptotically consistent near-wall model of So and co-workers is supplemented with new dilatational models. The resulting compressible two-equation model is tested in the supersonic flat plate boundary layer - with an adiabatic wall and with wall cooling - for Mach numbers as large as 10. Direct comparisons of the predictions of the new model with raw experimental data and with results from the K - omega model indicate that it performs well for a wide range of Mach numbers. The surprising finding is that the Morkovin hypothesis, where turbulent dilatational terms are neglected, works well at high Mach numbers, provided that the near wall model is asymptotically consistent. Instances where the model predictions deviate from the experiments appear to be attributable to the assumption of constant turbulent Prandtl number - a deficiency that will be addressed in a future paper.

  7. ZFP compression plugin (filter) for HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Mark C.

    H5Z-ZFP is a compression plugin (filter) for the HDF5 library based upon the ZFP-0.5.0 compression library. It supports 4- or 8-byte integer or floating-point HDF5 datasets of any dimension, partitioned in 1-, 2-, or 3-dimensional chunks. It supports ZFP's four fundamental modes of operation: rate, precision, accuracy, or expert. It is a lossy compression plugin.

  8. Sandia 25-meter compressed helium/air gun

    NASA Astrophysics Data System (ADS)

    Setchell, R. E.

    1982-04-01

    For nearly twenty years the Sandia 25-meter compressed gas gun has been an important tool for studying condensed materials subjected to transient shock compression. Major system modifications are now in progress to provide new control, instrumentation, and data acquisition capabilities. These features will ensure that the facility can continue as an effective means of investigating a variety of physical and chemical processes in shock-compressed solids.

  9. On Compression of a Heavy Compressible Layer of an Elastoplastic or Elastoviscoplastic Medium

    NASA Astrophysics Data System (ADS)

    Kovtanyuk, L. V.; Panchenko, G. L.

    2017-11-01

    The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressing loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented, the stresses, strains, rates of strain, and displacements are calculated, and the residual stresses and strains are found.

  10. Dissipative processes under the shock compression of glass

    NASA Astrophysics Data System (ADS)

    Savinykh, A. S.; Kanel, G. I.; Cherepanov, I. A.; Razorenov, S. V.

    2016-03-01

    New experimental data on the behavior of the K8 and TF1 glasses under shock-wave loading conditions are obtained. It is found that the propagation of shock waves is close to the self-similar one in the maximum compression stress range 4-12 GPa. Deviations from a general deformation diagram, which are related to viscous dissipation, take place when the final state of compression is approached. The parameter region in which failure waves form in glass is found not to be limited to the elastic compression stress range, as was thought earlier. The failure front velocity increases with the shock compression stress. Outside the region covered by a failure wave, the glasses demonstrate a high tensile dynamic strength (6-7 GPa) in the case of elastic compression, and this strength is still very high after transition through the elastic limit in a compression wave.

  11. Investigation of the Behavior of Hardening Masonry Exposed to Variable Stresses

    PubMed Central

    Šlivinskas, Tomas; Jonaitis, Bronius; Marčiukaitis, Jonas Gediminas

    2018-01-01

    This paper analyzes the behavior of masonry under variable loads during execution (construction stage). It specifies the creep coefficient for calcium silicate brick masonry, presenting the research data of masonry deformation under variable and constant long-term loads. The interaction of separate layers of composite material in masonry is introduced and the formulae for determining long-term deformations are offered. The research results of masonry’s compressive strength and deformation properties under variable and constant long-term loads are presented. These are then compared to calculated ones. According to the presented comparison, the calculated long-term deformations coincide quite well with those determined experimentally. PMID:29710802

  12. Investigation of the Behavior of Hardening Masonry Exposed to Variable Stresses.

    PubMed

    Šlivinskas, Tomas; Jonaitis, Bronius; Marčiukaitis, Jonas Gediminas; Zavalis, Robertas

    2018-04-28

    This paper analyzes the behavior of masonry under variable loads during execution (construction stage). It specifies the creep coefficient for calcium silicate brick masonry, presenting the research data of masonry deformation under variable and constant long-term loads. The interaction of separate layers of composite material in masonry is introduced and the formulae for determining long-term deformations are offered. The research results of masonry’s compressive strength and deformation properties under variable and constant long-term loads are presented. These are then compared to calculated ones. According to the presented comparison, the calculated long-term deformations coincide quite well with those determined experimentally.

  13. High-speed and high-ratio referential genome compression.

    PubMed

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than that of the best competing algorithm in its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
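
The 2-bit encoding scheme underlying HiRGC is straightforward to sketch. The A/C/G/T-to-0..3 mapping and the packing order below are our illustrative choices, and the greedy hash-table matching against a reference is not reproduced:

```python
# 2-bit packing of a nucleotide string: four bases per byte
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    out = bytearray((len(seq) + 3) // 4)
    for i, ch in enumerate(seq):
        out[i // 4] |= CODE[ch] << (2 * (i % 4))
    return bytes(out)

def unpack(blob: bytes, n: int) -> str:
    # n is needed because the final byte may be only partially used
    return "".join(BASE[(blob[i // 4] >> (2 * (i % 4))) & 3] for i in range(n))

seq = "ACGTACGGTTCA" * 100
packed = pack(seq)
```

Packing alone gives an exact 4:1 reduction over one-byte-per-base text; the large ratios reported above come from layering the referential greedy matching on top of this representation.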

  14. Quasi-one-dimensional compressible flow across face seals and narrow slots. 2: Computer program

    NASA Technical Reports Server (NTRS)

    Zuk, J.; Smith, P. J.

    1972-01-01

    A computer program is presented for compressible fluid flow with friction across face seals and through narrow slots. The computer program carries out a quasi-one-dimensional flow analysis which is valid for laminar and turbulent flows under both subsonic and choked flow conditions for parallel surfaces. The program is written in FORTRAN IV. The input and output variables are in either the International System of Units (SI) or the U.S. customary system.

  15. Fluffy dust forms icy planetesimals by static compression

    NASA Astrophysics Data System (ADS)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-09-01

    Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is key to planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10-3 g/cm3 during coagulation, which is more compact than is the case with collisional compression. Then, they are further compressed by self-gravity to 10-1 g/cm3 when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result provides a concrete initial condition for planetesimals in the later stages of planet formation.

  16. Nonlinear theory of magnetohydrodynamic flows of a compressible fluid in the shallow water approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimachkov, D. A., E-mail: klimchakovdmitry@gmail.com; Petrosyan, A. S., E-mail: apetrosy@iki.rssi.ru

    2016-09-15

    Shallow water magnetohydrodynamic (MHD) theory describing incompressible flows of plasma is generalized to the case of compressible flows. A system of MHD equations is obtained that describes the flow of a thin layer of compressible rotating plasma in a gravitational field in the shallow water approximation. The system of quasilinear hyperbolic equations obtained admits a complete simple wave analysis and a solution to the initial discontinuity decay problem in the simplest version of nonrotating flows. In the new equations, sound waves are filtered out, and the dependence of density on pressure on large scales is taken into account, which describes static compressibility phenomena. In the equations obtained, the mass conservation law is formulated for a variable that nontrivially depends on the shape of the lower boundary, the characteristic vertical scale of the flow, and the scale of heights at which the variation of density becomes significant. A simple wave theory is developed for the system of equations obtained. All self-similar discontinuous solutions and all continuous centered self-similar solutions of the system are obtained. The initial discontinuity decay problem is solved explicitly for compressible MHD equations in the shallow water approximation. It is shown that there exist five different configurations that provide a solution to the initial discontinuity decay problem. For each configuration, conditions are found that are necessary and sufficient for its implementation. Differences between incompressible and compressible cases are analyzed. In spite of the formal similarity between the solutions in the classical case of MHD flows of incompressible and compressible fluids, the nonlinear dynamics described by the solutions are essentially different due to the difference in the expressions for the squared propagation velocity of weak perturbations. In addition, the solutions obtained describe new physical phenomena related to the dependence

  17. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    NASA Astrophysics Data System (ADS)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
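The matched filter referenced above is the standard spectral detector. A minimal sketch follows, assuming a diagonal background covariance for simplicity (real detectors estimate the full covariance from the scene); all names are illustrative.

```python
# Minimal spectral matched-filter sketch. Assumes a diagonal background
# covariance (per-band variances) for simplicity -- an invented
# simplification, not the CS-MUSI processing chain.

def matched_filter_scores(pixels, target, mean, var):
    """Score each pixel spectrum x against a target signature t:
    score = (t-m)^T S^-1 (x-m) / (t-m)^T S^-1 (t-m), with S diagonal.
    A pure-background pixel scores ~0; a pixel equal to t scores ~1."""
    t = [target[b] - mean[b] for b in range(len(target))]
    denom = sum(t[b] * t[b] / var[b] for b in range(len(t)))
    scores = []
    for x in pixels:
        num = sum(t[b] * (x[b] - mean[b]) / var[b] for b in range(len(t)))
        scores.append(num / denom)
    return scores
```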

  18. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large set of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.

  19. Breast compression in mammography: how much is enough?

    PubMed

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  20. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies at slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies and the uncompressed AVI used instead of the DICOM format, were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, a different compression technique excelled on each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. Which compression format is best thus depends on the speed of the movie, because the compression algorithms differ; this is thought to reflect differences in inter-frame compression. Movie compression algorithms combine inter-frame compression and intra-frame compression. As each compression method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.

  1. Combined Industry, Space and Earth Science Data Compression Workshop

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B. (Editor); Renner, Robert L. (Editor)

    1996-01-01

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

  2. Free-beam soliton self-compression in air

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  3. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
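The isentropic-flow and normal-shock relations such a calculator evaluates are standard perfect-gas formulas. A minimal sketch (γ = 1.4 for air; this is not the NASA applications' code):

```python
# Standard textbook compressible-flow relations for a perfect gas.
# gamma = 1.4 corresponds to air.

GAMMA = 1.4

def isentropic_ratios(M):
    """Return (T0/T, p0/p) stagnation-to-static ratios at Mach number M."""
    t = 1.0 + 0.5 * (GAMMA - 1.0) * M * M
    return t, t ** (GAMMA / (GAMMA - 1.0))

def normal_shock_mach(M1):
    """Downstream Mach number across a normal shock with upstream M1 > 1."""
    num = 1.0 + 0.5 * (GAMMA - 1.0) * M1 * M1
    den = GAMMA * M1 * M1 - 0.5 * (GAMMA - 1.0)
    return (num / den) ** 0.5

def normal_shock_pressure_ratio(M1):
    """Static pressure ratio p2/p1 across a normal shock."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (M1 * M1 - 1.0)
```

For example, a Mach 2 normal shock gives p2/p1 = 4.5 and a downstream Mach number of about 0.577, the familiar tabulated values.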

  4. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
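The four-step CCR matching procedure can be sketched with a generic byte-level compressor standing in for JPEG (zlib here; the function names are invented and this is not the authors' code):

```python
# Sketch of compression-based matching: similar inputs compress better
# together than separately. zlib stands in for the paper's JPEG codec.

import zlib

def ccr(probe, gallery):
    """Composite compression ratio: how much smaller the mixed stream
    compresses relative to compressing probe and gallery separately.
    Higher values indicate more shared structure (more similarity)."""
    cp = len(zlib.compress(probe))
    cg = len(zlib.compress(gallery))
    cm = len(zlib.compress(probe + gallery))
    return (cp + cg) / cm

def match(probe, gallery_set):
    """Return the gallery key whose mixed compression is most redundant."""
    return max(gallery_set, key=lambda k: ccr(probe, gallery_set[k]))
```

This is the same intuition behind normalized compression distance: the largest CCR flags the gallery entry sharing the most redundancy with the probe.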

  5. Towards Natural Transition in Compressible Boundary Layers

    DTIC Science & Technology

    2016-06-29

    AFRL-AFOSR-CL-TR-2016-0011. Towards natural transition in compressible boundary layers. Final report (reporting period to 29-03-2016), grant FA9550-11-1-0354. Principal Investigator: Marcello A. Faraco de Medeiros (FUNDACAO PARA O INCREMENTO DA...); with Germán Andrés Gaviria... Distribution unlimited; 109 pages.

  6. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which polycomp is useful. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x , y , z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
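The core idea, replacing a smooth chunk by low-order fit coefficients whenever the reconstruction error stays under a user-set bound, can be sketched as follows (polynomial fitting only; polycomp also offers Chebyshev transforms, and all names here are invented):

```python
# Sketch of bounded-error polynomial compression in the spirit of
# polycomp: keep fit coefficients when the worst-case error is under a
# user-configured bound, otherwise fall back to the raw samples.

def polyfit(xs, ys, deg):
    """Least-squares fit via normal equations + Gaussian elimination."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, n))) / A[r][r]
    return coef

def compress_chunk(ys, deg=2, max_err=1e-3):
    """Return ('poly', coefficients) if the fit meets the error bound,
    else ('raw', samples). deg+1 numbers replace len(ys) samples."""
    xs = list(range(len(ys)))
    coef = polyfit(xs, ys, deg)
    fit = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    if max(abs(f - y) for f, y in zip(fit, ys)) <= max_err:
        return ("poly", coef)
    return ("raw", ys)
```

A smooth 32-sample chunk collapses to 3 coefficients, while jagged data is stored losslessly; this is how the user-set bound trades information loss for compression ratio.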

  7. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field of view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589

  8. On Compressible Vortex Sheets

    NASA Astrophysics Data System (ADS)

    Secchi, Paolo

    2005-05-01

    We introduce the main known results of the theory of incompressible and compressible vortex sheets. Moreover, we present recent results obtained by the author with J. F. Coulombel about supersonic compressible vortex sheets in two space dimensions. The problem is a nonlinear free boundary hyperbolic problem with two difficulties: the free boundary is characteristic and the Lopatinski condition holds only in a weak sense, yielding losses of derivatives. Under a supersonic condition that precludes violent instabilities, we prove an energy estimate for the boundary value problem obtained by linearization around an unsteady piecewise solution.

  9. Compression device for feeding a waste material to a reactor

    DOEpatents

    Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.

    2001-08-21

    A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.

  10. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.

  11. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions gives a marked improvement over these popular methods.

  12. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  13. Monitoring compaction and compressibility changes in offshore chalk reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, G.; Hardy, R.; Eltvik, P.

    1994-03-01

    Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. Installing accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility -- measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable for any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence were expected.

  14. A general method to determine the stability of compressible flows

    NASA Technical Reports Server (NTRS)

    Guenther, R. A.; Chang, I. D.

    1982-01-01

    Several problems were studied using two completely different approaches. The initial method was to use standard linearized perturbation theory, finding the values of the individual small-disturbance quantities from the equations of motion. These were serially eliminated from the equations of motion to derive a single equation that governs the stability of the fluid dynamic system. These equations could not be reduced unless the steady-state variables depend only on one coordinate. The stability equation based on one dependent variable was found and examined to determine the stability of a compressible swirling jet. The second method applied a Lagrangian approach to the problem. Since the equations developed were based on different assumptions, the stability conditions were compared only for the Rayleigh problem of a swirling flow; both approaches reduce to the Rayleigh criterion. The Lagrangian technique allows inclusion of the viscous shear terms, which is not possible in the first method. The same problem was then examined to see what effect shear has on stability.

  15. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  16. Compressible homogeneous shear - Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1991-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  17. Effects of compressibility on turbulent relative particle dispersion

    NASA Astrophysics Data System (ADS)

    Shivamoggi, Bhimsen K.

    2016-08-01

    In this paper, phenomenological developments are used to explore the effects of compressibility on the relative particle dispersion (RPD) in three-dimensional (3D) fully developed turbulence (FDT). The role played by the compressible FDT cascade physics underlying this process is investigated. Compressibility effects are found to lead to reduction of RPD, development of the ballistic regime and particle clustering, corroborating the laboratory experiment and numerical simulation results (Cressman J. R. et al., New J. Phys., 6 (2004) 53) on the motion of Lagrangian tracers on a surface flow that constitutes a 2D compressible subsystem. These formulations are developed from the scaling relations for compressible FDT and are validated further via an alternative dimensional/scaling development for compressible FDT similar to the one given for incompressible FDT by Batchelor and Townsend (Surveys in Mechanics (Cambridge University Press) 1956, p. 352). The rationale for spatial intermittency effects is legitimized via the nonlinear scaling dependence of RPD on the kinetic-energy dissipation rate.

  18. Effects of video compression on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Cha, Jae; Preece, Bradley

    2008-04-01

    The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.

  19. Multivariable control of vapor compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X.D.; Liu, S.; Asada, H.H.

    1999-07-01

This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low-order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low-order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.
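The abstract does not give the model or controller matrices; as a minimal sketch of the LQR state-feedback half of an LQG design, consider a hypothetical two-state, two-input linearized cycle model (all matrix entries below are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized two-state vapor compression cycle model:
#   states x = [superheat, evaporating temperature]   (deviations)
#   inputs u = [compressor speed, expansion valve opening]
# All matrix entries are illustrative assumptions, not from the paper.
A = np.array([[-0.5, 0.2],
              [ 0.1, -0.3]])
B = np.array([[0.4, -0.6],
              [0.2,  0.5]])

Q = np.diag([10.0, 1.0])   # weight superheat regulation most heavily
R = np.eye(2)              # penalize actuator effort

# Solve the continuous-time algebraic Riccati equation and form the
# LQR state-feedback gain (the deterministic half of an LQG design).
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal feedback u = -K x

eigs = np.linalg.eigvals(A - B @ K)    # closed-loop poles (stable by design)
```

A full LQG controller would add a Kalman filter to estimate the states from noisy sensors; gain scheduling, as in the paper, would recompute K at several operating points.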

  20. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Herzke, Tobias; Hohmann, Volker

    2005-12-01

    The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. 
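A minimal single-band sketch of what "instantaneous" compression means here: the gain is a static function of each sample's magnitude, with no attack or release smoothing. The threshold and ratio below are illustrative assumptions; the paper instead derives band-specific gain curves from categorical loudness scaling and applies them per band of a gammatone filterbank.

```python
import numpy as np

# Single-band sketch of instantaneous dynamic compression: the gain is a
# static function of each sample's magnitude, with no attack/release
# smoothing. Threshold and ratio are illustrative assumptions; the paper
# derives per-band gains from categorical loudness scaling instead.
def compress_inst(x, ratio=3.0, threshold=0.1):
    mag = np.abs(x)
    gain = np.ones_like(mag)
    above = mag > threshold
    # dB-domain rule out = thr + (in - thr)/ratio, written as a power law:
    gain[above] = threshold ** (1 - 1 / ratio) * mag[above] ** (1 / ratio - 1)
    return x * gain

t = np.linspace(0, 1, 8000, endpoint=False)
loud = 0.8 * np.sin(2 * np.pi * 440 * t)   # well above threshold
y = compress_inst(loud)                     # peaks reduced toward threshold
```

Below the threshold the gain is unity, which is why such a scheme can still provide the larger amplification at low levels that benefited the word intelligibility test.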
Further analysis showed that the increase in intelligibility

  1. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than the existing lossless DNA sequence compression algorithms. PMID:27555868
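The paper's exact dictionary construction is not given in the abstract; the following sketch only illustrates the underlying substitution idea in the Lempel-Ziv spirit, replacing repeats with (offset, length) back-references. The search window, minimum match length, and the absence of mismatch handling are simplifying assumptions.

```python
# Illustrative sketch (not the published algorithm): substitution-based
# repeat coding in the Lempel-Ziv spirit, replacing repeated substrings
# with (offset, length) back-references. The 64-base search window and
# 4-base minimum match are simplifying assumptions; the paper additionally
# records near-repeats with mismatch details in an offline dictionary.

def lz_compress(seq, min_match=4, window=64):
    """Encode seq as a list of literal bases and (offset, length) tokens."""
    out, i = [], 0
    while i < len(seq):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):  # scan earlier text
            k = 0
            while i + k < len(seq) and seq[j + k] == seq[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append((best_off, best_len))    # back-reference
            i += best_len
        else:
            out.append(seq[i])                  # literal base
            i += 1
    return out

def lz_decompress(tokens):
    s = []
    for t in tokens:
        if isinstance(t, tuple):                # expand back-reference,
            off, length = t                     # one symbol at a time so
            for _ in range(length):             # overlapping copies work
                s.append(s[-off])
        else:
            s.append(t)
    return "".join(s)

seq = "ATCGATCGATCGTTTT"
tokens = lz_compress(seq)
assert lz_decompress(tokens) == seq             # lossless round-trip
```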

  2. Effect of Compression Devices on Preventing Deep Vein Thrombosis Among Adult Trauma Patients: A Systematic Review.

    PubMed

    Ibrahim, Mona; Ahmed, Azza; Mohamed, Warda Yousef; El-Sayed Abu Abduo, Somaya

    2015-01-01

Trauma is the leading cause of death in Americans up to 44 years old each year. Deep vein thrombosis (DVT) is a significant condition occurring in trauma, and prophylaxis is essential to the appropriate management of trauma patients. The incidence of DVT varies in trauma patients, depending on patients' risk factors, modality of prophylaxis, and methods of detection. Prophylaxis with compression devices and arteriovenous (A-V) foot pumps is recommended in trauma patients, but their efficacy and optimal use are not well documented in the literature. The aim of this study was to review the literature on the effect of compression devices in preventing DVT among adult trauma patients. We searched PubMed, CINAHL, and the Cochrane Central Register of Controlled Trials for eligible studies published from 1990 until June 2014. Reviewers identified all randomized controlled trials that satisfied the study criteria, and the quality of included studies was assessed with the Cochrane risk of bias tool. Five randomized controlled trials were included, with a total of 1072 patients. Sequential compression devices significantly reduced the incidence of DVT in trauma patients, and foot pumps were more effective in reducing the incidence of DVT than sequential compression devices. However, the evidence is limited by small sample sizes and did not take into account other confounding variables that may affect the incidence of DVT in trauma patients. Future randomized controlled trials with larger probability samples are needed to investigate the optimal use of mechanical prophylaxis in trauma patients.

  3. Compressive Classification for TEM-EELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Weituo; Stevens, Andrew; Yang, Hao

Electron energy loss spectroscopy (EELS) is typically conducted in STEM mode with a spectrometer, or in TEM mode with energy selection. These methods produce a 3D data set (x, y, energy). Some compressive sensing [1,2] and inpainting [3,4,5] approaches have been proposed for recovering a full set of spectra from compressed measurements. In many cases the final form of the spectral data is an elemental map (an image with channels corresponding to elements). This means that most of the collected data is unused or summarized. We propose a method to directly recover the elemental map with reduced dose and acquisition time. We have designed a new computational TEM sensor for compressive classification [6,7] of energy loss spectra called TEM-EELS.

  4. Compressibility characteristics of Sabak Bernam Marine Clay

    NASA Astrophysics Data System (ADS)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of the west coast of Malaysia. This type of marine clay was found on the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement, which jeopardises the safety of road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible, has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the same field.
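For reference, the compression index reported by such 1-D consolidation tests is the slope of the virgin portion of the e-log σ' curve; a sketch with illustrative numbers (not measured SBMC values):

```python
import math

# Compression index Cc as the slope of the virgin portion of the
# e-log(sigma') curve from a 1-D consolidation (oedometer) test.
# The void ratios and stresses below are illustrative, not SBMC data.
def compression_index(e1, sigma1, e2, sigma2):
    """Cc = -(e2 - e1) / log10(sigma2 / sigma1), stresses in consistent units."""
    return -(e2 - e1) / math.log10(sigma2 / sigma1)

cc = compression_index(e1=1.80, sigma1=50.0, e2=1.35, sigma2=200.0)  # kPa
# A Cc of this magnitude is characteristic of highly compressible clays.
```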

  5. Aging and compressibility of municipal solid wastes.

    PubMed

    Chen, Y M; Zhan, Tony L T; Wei, H Y; Ke, H

    2009-01-01

The expansion of a municipal solid waste (MSW) landfill requires the ability to predict settlement behavior of the existing landfill. The practice of using a single compressibility value when performing a settlement analysis may lead to inaccurate predictions. This paper considers changes in the mechanical compressibility of MSW as a function of the fill age of MSW as well as the embedding depth of MSW. Borehole samples representative of various fill ages were obtained from five boreholes drilled to the bottom of the Qizhishan landfill in Suzhou, China. Thirty-one borehole samples were used to perform confined compression tests. Waste composition and volume-mass properties (i.e., unit weight, void ratio, and water content) were measured on all the samples. The test results showed that the compressible components of the MSW (i.e., organics, plastics, paper, wood and textiles) decreased with an increase in the fill age. The in situ void ratio of the MSW was shown to decrease with depth into the landfill. The compression index, Cc, was observed to decrease from 1.0 to 0.3 with depth into the landfill. Settlement analyses were performed on the existing landfill, demonstrating that the variation of MSW compressibility with fill age or depth should be taken into account in the settlement prediction.
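The practical consequence of Cc decreasing from 1.0 to 0.3 with depth can be illustrated with the standard primary consolidation settlement formula (the layer thickness, void ratio, and stresses below are illustrative, not values from the Qizhishan data):

```python
import math

# Standard primary consolidation settlement of a normally consolidated
# layer: s = Cc / (1 + e0) * H * log10(sigma_f / sigma_0). The layer
# thickness, void ratio, and stresses are illustrative, not Qizhishan data.
def settlement(Cc, e0, H, sigma_0, sigma_f):
    return Cc / (1 + e0) * H * math.log10(sigma_f / sigma_0)

H, e0, s0, sf = 5.0, 2.0, 50.0, 100.0       # m, -, kPa, kPa
shallow = settlement(1.0, e0, H, s0, sf)    # young waste, Cc ~ 1.0
deep = settlement(0.3, e0, H, s0, sf)       # old, deeply buried waste, Cc ~ 0.3
# Using a single Cc for the whole profile would mis-predict settlement by
# roughly the ratio of these two values.
```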

  6. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than conventional methods, and should improve the efficiency of handling the increasing volume of medical imaging data.
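The prediction and residual blocks can be sketched generically; the toy predictor below (each pixel predicted from its left neighbor) stands in for the paper's neural-network predictor, whose architecture and weights are not given in the abstract.

```python
import numpy as np

# Generic prediction + residual sketch: a toy left-neighbor predictor
# stands in for the paper's neural-network predictor (architecture and
# weights are not given in the abstract). Lossless coding then reduces
# to entropy-coding the residuals, which cluster near zero.
def encode(img):
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]       # predict each pixel from its left neighbor
    return img - pred               # residual image

def decode(res):
    img = np.zeros_like(res)
    img[:, 0] = res[:, 0]
    for c in range(1, res.shape[1]):
        img[:, c] = img[:, c - 1] + res[:, c]   # invert the prediction
    return img

img = np.array([[10, 12, 13],
                [20, 20, 25]], dtype=np.int32)
res = encode(img)
assert np.array_equal(decode(res), img)         # exact (lossless) round-trip
```

The better the predictor, the more the residuals concentrate near zero and the more the entropy encoder can compress them, which is the rationale for learning the predictor.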

  7. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom very large scale integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
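A minimal sketch of frequency-sensitive competitive learning for codebook design: each codeword's distance to the input is scaled by its win count, so frequently winning codewords become less competitive and the whole codebook gets trained (the codebook size, learning rate, and Gaussian data below are illustrative assumptions):

```python
import numpy as np

# Sketch of frequency-sensitive competitive learning (FSCL) for codebook
# design: each codeword's distance is scaled by its win count, so units
# that win often become less competitive and the whole codebook is used.
# Codebook size, learning rate, and the Gaussian data are illustrative.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))     # stand-in for difference vectors

K, lr = 8, 0.1
codebook = rng.normal(size=(K, 2))
counts = np.ones(K)

for x in data:
    d = np.linalg.norm(codebook - x, axis=1)
    winner = np.argmin(counts * d)                   # frequency-sensitive rule
    codebook[winner] += lr * (x - codebook[winner])  # move winner toward x
    counts[winner] += 1
```

In the differential scheme described in the record, the vectors quantized would be prediction differences rather than raw pixels, which is what preserves edges.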

  8. The compression of perceived time in a hot environment depends on physiological and psychological factors.

    PubMed

    Tamm, Maria; Jakobson, Ainika; Havik, Merle; Burk, Andres; Timpmann, Saima; Allik, Jüri; Oöpik, Vahur; Kreegipuu, Kairi

    2014-01-01

The human perception of time was observed under extremely hot conditions. Young healthy men performed a time production task repeatedly in 4 experimental trials in either a temperate (22 °C, relative humidity 35%) or a hot (42 °C, relative humidity 18%) environment, with or without moderate-intensity treadmill exercise. Within 1 hour, the produced durations indicated a significant compression of short intervals (0.5 to 10 s) under the combination of exercise and high ambient temperature, while neither condition alone was enough to yield the effect. Temporal judgement was analysed in relation to different indicators of arousal, such as critical flicker frequency (CFF), core temperature, heart rate, and subjective ratings of fatigue and exertion. The arousal-sensitive internal clock model (originally proposed by Treisman) is used to explain the temporal compression while exercising in heat. As a result, we suggest that the psychological response to heat stress, more precisely the perceived fatigue, is important in describing the relationship between core temperature and time perception. Temporal compression is related to higher core temperature, but only if a certain level of perceived fatigue is accounted for, implying the existence of a thermoemotional internal clock.

  9. Compression Fracture of CFRP Laminates Containing Stress Intensifications.

    PubMed

    Leopold, Christian; Schütt, Martin; Liebig, Wilfried V; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl; Fiedler, Bodo

    2017-09-05

For the brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by buckling fibres; a shear-driven fibre compressive failure promotes or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation of the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. Different effects that influence the compression failure, and the role the stacking sequence has on damage development and the resulting compressive strength, are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out for comparison as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as the final failure mechanisms.
The formation of a kink-band and finally steady state kinking is shifted to higher compressive strains

  10. Compression Fracture of CFRP Laminates Containing Stress Intensifications

    PubMed Central

    Schütt, Martin; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl

    2017-01-01

For the brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by buckling fibres; a shear-driven fibre compressive failure promotes or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation of the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. Different effects that influence the compression failure, and the role the stacking sequence has on damage development and the resulting compressive strength, are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out for comparison as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as the final failure mechanisms.
The formation of a kink-band and finally steady state kinking is shifted to higher compressive strains

  11. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...

  12. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...

  13. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...

  14. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES HAZARDOUS SHIPS' STORES Stowage and Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements...

  15. Lossless quantum data compression and secure direct communication

    NASA Astrophysics Data System (ADS)

    Boström, Kim

    2004-07-01

This thesis deals with the encoding and transmission of information through a quantum channel. A quantum channel is a quantum mechanical system whose state is manipulated by a sender and read out by a receiver. The individual state of the channel represents the message. The two topics of the thesis are 1) the possibility of compressing a message stored in a quantum channel without loss of information and 2) the possibility of communicating a message directly from one party to another in a secure manner, that is, such that a third party is not able to eavesdrop on the message without being detected. The main results of the thesis are the following. A general framework for variable-length quantum codes is worked out. These codes are necessary to make lossless compression possible. Due to the quantum nature of the channel, the encoded messages are in general in a superposition of different lengths. It is found to be impossible to compress a quantum message without loss of information if the message is not a priori known to the sender. Otherwise, it is shown that lossless quantum data compression is possible and a lower bound on the compression rate is derived. Furthermore, an explicit compression scheme is constructed that works for arbitrarily given source message ensembles. A quantum cryptographic protocol - the “ping-pong protocol” - is presented that realizes the secure direct communication of classical messages through a quantum channel. The security of the protocol against arbitrary eavesdropping attacks is proven for the case of an ideal quantum channel. In contrast to other quantum cryptographic protocols, the ping-pong protocol is deterministic and can thus be used to transmit a random key as well as a composed message. The protocol is perfectly secure for the transmission of a key, and it is quasi-secure for the direct transmission of a message. The latter means that the probability of successful eavesdropping exponentially decreases with the length

  16. Modeling of Single and Dual Reservoir Porous Media Compressed Gas (Air and CO2) Storage Systems

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Liu, H.; Borgia, A.; Pan, L.

    2017-12-01

Intermittent renewable energy sources are causing increasing demand for energy storage. The deep subsurface offers promising opportunities for energy storage because it can safely contain high-pressure gases. Porous media compressed air energy storage (PM-CAES) is one approach, although the only facilities in operation are in caverns (C-CAES) rather than porous media. Just as in C-CAES, PM-CAES generally operates by injecting working gas (air) through well(s) into the reservoir, compressing the cushion gas (the air already in the reservoir). During energy recovery, high-pressure air from the reservoir is mixed with fuel in a combustion turbine to produce electricity, thereby reducing compression costs. Unlike in C-CAES, the storage of energy in PM-CAES occurs variably across pressure gradients in the formation, while the solid grains of the matrix can release/store heat. Because air is the working gas, PM-CAES has fairly low thermal efficiency and low energy storage density. To improve the energy storage density, we have conceived and modeled a closed-loop two-reservoir compressed CO2 energy storage system. One reservoir is the low-pressure reservoir, and the other is the high-pressure reservoir. CO2 is cycled back and forth between the reservoirs depending on whether energy needs to be stored or recovered. We have carried out thermodynamic and parametric analyses of the performance of an idealized two-reservoir CO2 energy storage system under supercritical and transcritical conditions for CO2 using a steady-state model. Results show that the transcritical compressed CO2 energy storage system has higher round-trip efficiency and exergy efficiency, and larger energy storage density, than the supercritical compressed CO2 energy storage system. However, the configuration of supercritical compressed CO2 energy storage is simpler, and the energy storage densities of the two systems are both higher than that of PM-CAES, which is advantageous in terms of storage volume for a given

  17. Biomechanical evaluation of a second generation headless compression screw for ankle arthrodesis in a cadaver model.

    PubMed

    Somberg, Andrew Max; Whiteside, William K; Nilssen, Erik; Murawski, Daniel; Liu, Wei

    2016-03-01

Many types of screws, plates, and strut grafts have been utilized for ankle arthrodesis. Biomechanical testing has shown that these constructs can have variable stiffness. More recently, headless compression screws have emerged as an evolving method of achieving compression in various applications, but there is limited literature regarding their use in ankle arthrodesis. The aim of this study was to determine the biomechanical stability provided by a second-generation fully threaded headless compression screw compared to a standard headed, partially threaded cancellous screw in a cadaveric ankle arthrodesis model. Twenty fresh-frozen human cadaver specimens were subjected to simulated ankle arthrodesis with either three standard cancellous-bone screws (InFix 7.3 mm) or three headless compression screws (Acumed Acutrak 2 7.5 mm). The specimens were subjected to cyclic loading and unloading at a rate of 1 Hz, with compression of 525 newtons (N) and distraction of 20 N, for a total of 500 cycles using an electromechanical load frame (Instron). The amount of maximum distraction was recorded, as well as the amount of motion that occurred through 1, 10, 50, 100, and 500 cycles. No significant difference (p=0.412) was seen in the amount of distraction that occurred across the fusion site for either screw. The average maximum distraction after 500 cycles was 201.9 μm for the Acutrak 2 screw and 235.4 μm for the InFix screw. No difference was seen throughout each cycle over time for the Acutrak 2 screw (p=0.988) or the InFix screw (p=0.991). Both the traditional InFix-type screw and the second-generation Acumed Acutrak headless compression screw provide adequate fixation during ankle arthrodesis under submaximal loads. There is no demonstrable difference between the traditional cannulated partially threaded screws and the headless compression screws studied in this model. Copyright © 2015 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.

  18. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  19. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  20. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to remove himself from the situation because his cognitive responses and coordination were impaired by alcohol intake. The victim died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of its impact force in these circumstances, is very rare and, to the best of our knowledge, has not been reported previously. © The Author(s) 2015.

  1. Hemodynamic Deterioration in Lateral Compression Pelvic Fracture After Prehospital Pelvic Circumferential Compression Device Application.

    PubMed

    Garner, Alan A; Hsu, Jeremy; McShane, Anne; Sroor, Adam

    Increased fracture displacement has previously been described with the application of pelvic circumferential compression devices (PCCDs) in patients with lateral compression-type pelvic fracture. We describe the first reported case of hemodynamic deterioration temporally associated with the prehospital application of a PCCD in a patient with a complex acetabular fracture with medial displacement of the femoral head. Active hemorrhage from a site adjacent to the acetabular fracture was subsequently demonstrated on angiography. Caution in the application of PCCDs to patients with lateral compression-type fractures is warranted. Copyright © 2017 Air Medical Journal Associates. All rights reserved.

  2. The analysis and modelling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1991-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
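In standard notation, the decomposition and the proposed model can be sketched as follows (the abstract does not state the equations; the solenoidal/dilatational split of the dissipation is standard, and the quadratic dependence on the turbulent Mach number $M_t$ is the commonly cited form of this model, with $\alpha_1$ a calibration constant):

```latex
\varepsilon = \varepsilon_s + \varepsilon_c, \qquad
\varepsilon_s = \bar{\nu}\,\overline{\omega_i' \omega_i'}, \qquad
\varepsilon_c = \tfrac{4}{3}\,\bar{\nu}\,\overline{(u_{k,k}')^{2}}, \qquad
\varepsilon_c \approx \alpha_1 M_t^{2}\, \varepsilon_s .
```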

  3. The analysis and modeling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1989-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.

  4. Tsunami Speed Variations in Density-stratified Compressible Global Oceans

    NASA Astrophysics Data System (ADS)

    Watada, S.

    2013-12-01

    Recent tsunami observations in the deep ocean have accumulated unequivocal evidence that tsunami traveltime delays, relative to linear long-wave tsunami simulations, occur during propagation in the deep ocean. The delay is up to 2% of the tsunami traveltime. Watada et al. [2013] investigated the cause of the delay using the normal mode theory of tsunamis and attributed it to the compressibility of seawater, the elasticity of the solid earth, and the gravitational potential change associated with mass motion during the passage of tsunamis. Tsunami speed variations in the deep ocean caused by seawater density stratification are investigated using a newly developed propagator matrix method that is applicable to seawater with depth-variable sound speeds and density gradients. For a 4-km deep ocean, the total tsunami speed reduction is 0.45% compared with incompressible homogeneous seawater; two thirds of the reduction is due to elastic energy stored in the water and one third is due to water density stratification, mainly by hydrostatic compression. Tsunami speeds are computed for global ocean density and sound speed profiles and characteristic structures are discussed. Tsunami speed reductions are proportional to ocean depth with small variations, except in warm Mediterranean seas. The effects of seawater compressibility and solid-earth elasticity on tsunami traveltime should be included for precise modeling of trans-oceanic tsunamis. [Figure: data locations with vertical ocean profiles deeper than 2500 m in World Ocean Atlas 2009 (WOA09), with the Pacific Ocean shaded; panels show a) tsunami speed variations for global, Pacific, and Mediterranean profiles, b) regression lines of the tsunami velocity reduction for all oceans, and c) vertical ocean profiles at selected grid points.]
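
    The scale of the effect is easy to check against the linear long-wave speed (a back-of-the-envelope sketch using the 0.45% reduction quoted above; the depth and path length are illustrative, not the paper's propagator-matrix computation):

```python
import math

g, h = 9.81, 4000.0                    # gravity (m/s^2), ocean depth (m)
c_longwave = math.sqrt(g * h)          # linear long-wave speed, ~198 m/s
c_reduced = c_longwave * (1 - 0.0045)  # 0.45% slower in a 4-km compressible ocean

# Over a 10,000-km trans-oceanic path the tiny speed deficit
# accumulates into a traveltime delay of a few minutes.
path = 1.0e7                           # path length (m)
delay = path / c_reduced - path / c_longwave
print(f"delay ~ {delay:.0f} s")
```

    A delay of roughly four minutes on a trans-Pacific crossing is exactly the scale of the late arrivals reported in deep-ocean records.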

  5. 46 CFR 112.50-7 - Compressed air starting.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...

  6. 46 CFR 112.50-7 - Compressed air starting.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...

  7. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level of existing schemes is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the sender encrypts the information with an FFT, and the FFT-coded image is then encrypted by the CGI system with a secret key. The receiver decrypts the image with the aid of compressive sensing (CS) and an inverse FFT. Simulation results verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method improves quality for large images compared with conventional ghost imaging and makes imaging of large-sized images practical; that the amount of transmitted data is greatly reduced by the combination of compressive sensing and the FFT; and that the security level of ghost imaging is improved, as assessed against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. The technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  8. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
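
    The flavor of a one-pass, locally adaptive vector quantizer can be conveyed in a short sketch (a simplified illustration of the general idea, not the LAVQ algorithm itself; the threshold and update rule are invented for the example):

```python
import numpy as np

def adaptive_vq_encode(vectors, threshold, alpha=0.1):
    """One-pass adaptive VQ sketch: the codebook grows on the fly and the
    matched codeword is nudged toward each input, so no prior knowledge
    of the source statistics is needed."""
    codebook, stream = [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if codebook:
            dists = [np.linalg.norm(v - c) for c in codebook]
            best = int(np.argmin(dists))
        if codebook and dists[best] <= threshold:
            stream.append(("index", best))      # cheap symbol: codebook index
            codebook[best] += alpha * (v - codebook[best])  # local adaptation
        else:
            stream.append(("raw", v))           # expensive symbol: new codeword
            codebook.append(v.copy())
    return stream, codebook

# Clustered demo data: two well-separated sources.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
vecs = [centers[i % 2] + 0.1 * rng.standard_normal(2) for i in range(200)]
stream, codebook = adaptive_vq_encode(vecs, threshold=1.0)
```

    On clustered data almost every vector is emitted as a bare index rather than a raw vector, which is where the compression comes from.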

  9. Squish: Near-Optimal Compression for Archival of Relational Datasets

    PubMed Central

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
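
    Squish's gains come from modeling inter-attribute dependencies with Bayesian networks and arithmetic coding; a much simpler way to see why exploiting relational structure helps is that merely serializing a table column-by-column before applying a generic codec already improves compression (a toy demonstration with invented data, not Squish's algorithm):

```python
import random
import zlib

random.seed(0)
# A toy relation with two attributes: a low-cardinality city column
# and a narrow-range integer temperature column.
rows = [(random.choice(["Springfield", "Shelbyville", "Ogdenville"]),
         str(random.randint(15, 25)))
        for _ in range(5000)]

# Row-major serialization interleaves unlike values...
row_major = "\n".join(",".join(r) for r in rows).encode()
# ...while column-major groups similar values, exposing redundancy.
col_major = "\n".join("\n".join(col) for col in zip(*rows)).encode()

print(len(zlib.compress(row_major)), len(zlib.compress(col_major)))
```

    Column-aware systems like Squish go much further by modeling the dependencies between columns, not just their individual regularity.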

  10. Bitshuffle: Filter for improving compression of typed binary data

    NASA Astrophysics Data System (ADS)

    Masui, Kiyoshi

    2017-12-01

    Bitshuffle rearranges typed, binary data to improve its compressibility; the algorithm is implemented in a Python/C package within the NumPy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter, except that it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, so that, for example, all the least significant bits end up in a single row. This transposition is performed within blocks of data roughly 8 kB long; it does not in itself compress the data, but rearranges it for more efficient compression, and a compression library is required to perform the actual compression. The scheme has been used for compression of radio data in high-performance computing.
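
    The bit-matrix transposition at the heart of the filter can be sketched in a few lines of NumPy (a readable illustration only; the real library is a blocked, vectorized C implementation exposed through HDF5 filters):

```python
import numpy as np
import zlib

def bitshuffle(arr):
    """Transpose the bit matrix of a 1-D typed array: output row i
    collects bit i of every element, so equal-significance bits
    become contiguous."""
    bits = np.unpackbits(arr.view(np.uint8).reshape(len(arr), -1), axis=1)
    return np.packbits(bits.T)

def bitunshuffle(packed, dtype, n):
    """Invert bitshuffle for n elements of the given dtype."""
    width = np.dtype(dtype).itemsize * 8
    bits = np.unpackbits(packed)[: width * n].reshape(width, n)
    return np.packbits(bits.T, axis=1).reshape(-1).view(dtype)

# Slowly varying data: the high-order bits are nearly constant, so the
# transposed layout contains long runs that a general-purpose codec
# (zlib here, LZ4 in practice) compresses far better than the raw bytes.
data = 1000 + np.arange(1000, dtype=np.uint16)
shuffled = bitshuffle(data)
restored = bitunshuffle(shuffled, np.uint16, len(data))
```

    The transform is lossless and cheap; all of the compression gain comes from the codec applied afterwards.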

  11. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    PubMed

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that can compress large heterogeneous tree collections or enable their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data; for example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.

  12. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  13. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper describes the Mixed Raster Content (MRC) method for compressing compound images containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements of both text and image compression has been elusive. MRC takes a different approach: rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality/compression-ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC incurs some image-processing overhead to manage the multiple algorithms and the imaging model. This paper develops the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results are presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
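
    MRC's basic three-layer model is simple to state: a binary mask selects, per pixel, between a foreground layer and a background layer, and each plane is handed to the codec best suited to it (a minimal sketch of the compositing model with invented toy data; real MRC pairs the mask with a binary codec such as JBIG and the image layers with a continuous-tone codec such as JPEG):

```python
import numpy as np

# Toy compound page: a smooth gradient background with sharp "text" strokes.
h, w = 64, 64
background = np.tile(np.linspace(40, 200, w).astype(np.uint8), (h, 1))
page = background.copy()
page[10:13, 8:56] = 0        # dark strokes standing in for text
page[40:43, 8:56] = 0

# MRC-style decomposition: the mask marks where the text codec should win.
mask = page == 0                              # binary layer
foreground = np.zeros_like(page)              # flat text-color layer (cheap)
bg_layer = np.where(mask, background, page)   # background with text healed over

# The decoder composites: foreground where the mask is set, background elsewhere.
reconstructed = np.where(mask, foreground, bg_layer)
```

    Because each layer is individually simple (binary, flat, smooth), per-layer codecs can beat a single codec applied to the mixed page, which is the rate-distortion argument the paper develops.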

  14. Compression in wearable sensor nodes: impacts of node topology.

    PubMed

    Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther

    2014-04-01

    Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online, low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission and on-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can benefit greatly from compressive sensing and can now achieve power consumption comparable to, or better than, the use of local memory.
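
    On the node, compressive sensing reduces to a short matrix-vector product, with reconstruction deferred to wherever power is cheap (a generic sketch of the technique with invented dimensions and a textbook recovery method, not the paper's MSP430 implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 80                      # signal length vs. stored measurements

# A k-sparse test signal standing in for a compressible biosignal.
x = np.zeros(n)
x[[10, 60, 120, 180, 240]] = [1.0, -2.0, 1.5, -1.0, 2.5]

# Sensor-node side: y = Phi @ x with a cheap +/-1 measurement matrix.
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ x                         # only m of the n values leave the node

# Server side: greedy recovery via orthogonal matching pursuit.
def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, 5)
```

    The node-side cost is dominated by the m-by-n multiply (here a 3.2x data reduction); the expensive sparse recovery never runs on the wearable.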

  15. Shock-wave studies of anomalous compressibility of glassy carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molodets, A. M., E-mail: molodets@icp.ac.ru; Golyshev, A. A.; Savinykh, A. S.

    2016-02-15

    The physico-mechanical properties of amorphous glassy carbon are investigated under shock compression up to 10 GPa. Experiments are carried out with continuous recording of the mass velocity of compression pulses propagating in glassy carbon samples with initial densities of 1.502(5) g/cm³ and 1.55(2) g/cm³. It is shown that, in both cases, the compression wave in glassy carbon contains a leading precursor with an amplitude of 0.135(5) GPa. It is established that, in the range of pressures up to 2 GPa, a shock discontinuity in glassy carbon is transformed into a broadened compression wave, while shock waves form in the release wave, which together indicate anomalous compressibility of the material in both the compression and release waves. At pressures higher than 3 GPa, the anomalous behavior turns into normal behavior, accompanied by the formation of a shock compression wave. In the investigated pressure range, possible structural changes in glassy carbon under shock compression are reversible. A physico-mechanical model of glassy carbon is proposed that comprises an equation of state and a constitutive relation for Poisson's ratio and allows numerical simulation of the physico-mechanical and thermophysical properties of glassy carbon of different densities in the region of its anomalous compressibility.

  16. Self-diffusion in compressively strained Ge

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoko; Uematsu, Masashi; Hoshi, Yusuke; Sawano, Kentarou; Myronov, Maksym; Shiraki, Yasuhiro; Haller, Eugene E.; Itoh, Kohei M.

    2011-08-01

    Under a compressive biaxial strain of ˜ 0.71%, Ge self-diffusion has been measured using an isotopically controlled Ge single-crystal layer grown on a relaxed Si0.2Ge0.8 virtual substrate. The self-diffusivity is enhanced by the compressive strain and its behavior is fully consistent with a theoretical prediction of a generalized activation volume model of a simple vacancy mediated diffusion, reported by Aziz et al. [Phys. Rev. B 73, 054101 (2006)]. The activation volume of (-0.65±0.21) times the Ge atomic volume quantitatively describes the observed enhancement due to the compressive biaxial strain very well.

  17. Survived ileocecal blowout from compressed air.

    PubMed

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

    Industrial accidents in which compressed air enters the gastro-intestinal tract are often fatal. The pressures involved usually far exceed those used in medical applications such as colonoscopy and lead to extensive injuries of the intestines with high mortality. The case described in this report is that of a 26-year-old man who was injured by compressed air entering through the anus; he survived because of a rapid emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed-air devices in order to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestinal wall.

  18. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial

    PubMed Central

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-01-01

    Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Intervention Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Results Average compression depths were significantly different according to the rate used in training (p<0.001). A post hoc analysis showed that average compression depths were significantly different between trials after training at a speed of 100 compressions/min and those at speeds of 120 and 140 compressions/min (both p<0.001). Conclusions The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. PMID:26873050

  19. Lagrangian statistics in compressible isotropic homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., statistics computed along the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.'s) have been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to split the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect shows up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
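
    The Helmholtz split used to isolate the compressive part of the velocity field is a one-line projection in Fourier space (a 2-D periodic NumPy sketch for illustration; the study itself works with 3-D fields and its own solver):

```python
import numpy as np

def helmholtz_decompose(u, v):
    """Split a periodic 2-D velocity field into a solenoidal
    (divergence-free) and a compressive (curl-free) part.
    In Fourier space the compressive part is the projection of the
    velocity onto the wavevector k; the solenoidal part is the rest."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # k = 0 mode: avoid 0/0
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    coef = (kx * uh + ky * vh) / k2              # (k . u_hat) / |k|^2
    uc, vc = kx * coef, ky * coef                # compressive part
    us, vs = uh - uc, vh - vc                    # solenoidal part
    inv = lambda f: np.real(np.fft.ifft2(f))
    return (inv(us), inv(vs)), (inv(uc), inv(vc))

# Example: a divergence-free field plus a curl-free field on a periodic grid.
n = 64
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(t, t, indexing="ij")
sol = (np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y))   # divergence-free
com = (-2.0 * np.sin(2 * X), np.zeros_like(X))          # gradient of cos(2x)
(us, vs), (uc, vc) = helmholtz_decompose(sol[0] + com[0], sol[1] + com[1])
```

    For fields on a uniform grid this projection exactly separates the dilatational motion (nonzero divergence) from the vortical motion, which is the decomposition the abstract refers to.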

  20. Confounding compression: the effects of posture, sizing and garment type on measured interface pressure in sports compression clothing.

    PubMed

    Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh

    2015-01-01

    The purpose of this investigation was to measure the interface pressure exerted by lower body sports compression garments, in order to assess the effect of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from medial malleolus to upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights (P < 0.001). Oversized tights applied significantly less pressure than manufacturer-recommended size or undersized tights (P < 0.001), yet no significant differences were apparent between different-sized leggings. Standing posture resulted in significantly higher mean pressure application than a seated posture for both tights and leggings (P < 0.001 and P = 0.002, respectively). Pressure was different across landmarks, with analyses revealing a pressure profile that was neither strictly graduated nor progressive in nature. The pressure applied by sports compression garments is significantly affected by garment type, size and posture assumed by the wearer.