Design Point for a Spheromak Compression Experiment
NASA Astrophysics Data System (ADS)
Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team
2015-11-01
Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and the design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency, and energy confinement. Work performed under DARPA grant N66001-14-1-4044.
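The quoted temperature targets are consistent with ideal adiabatic scaling; a quick consistency check is sketched below, under assumptions not stated in the abstract (a gamma = 5/3 plasma and self-similar three-dimensional compression, so that V ∝ R³):

```python
# Consistency check of the quoted design point with ideal adiabatic scaling.
# Assumes gamma = 5/3 and self-similar 3-D compression (V ~ R^3), so that
# T_final = T_initial * C**(3*(gamma - 1)) = T_initial * C**2.
gamma = 5.0 / 3.0
C = 3.0           # radial convergence R_initial / R_final
T_initial = 0.2   # keV

T_final = T_initial * C ** (3.0 * (gamma - 1.0))  # ~ 1.8 keV, the quoted target
```

With C = 3 this gives a ninefold temperature rise, 0.2 keV to 1.8 keV, matching the stated targets.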
Ischemic Compression After Trigger Point Injection Affect the Treatment of Myofascial Trigger Points
Kim, Soo A; Oh, Ki Young; Choi, Won Hyuck
2013-01-01
Objective To investigate the effects of trigger point injection with or without ischemic compression in treatment of myofascial trigger points in the upper trapezius muscle. Methods Sixty patients with active myofascial trigger points in the upper trapezius muscle were randomly divided into three groups: group 1 (n=20) received only trigger point injections, group 2 (n=20) received trigger point injections with 30 seconds of ischemic compression, and group 3 (n=20) received trigger point injections with 60 seconds of ischemic compression. The visual analogue scale, pressure pain threshold, and range of motion of the neck were assessed before treatment, immediately after treatment, and 1 week after treatment. Korean Neck Disability Indexes were assessed before treatment and 1 week after treatment. Results We found a significant improvement in all assessment parameters (p<0.05) in all groups. However, the groups receiving trigger point injections with ischemic compression showed significantly greater improvement than the group receiving trigger point injections alone, and there were no significant differences between the 30-second and 60-second ischemic compression groups. Conclusion This study demonstrated the effectiveness of ischemic compression for myofascial trigger points. Trigger point injections combined with ischemic compression show better treatment effects on myofascial trigger points in the upper trapezius muscle than trigger point injection therapy alone, but the duration of ischemic compression did not affect the treatment outcome. PMID:24020035
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation. PMID:26356981
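The core idea of mapping a block to a fixed bit budget can be illustrated with a toy block-floating-point sketch (an illustration only; the paper's actual method additionally uses a lifted orthogonal transform and embedded coding):

```python
import numpy as np

def compress_block(block, bits):
    """Toy fixed-rate idea: align the whole block to its largest magnitude and
    keep a fixed number of signed-integer bits per value. (The real scheme is
    considerably more elaborate.)"""
    emax = np.max(np.abs(block))
    if emax == 0.0:
        return 0.0, np.zeros(block.shape, dtype=np.int64)
    scale = (2 ** (bits - 1) - 1) / emax
    return scale, np.round(block * scale).astype(np.int64)

def decompress_block(scale, q):
    return q / scale if scale else q.astype(float)

block = np.random.default_rng(0).normal(size=(4, 4))  # a 4^d block with d = 2
scale, q = compress_block(block, bits=12)
rec = decompress_block(scale, q)  # worst-case error is half a step: 0.5 / scale
```

Because every value gets the same integer width, the compressed size of a block is known in advance, which is what makes block-granularity random access possible.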
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
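The predictive-coding idea behind such lossless schemes can be sketched in a few lines: predict each value from its predecessor and emit the XOR of the IEEE-754 bit patterns, which is exactly invertible and yields many leading zero bits on smooth data for a downstream entropy coder (a minimal sketch, not the paper's plug-in predictor):

```python
import math
import struct

def encode(values):
    """Predict each value by its predecessor and store the XOR of the IEEE-754
    bit patterns; smooth data yields residuals with long runs of leading zero
    bits for a downstream entropy coder. Exactly invertible, hence lossless."""
    prev, out = 0, []
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]
        out.append(bits ^ prev)
        prev = bits
    return out

def decode(residuals):
    prev, vals = 0, []
    for r in residuals:
        prev ^= r  # undo the XOR against the previous bit pattern
        vals.append(struct.unpack('<d', struct.pack('<Q', prev))[0])
    return vals

data = [math.sin(0.01 * i) for i in range(100)]  # smooth sample data
```

Because the transform works on the raw bit patterns, exact values are retained, which is the requirement the abstract highlights.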
Measurement dimensions compressed spectral imaging with a single point detector
NASA Astrophysics Data System (ADS)
Liu, Xue-Feng; Yu, Wen-Kai; Yao, Xu-Ri; Dai, Bin; Li, Long-Zhen; Wang, Chao; Zhai, Guang-Jie
2016-04-01
An experimental demonstration of spectral imaging with compressed measurement dimensions has been performed. Using the dual compressed sensing (CS) method that we derive, the spectral image of a colored object can be obtained with only a single point detector, and sub-sampling is achieved in both the spatial and spectral domains. The performance of dual CS spectral imaging is analyzed, including the effects of the dual modulation numbers and of measurement noise on the imaging quality. Our scheme provides a stable, high-flux measurement approach to spectral imaging.
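The single-detector measurement model is y = Φx, one scalar reading per modulation pattern, and a sparse scene can then be recovered by ℓ1 minimization. A minimal sketch using ISTA (iterative shrinkage-thresholding) with assumed random Gaussian patterns follows; it is a generic CS recovery, not the paper's dual-CS scheme:

```python
import numpy as np

# Sketch of single-point-detector compressive sensing: one scalar measurement
# per random modulation pattern, with sparse recovery via ISTA. Sizes and the
# Gaussian patterns are illustrative assumptions.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # scene size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(3.0, 1.0, size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # modulation patterns (one per row)
y = A @ x                                 # single-detector readings

lam = 0.05
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data term

def soft(z, t):                           # soft threshold: prox of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def obj(z):                               # 0.5*||Az - y||^2 + lam*||z||_1
    return 0.5 * np.sum((A @ z - y) ** 2) + lam * np.sum(np.abs(z))

xh = np.zeros(n)
for _ in range(2000):                     # ISTA iterations
    xh = soft(xh - (A.T @ (A @ xh - y)) / L, lam / L)
```

Note that m < n: the scene is sampled below the Nyquist count, and sparsity supplies the missing information.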
Fixed-rate compressed floating-point arrays
Energy Science and Technology Software Center (ESTSC)
2014-03-30
ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.
Compression of point-texture 3D motion sequences
NASA Astrophysics Data System (ADS)
Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk
2005-10-01
In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.
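The octree representation itself is compact: each internal node stores one child-occupancy byte, and the node stream can be emitted top-down, which is what enables the progressive transmission described above (the PPM context modeling of those bytes is a separate step). A minimal sketch:

```python
import numpy as np

def encode_octree(points, origin, size, depth):
    """Emit one child-occupancy byte per internal node, in top-down order."""
    stream = []
    def recurse(pts, org, s, d):
        if d == 0:
            return
        half = s / 2.0
        mask, children = 0, []
        for i in range(8):
            off = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
            lo = org + off
            sel = np.all((pts >= lo) & (pts < lo + half), axis=1)
            if sel.any():
                mask |= 1 << i
                children.append((pts[sel], lo))
        stream.append(mask)
        for sub, lo in children:
            recurse(sub, lo, half, d - 1)
    recurse(np.asarray(points, float), np.asarray(origin, float), float(size), depth)
    return stream

def decode_octree(stream, origin, size, depth):
    """Rebuild occupied leaf voxels (their lower corners) from the byte stream."""
    voxels, it = [], iter(stream)
    def recurse(org, s, d):
        if d == 0:
            voxels.append(tuple(org))
            return
        mask, half = next(it), s / 2.0
        for i in range(8):
            if mask & (1 << i):
                off = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
                recurse(org + off, half, d - 1)
    recurse(np.asarray(origin, float), float(size), depth)
    return voxels
```

Truncating the stream after any complete level yields a coarser but valid model, which is the basis of top-down progressive transmission.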
Prediction of optimal operation point existence and parameters in lossy compression of noisy images
NASA Astrophysics Data System (ADS)
Zemliachenko, Alexander N.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2014-10-01
This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free images might have an optimum, i.e., the maximal noise removal effect takes place. If an OOP exists, then it is reasonable to compress an image in its neighbourhood. If not, more "careful" compression is reasonable. In this paper, we demonstrate that the existence of an OOP can be predicted based on a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.
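The key observation behind such predictors is that an orthonormal DCT preserves noise variance, so the high-frequency 8x8 DCT coefficients of a noisy image estimate the noise level directly. A sketch of that statistic (the details below are assumptions; the paper's exact predictor differs):

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix."""
    C = np.zeros((N, N))
    for k in range(N):
        a = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        for n in range(N):
            C[k, n] = a * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    return C

def block_dct2(block):
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T  # separable 2-D DCT

def noise_sigma_estimate(blocks):
    """For additive white Gaussian noise, the std of the high-frequency DCT
    coefficients estimates the noise std (orthonormal transforms preserve
    variance, and image content concentrates in the low frequencies)."""
    hf = [block_dct2(b)[4:, 4:].ravel() for b in blocks]
    return np.std(np.concatenate(hf))
```

On textured content the high-frequency coefficients also carry signal energy, which is exactly the kind of statistic the prediction of OOP existence can exploit.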
Parametric temporal compression of infrared imagery sequences containing a slow-moving point target.
Huber-Shalem, Revital; Hadar, Ofer; Rotman, Stanley R; Huber-Lerner, Merav
2016-02-10
Infrared (IR) imagery sequences are commonly used for detecting moving targets in the presence of evolving cloud clutter or background noise. This research focuses on slow-moving point targets that are less than one pixel in size, such as aircraft at long range from a sensor. Since transmitting IR imagery sequences to a base unit or storing them consumes considerable time and resources, a compression method that maintains the point target detection capabilities is highly desirable. In this work, we introduce a new parametric temporal compression that incorporates a Gaussian fit and a polynomial fit. We then proceed to spatial compression by spatially applying the lowest possible number of bits for representing each parameter extracted by the temporal compression, followed by bit encoding, to achieve an end-to-end compression process of the sequence for data storage and transmission. We evaluate the proposed compression method using the variance estimation ratio score (VERS), which is a signal-to-noise ratio (SNR)-based measure for point target detection that scores each pixel and yields an image of SNR scores. A high pixel score indicates that a target is suspected to traverse the pixel. From this score image we calculate the movie scores, which are found to be close to those of the original sequences. Furthermore, we present a new algorithm for automatic detection of the target tracks. This algorithm extracts the target location from the SNR scores image, which is acquired during the evaluation process, using the Hough transform. This algorithm yields similar detection probabilities (PD) and false alarm probabilities (PFA) for the compressed and the original sequences. The parameters of the new parametric temporal compression successfully differentiate the targets from the background, yielding high PDs (above 83%) with low PFAs (below 0.043%) without the need to calculate pixel scores or to apply automatic detection of the target tracks. PMID
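The Gaussian part of such a parametric temporal fit can be computed in closed form by fitting a parabola to the log-intensity profile of each pixel (a sketch assuming strictly positive samples; the paper combines Gaussian and polynomial fits and does not necessarily use this method):

```python
import numpy as np

def fit_gaussian(t, y):
    """Closed-form Gaussian fit: a quadratic fit to log(y) yields amplitude A,
    center mu, and width sigma of y = A*exp(-(t-mu)**2/(2*sigma**2)).
    Assumes y > 0 everywhere."""
    c2, c1, c0 = np.polyfit(t, np.log(y), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = -c1 / (2.0 * c2)
    A = np.exp(c0 - c1 ** 2 / (4.0 * c2))
    return A, mu, sigma

t = np.arange(20.0)
y = 5.0 * np.exp(-(t - 10.0) ** 2 / (2.0 * 2.0 ** 2))  # A=5, mu=10, sigma=2
A, mu, sigma = fit_gaussian(t, y)
```

Storing only (A, mu, sigma) per pixel instead of the full temporal profile is what produces the compression.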
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
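The quantization-with-dithering step can be sketched as subtractive dithering: add a reproducible uniform offset before rounding and subtract it again on restore, which decorrelates the quantization error from the signal and bounds it by half a step (a sketch only; fpack additionally ties the step size to the measured background noise):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                     # "pixel" values
scale = 0.01                                  # quantization step
d = rng.uniform(-0.5, 0.5, size=x.size)       # dither, reproducible from a stored seed
q = np.round(x / scale + d).astype(np.int64)  # scaled integers: losslessly compressible
x_rec = (q - d) * scale                       # subtract the same dither on restore
# The per-pixel error is bounded by half a step and is decorrelated from x,
# which is why dithered quantization preserves photometric precision so well.
```

A coarser `scale` gives a higher compression ratio at the cost of a proportionally larger error bound, matching the trade-off described in the abstract.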
Compression After Impact Testing of Sandwich Structures Using the Four Point Bend Test
NASA Technical Reports Server (NTRS)
Nettles, Alan T.; Gregory, Elizabeth; Jackson, Justin; Kenworthy, Devon
2008-01-01
For many composite laminated structures, the design is driven by data obtained from Compression after Impact (CAI) testing. There currently is no standard for CAI testing of sandwich structures, although there is one for solid laminates of a certain thickness and lay-up configuration. Most sandwich CAI testing has followed the basic technique of this standard, where the loaded ends are precision machined and placed between two platens and compressed until failure. If little or no damage is present during the compression tests, the loaded ends may need to be potted to prevent end brooming. By putting a sandwich beam in a four-point bend configuration, the region between the inner supports is put under a compressive load, and a sandwich laminate with damage can be tested in this manner without the need for precision machining. Also, specimens with no damage can be taken to failure so direct comparisons between damaged and undamaged strength can be made. Data are presented that demonstrate the four-point bend CAI test and are compared with end-loaded compression tests of the same sandwich structure.
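In the four-point bend configuration, the span between the inner load points carries a constant bending moment, and thin-face sandwich theory gives the facesheet compressive stress directly. A sketch with assumed symbol names and illustrative values (not data from the tests above):

```python
def face_stress_4pb(P, La, b, tf, d):
    """Facesheet stress in the constant-moment inner span of a four-point bend
    sandwich test, thin-face approximation: sigma = M / (b * d * tf).
    P  : total applied load (each inner load point carries P/2)
    La : distance from an outer support to the nearest inner load point
    b  : beam width, tf : facesheet thickness
    d  : distance between facesheet centroids."""
    M = (P / 2.0) * La  # constant bending moment between the inner load points
    return M / (b * d * tf)

# Illustrative (assumed) values: 1 kN total load, 100 mm outer-to-inner span,
# 50 mm width, 1 mm facesheets separated by 25 mm between centroids.
sigma = face_stress_4pb(P=1000.0, La=0.1, b=0.05, tf=0.001, d=0.025)
```

This is why the damaged region needs no machined or potted loaded ends: the compressive load is introduced through bending rather than through the specimen ends.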
Graph-Based Compression of Dynamic 3D Point Cloud Sequences.
Thanou, Dorina; Chou, Philip A; Frossard, Pascal
2016-04-01
This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486
Wet compression performance of a transonic compressor rotor at its near stall point
NASA Astrophysics Data System (ADS)
Yang, Huaifeng; Zheng, Qun; Luo, Mingcong; Sun, Lanxin; Bhargava, Rakesh
2011-03-01
In order to study the effects of wet compression on a transonic compressor, a full 3-D steady numerical simulation was carried out under varying conditions. Different injected water flow rates and droplet diameters were considered. The effect of wet compression on the shock, separated flow, pressure ratio, and efficiency was investigated. Additionally, the effect of wet compression on the tip clearance when the compressor runs in the near-stall and stall situations was emphasized. Analysis of the results shows that the range of stable operation is extended, and that the pressure ratio and inlet air flow rate are also increased at the near-stall point. In addition, it seems that there is an optimum size of the droplet diameter.
Analysis of three-point-bend test for materials with unequal tension and compression properties
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1974-01-01
An analysis capability is described for the three-point-bend test applicable to materials of linear but unequal tensile and compressive stress-strain relations. The capability consists of numerous equations of simple form and their graphical representation. Procedures are described to examine the local stress concentrations and failure modes initiation. Examples are given to illustrate the usefulness and ease of application of the capability. Comparisons are made with materials which have equal tensile and compressive properties. The results indicate possible underestimates for flexural modulus or strength ranging from 25 to 50 percent greater than values predicted when accounting for unequal properties. The capability can also be used to reduce test data from three-point-bending tests, extract material properties useful in design from these test data, select test specimen dimensions, and size structural members.
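For a rectangular section with linear but unequal moduli, force balance over the cross-section shifts the neutral axis so that Ec·hc² = Et·ht²; corrections of the kind described above follow from this shift. A sketch (standard bimodulus beam theory, not the paper's specific equations):

```python
import math

def neutral_axis(h, Et, Ec):
    """Tension/compression zone depths for a rectangular bimodulus section.
    With linear stress through the depth, zero net axial force requires
    Ec*hc**2 = Et*ht**2 and ht + hc = h, hence hc/ht = sqrt(Et/Ec)."""
    r = math.sqrt(Et / Ec)
    hc = h * r / (1.0 + r)
    return h - hc, hc  # (ht, hc)

ht, hc = neutral_axis(h=3.0, Et=4.0, Ec=1.0)  # material stiffer in tension
```

When Et = Ec the neutral axis returns to mid-depth, recovering the equal-property case used for comparison in the abstract.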
NASA Astrophysics Data System (ADS)
Magnucka-Blandzi, Ewa
2016-06-01
The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along its axis. The buckling problem has been described and solved mathematically, and critical loads have been calculated. In the particular case, the Euler buckling load is obtained. Explicit solutions are given. The values of the critical loads are collected in tables and shown in a figure. The relation between the point of load application and the critical load is presented.
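The Euler limit recovered in the particular case is, for a simply supported column, P_cr = π²EI/L². A quick numerical check with assumed illustrative values:

```python
import math

# Euler buckling load of a simply supported column: P_cr = pi^2 * E * I / L^2.
# Illustrative (assumed) values: a steel bar of rectangular cross-section.
E = 200e9            # Pa, Young's modulus
b, h = 0.02, 0.03    # m, cross-section width and depth
L = 2.0              # m, simply supported length
I = b * h ** 3 / 12  # second moment of area about the bending axis
P_cr = math.pi ** 2 * E * I / L ** 2  # ~ 22.2 kN
```

Loads applied away from the beam ends raise the critical value above this baseline, which is the relation the paper tabulates.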
An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
1990-01-01
An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.
Comparison of ring compression testing to three point bend testing for unirradiated ZIRLO cladding
2015-04-01
Safe shipment and storage of nuclear reactor discharged fuel requires an understanding of how the fuel may perform under the various conditions that can be encountered. One specific focus of concern is performance during a shipment drop accident. Tests at Savannah River National Laboratory (SRNL) are being performed to characterize the properties of fuel clad relative to a mechanical accident condition such as a container drop. Unirradiated ZIRLO tubing samples have been charged with a range of hydride levels to simulate actual fuel rod levels. Samples of the hydrogen charged tubes were exposed to a radial hydride growth treatment (RHGT) consisting of heating to 400°C, applying initial hoop stresses of 90 to 170 MPa with controlled cooling and producing hydride precipitates. Initial samples have been tested using both a) the ring compression test (RCT), which is shown to be sensitive to radial hydride, and b) three-point bend tests, which are less sensitive to radial hydride effects. Hydrides are generated in Zirconium based fuel cladding as a result of coolant (water) oxidation of the clad, hydrogen release, and a portion of the released (nascent) hydrogen absorbed into the clad and eventually exceeding the hydrogen solubility limit. The orientation of the hydrides relative to the subsequent normal and accident strains has a significant impact on the failure susceptibility. In this study the impacts of stress, temperature and hydrogen levels are evaluated in reference to the propensity for hydride reorientation from the circumferential to the radial orientation. In addition the effects of radial hydrides on the Quasi Ductile Brittle Transition Temperature (DBTT) were measured. The results suggest that a) the severity of the radial hydride impact is related to the hydrogen level-peak temperature combination (for example at a peak drying temperature of 400°C; 800 PPM hydrogen has less of an impact/ less radial hydride fraction than 200 PPM hydrogen for the same thermal
Index of Unconfined Compressive Strength of SAFOD Core by Means of Point-Load Penetrometer Tests
NASA Astrophysics Data System (ADS)
Enderlin, M. B.; Weymer, B.; D'Onfro, P. S.; Ramos, R.; Morgan, K.
2010-12-01
The San Andreas Fault Observatory at Depth (SAFOD) project is motivated by the need to answer fundamental questions on the physical and chemical processes controlling faulting and earthquake generation within major plate-boundaries. In 2007, approximately 135 ft (41.1 m) of 4 inch (10.16 cm) diameter rock core was recovered from two actively deforming traces of the San Andreas Fault. 97 evenly (more or less) distributed index tests for Unconfined Compressive Strength (UCS) were performed on the cores using a modified point-load penetrometer. The point-load penetrometer used was a handheld micro-conical point indenter referred to as the Dimpler, in reference to the small conical depression that it creates. The core surface was first covered with compliant tape that is about a square inch in size. The conical tip of the indenter is coated with a (red) dye and then forced, at a constant axial load, through the tape and into the sample creating a conical red depression (dimple) on the tape. The combination of red dye and tape preserves a record of the dimple geometrical attributes. The geometrical attributes (e.g. diameter and depth) depend on the rock UCS. The diameter of a dimple is measured with a surface measuring magnifier. Correlation between dimple diameter and UCS has been previously established with triaxial testing. The SAFOD core gave Dimpler UCS values in the range of 10 psi (68.9 KPa) to 15,000 psi (103.4 MPa). The UCS index also allows correlations between geomechanical properties and well log-derived petrophysical properties.
York, A.R. II
1997-07-01
The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through an Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
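The particle-to-grid transfer at the heart of MPM can be sketched in one dimension with linear shape functions: each particle scatters its mass and momentum to the two nodes that bracket it, which is exactly conservative (a generic sketch, not the thesis's membrane/fluid formulation):

```python
import numpy as np

def particles_to_grid(xp, mp, vp, ngrid, dx):
    """1-D MPM particle-to-grid transfer with linear (tent) shape functions.
    Mass and momentum are scattered to the two nodes bracketing each particle;
    because the weights sum to one, both quantities are conserved exactly."""
    mass = np.zeros(ngrid)
    momentum = np.zeros(ngrid)
    for x, m, v in zip(xp, mp, vp):
        i = int(x // dx)              # index of the left bracketing node
        w = (x - i * dx) / dx         # fractional distance to the left node
        mass[i] += (1.0 - w) * m
        mass[i + 1] += w * m
        momentum[i] += (1.0 - w) * m * v
        momentum[i + 1] += w * m * v
    return mass, momentum

mass, momentum = particles_to_grid(
    xp=[0.3, 1.7, 2.2], mp=[1.0, 2.0, 0.5], vp=[1.0, -1.0, 3.0], ngrid=4, dx=1.0)
```

The momentum equation is then solved on these grid values, and the updated grid velocities are interpolated back to the particles.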
Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus
2015-01-01
Since some assessment and diagnostic methods require palpation or the application of forces on the skin, which affects the structures beneath, it is important to define the possible influences of this physical contact on skin temperature. Thus, the aim of the present study is to determine the ideal time for performing thermographic examination after palpation, based on the assessment of skin temperature evolution. A randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were subjected to compressive forces of 0, 1, 2 and 3 kg/cm2 for 30 seconds with a washout period of 48 hours, using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: the myofascial trigger point in the levator scapulae, the biceps brachii muscle, and the palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05) on the palmaris longus tendon and biceps brachii muscle. However, no difference was observed when comparing the different compressive forces (p > 0.05). In conclusion, infrared thermography can be used after assessment or diagnostic methods that involve the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after the contact with the skin. PMID:26070073
Map-Based Compressive Sensing Model for Wireless Sensor Network Architecture, A Starting Point
NASA Astrophysics Data System (ADS)
Mahmudimanesh, Mohammadreza; Khelil, Abdelmajid; Yazdani, Nasser
Sub-Nyquist sampling techniques for Wireless Sensor Networks (WSN) are gaining increasing attention as an alternative method to capture natural events with desired quality while minimizing the number of active sensor nodes. Among those techniques, Compressive Sensing (CS) approaches are of special interest, because of their mathematically concrete foundations and efficient implementations. We describe how the geometrical representation of the sampling problem can influence the effectiveness and efficiency of CS algorithms. In this paper we introduce a Map-based model which exploits redundancy attributes of signals recorded from natural events to achieve an optimal representation of the signal.
NASA Astrophysics Data System (ADS)
Baek, Hwanjo; Kim, Dae-Hoon; Kim, Kyoungman; Choi, Young-Sup; Kang, Sang-Soo; Kang, Jung-Seock
2013-04-01
Recently, the use of underground openings for various purposes is expanding, particularly for the crushing and processing facilities in open-pit limestone mines. The suitability of current rockmass classification systems for limestone or dolostone is therefore one of the major concerns for field engineers. Consequently, development of the limestone mine site characterization model (LSCM) is underway through the joint efforts of some research institutes and universities in Korea. An experimental program was undertaken to investigate the correlation between rock properties, for quick adaptation of the rockmass classification system in the field. The uniaxial compressive strength (UCS) of rock material is a key property for rockmass characterization purposes and is reasonably included in the rock mass rating (RMR). As core samples for the uniaxial compression test are not always easily obtained, indirect tests such as the point load test can be a useful alternative, and various equations between the UCS and the point load strength index (Is50) have been reported in the literature. It is generally proposed that the relationship between the Is50 and the UCS value depends on the rock type and also on the testing conditions. This study investigates the correlation between the UCS and the Is50 of the Pungchon limestone, with a total of 48 core samples obtained from an underground limestone mine. Both uniaxial compression and point load specimens were prepared from the same segment of NX-sized rock cores. The derived equation obtained from regression analysis of the two variables is UCS = 26 Is50, with a root-mean-square error of 13.18.
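A relation of the form UCS = 26 Is50 is a zero-intercept least-squares fit; the slope and RMSE can be reproduced from paired data as follows (a sketch with synthetic data, not the Pungchon measurements):

```python
import numpy as np

def fit_zero_intercept(is50, ucs):
    """Least-squares slope k for the zero-intercept model UCS = k * Is50,
    plus the root-mean-square error of the fit."""
    k = np.sum(is50 * ucs) / np.sum(is50 ** 2)
    rmse = np.sqrt(np.mean((ucs - k * is50) ** 2))
    return k, rmse

# Synthetic points lying exactly on the reported line (illustrative only).
is50 = np.array([1.0, 2.0, 3.0, 4.0])
ucs = 26.0 * is50
k, rmse = fit_zero_intercept(is50, ucs)
```

Forcing the intercept through the origin reflects the physical expectation that zero point-load strength corresponds to zero compressive strength.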
Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel
2016-07-18
First-principles calculations together with analysis of the experimental data found for 3d^9 and 3d^7 ions in cubic oxides proved that the center found in irradiated CaO:Ni^2+ corresponds to Ni^+ under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥ − g0 = 0.065) measured at T = 20 K obeys the superposition of the |3z^2 − r^2⟩ and |x^2 − y^2⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e. one in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni^+. PMID:27028895
NASA Astrophysics Data System (ADS)
Dutta, Vimala
1993-07-01
An implicit finite volume nodal point scheme has been developed for solving the two-dimensional compressible Navier-Stokes equations. The numerical scheme is evolved by efficiently combining the basic ideas of the implicit finite-difference scheme of Beam and Warming (1978) with those of nodal point schemes due to Hall (1985) and Ni (1982). The 2-D Navier-Stokes solver is implemented for steady, laminar/turbulent flows past airfoils by using C-type grids. Turbulence closure is achieved by employing the algebraic eddy-viscosity model of Baldwin and Lomax (1978). Results are presented for the NACA-0012 and RAE-2822 airfoil sections. Comparison of the aerodynamic coefficients with experimental results for the different test cases presented here establishes the validity and efficiency of the method.
NASA Astrophysics Data System (ADS)
Tiofack, C. G. L.; Coulibaly, S.; Taki, M.; De Bièvre, S.; Dujardin, G.
2015-10-01
It is shown that sufficiently large periodic modulations in the coefficients of a nonlinear Schrödinger equation can drastically impact the spatial shape of the Peregrine soliton solutions: they can develop multiple compression points of the same amplitude, rather than only a single one, as in the spatially homogeneous focusing nonlinear Schrödinger equation. The additional compression points are generated in pairs forming a comblike structure. The number of additional pairs depends on the amplitude of the modulation but not on its wavelength, which controls their separation distance. The dynamics and characteristics of these generalized Peregrine solitons are analytically described in the case of a completely integrable modulation. A numerical investigation shows that their main properties persist in nonintegrable situations, where no exact analytical expression of the generalized Peregrine soliton is available. Our predictions are in good agreement with numerical findings for an interesting specific case of an experimentally realizable periodically dispersion modulated photonic crystal fiber. Our results therefore pave the way for the experimental control and manipulation of the formation of generalized Peregrine rogue waves in the wide class of physical systems modeled by the nonlinear Schrödinger equation.
Moraska, Albert F.; Hickner, Robert C.; Kohrt, Wendy M.; Brewer, Alan
2012-01-01
Objective To demonstrate proof-of-principle measurement of physiological change within an active myofascial trigger point (MTrP) undergoing trigger point release (ischemic compression). Design Interstitial fluid was sampled continuously at a trigger point before and after intervention. Setting A biomedical research clinic at a university hospital. Participants Two subjects from a pain clinic presenting with chronic headache pain. Interventions A single microdialysis catheter was inserted into an active MTrP of the upper trapezius to allow continuous sampling of interstitial fluid before and after application of trigger point therapy by a massage therapist. Main Outcome Measures Procedural success, pain tolerance, feasibility of intervention during sample collection, and determination of physiologically relevant values for local blood flow as well as glucose and lactate concentrations. Results Both patients tolerated the microdialysis probe insertion into the MTrP and the treatment intervention without complication. Glucose and lactate concentrations were measured in the physiological range. Following intervention, a sustained increase in lactate was noted for both subjects. Conclusions Identifying physiological constituents of MTrPs following intervention is an important step toward understanding the pathophysiology and resolution of myofascial pain. The present study advances that aim by showing, as proof of concept, that interstitial fluid can be collected from an MTrP before and after intervention using microdialysis, providing methodological insight toward treatment mechanism and pain resolution. Of the biomarkers measured in this study, lactate may be the most relevant for detection and treatment of abnormalities in the MTrP. PMID:22975226
1DB, a one-dimensional diffusion code for nuclear reactor analysis
Little, W.W., Jr.
1991-09-01
1DB is a multipurpose, one-dimensional (plane, cylinder, sphere) diffusion theory code for use in reactor analysis. The code is designed to do the following: To compute k{sub eff} and perform criticality searches on time absorption, reactor composition, reactor dimensions, and buckling by means of either a flux or an adjoint model; to compute collapsed microscopic and macroscopic cross sections averaged over the spectrum in any specified zone; to compute resonance-shielded cross sections using data in the shielding factor format; and to compute isotopic burnup using decay chains specified by the user. All programming is in FORTRAN. Because variable dimensioning is employed, no simple restrictions on problem complexity can be stated. The number of spatial mesh points, energy groups, upscattering terms, etc. is limited only by the available memory. The source file contains about 3000 cards. 4 refs.
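As a rough illustration of what a diffusion-theory code of this kind computes, the sketch below solves the one-group, one-dimensional slab eigenvalue problem by finite differences and power iteration on k. The cross sections are invented for the example, and the scheme is far simpler than 1DB itself (one group, no burnup, no searches):

```python
import numpy as np

# One-group slab diffusion sketch: solve -D phi'' + Sigma_a phi = (1/k) nuSigma_f phi
# on a bare slab with zero-flux boundaries, via finite differences and
# power iteration on k.  Illustrative constants, not real reactor data.
D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # cm, 1/cm, 1/cm (made-up values)
L, n = 100.0, 200                      # slab width (cm), interior mesh points
h = L / (n + 1)

# Tridiagonal loss operator A = -D d^2/dx^2 + Sigma_a with zero-flux BCs
main = (2.0 * D / h**2 + sig_a) * np.ones(n)
off = (-D / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

phi = np.ones(n)                       # initial flux guess
k = 1.0
for _ in range(200):                   # power iteration
    src = nu_sig_f * phi               # fission source
    phi_new = np.linalg.solve(A, src / k)
    k *= phi_new.sum() / phi.sum()     # update eigenvalue estimate
    phi = phi_new / np.linalg.norm(phi_new)

# Analytic check for this bare slab: k = nuSigma_f / (Sigma_a + D*(pi/L)^2)
print(round(k, 4))
```

The converged k agrees with the one-group bare-slab formula; a production code adds energy groups, geometries, cross-section collapsing, and burnup chains on top of this same eigenvalue kernel.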
Cagnie, Barbara; Castelein, Birgit; Pollie, Flore; Steelant, Lieselotte; Verhoeyen, Hanne; Cools, Ann
2015-07-01
The aim of this review was to describe the effects of ischemic compression and dry needling on trigger points in the upper trapezius muscle in patients with neck pain, and to compare these two interventions with other therapeutic interventions aiming to inactivate trigger points. Both PubMed and Web of Science were searched for randomized controlled trials using different key word combinations related to myofascial neck pain and therapeutic interventions. Four main outcome parameters were evaluated over the short and medium term: pain, range of motion, functionality, and quality of life, including depression. Fifteen randomized controlled trials were included in this systematic review. There is moderate evidence for ischemic compression and strong evidence for dry needling to have a positive effect on pain intensity. This pain decrease is greater than that obtained with active range-of-motion exercises (ischemic compression) and no or placebo intervention (ischemic compression and dry needling), but similar to that of other therapeutic approaches. There is moderate evidence that both ischemic compression and dry needling increase side-bending range of motion, with effects similar to those of lidocaine injection. There is weak evidence regarding their effects on functionality and quality of life. On the basis of this systematic review, ischemic compression and dry needling can both be recommended in the treatment of neck pain patients with trigger points in the upper trapezius muscle. Additional research with high-quality study designs is needed to develop more conclusive evidence. PMID:25768071
An evaluation of the sandwich beam in four-point bending as a compressive test method for composites
NASA Technical Reports Server (NTRS)
Shuart, M. J.; Herakovich, C. T.
1978-01-01
The experimental phase of the study included compressive tests on HTS/PMR-15 graphite/polyimide, 2024-T3 aluminum alloy, and 5052 aluminum honeycomb at room temperature, and tensile tests on graphite/polyimide at room temperature, -157 C, and 316 C. Elastic properties and strength data are presented for three laminates. The room temperature elastic properties were generally found to differ in tension and compression with Young's modulus values differing by as much as twenty-six percent. The effect of temperature on modulus and strength was shown to be laminate dependent. A three-dimensional finite element analysis predicted an essentially uniform, uniaxial compressive stress state in the top flange test section of the sandwich beam. In conclusion, the sandwich beam can be used to obtain accurate, reliable Young's modulus and Poisson's ratio data for advanced composites; however, the ultimate compressive stress for some laminates may be influenced by the specimen geometry.
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1971-01-01
The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.
NASA Astrophysics Data System (ADS)
Olejnik, Paweł; Awrejcewicz, Jan
2011-05-01
This paper presents an extension of an optimal discrete control methodology partially included in the Proceedings of, and presented at, the international conference on "Dynamical Systems Theory and Applications". A scheme is applied to realise an active control strategy, with a numerically estimated linear optimal quadratic performance index, for reducing the impact-induced deformation of a human chest loaded by a point mass at the central point of the upper-torso body. We focus on the application of one active element attached between the torso's upper back (viewed from the posterior direction) and a fixed support. As a practical result, we provide values of the quality and reaction matrices, some useful deformation and energy-dissipation time characteristics, and the resulting shape of the control-force time characteristic that would be required for a hypothetical real implementation.
Microbunching and RF Compression
Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.
2010-05-23
Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.
Alladio, F.; Mancuso, A.; Micozzi, P.; Rogier, F.
2006-08-15
The ideal magnetohydrodynamic (MHD) stability analysis of axisymmetric plasma equilibria is simplified if magnetic coordinates, such as Boozer coordinates (ψ_T radial, i.e., toroidal flux divided by 2π; θ poloidal angle; φ toroidal angle; with Jacobian √g ∝ 1/B²), are used. The perturbed plasma displacement ξ is Fourier expanded in the poloidal angle, and the normal-mode equation δW_p(ξ*, ξ) = ω² δW_k(ξ*, ξ) (where δW_p and δW_k are the perturbed potential and kinetic plasma energies and ω² is the eigenvalue) is solved through a 1D radial finite-element method. All magnetic coordinates are, however, plagued by divergent metric coefficients if magnetic separatrices exist within (or at the boundary of) the plasma. The ideal MHD stability of plasma equilibria in the presence of magnetic separatrices is therefore a disputed problem. We consider the most general case of a simply connected axisymmetric plasma, which embeds an internal magnetic separatrix, ψ_T = ψ_T^X, with rotational transform ι(ψ_T^X) = 0 and regular X-points (B ≠ 0), and is bounded by a second magnetic separatrix at the edge, ψ_T = ψ_T^max, with ι(ψ_T^max) ≠ 0, that includes a part of the symmetry axis (R = 0) and is limited by two singular X-points (B = 0). At the embedded separatrix, the ideal MHD stability analysis requires the continuity of the normal perturbed plasma displacement variable, ξ^ψ = ξ·∇ψ_T; the other displacement variables, the binormal η^ψ = ξ·(∇θ − ι∇φ) and the parallel μ = −√g ξ·∇φ, can instead be discontinuous everywhere.
NASA Astrophysics Data System (ADS)
Bruno, Giovanni; Bobbo, Luigi; Vessia, Giovanna
2014-05-01
Is50 and RL indices are commonly used to indirectly estimate the compression strength of a rock deposit with in situ and laboratory devices. The widespread use of point load and Schmidt hammer tests is due to the simplicity and speed of their execution. Their indices can be related to the UCS by means of ordinary least squares regression analyses. Several researchers suggest taking the lithology into account to build highly correlated empirical expressions (R² > 0.8) for drawing UCS from Is50 or RL values. Nevertheless, the lower and upper bounds of the UCS ranges that can be estimated by means of the two indirect indices are not clearly defined yet. Aydin (2009) stated that the Schmidt hammer test should be used to assess the compression resistance of rocks characterized by UCS > 12-20 MPa. On the other hand, point load measurements can be performed on weak rocks, but upper-bound values for UCS are not suggested. In this paper, the empirical relationships between UCS, RL and Is50 are sought by means of the percentile method (Bruno et al. 2013). This method looks for the best regression function, between measured data of UCS and one of the indirect indices, drawn from a subset of the couples of measures corresponding to the percentile values. These values are taken from the original dataset of both measures by calculating the cumulative function. No hypothesis on the probability distribution of the sample is needed, and the procedure is shown to be robust with respect to odd values or outliers. In this study, carbonate sedimentary rocks are investigated. According to the rock mass classification of Dobereiner and De Freitas (1986), the UCS values for the studied rocks range from 'extremely weak' to 'strong'. For the analyzed data, UCS varies between 1.18 and 270.70 MPa. Thus, through the percentile method the best empirical relationships UCS-Is50 and UCS-RL are plotted. Relationships between Is50 and RL are drawn, too.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compressions known also as entropy coding, to reduce to the final size the intermediate representation as indices. The efficiency of the compression entropy coding, known also as entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compressions known also as entropy coding, to reduce to the final size the intermediate representation as indices. The efficiency of the compression entropy coding, known also as entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
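The common thread of the four patent abstracts above is that quantization indices uncertain by one unit can carry hidden bits. A generic parity-based sketch of that idea (an illustration of the principle, not the patented algorithm) might look like:

```python
# Embed bits in quantization indices that are uncertain by one unit:
# nudge each index to the adjacent value so its parity encodes a bit.
# Generic LSB-style sketch, not the patented method.

def embed(indices, bits):
    """Move each index by at most one unit so that (index % 2) == bit."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx % 2 == 0 else -1   # step to the adjacent value
        out.append(idx)
    return out

def extract(indices):
    """Recover the hidden bits from the index parities."""
    return [idx % 2 for idx in indices]

quantized = [12, 7, 3, 44, 9, 10]      # hypothetical quantizer output
payload = [1, 1, 0, 0, 1, 0]
stego = embed(quantized, payload)
assert extract(stego) == payload
assert all(abs(a - b) <= 1 for a, b in zip(stego, quantized))
```

Because each index moves by at most one unit, the perturbation stays within the uncertainty the lossy coder already tolerates, which is why the host data quality is essentially unaffected.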
NASA Astrophysics Data System (ADS)
Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing: digital holography performs computational image estimation from data captured by an electronic focal plane array, while compressive sensing enables accurate reconstruction from incomplete data by using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector; in particular, single-shot holographic tomography has a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posedness by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis because of speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector; scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors, and a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field
Multiphase, Multicomponent Compressibility in Geothermal Reservoir Engineering
Macias-Chapa, L.; Ramey, H.J. Jr.
1987-01-20
Coefficients of compressibility below the bubble point were computed with a thermodynamic model for single- and multicomponent systems. Results showed coefficients of compressibility below the bubble point to be larger than the gas coefficient of compressibility at the same conditions. Two-phase compressibilities computed in the conventional way are underestimated and may lead to errors in reserve estimation and well test analysis. 10 refs., 9 figs.
NASA Astrophysics Data System (ADS)
Anderson, Peter G.; Liu, Changmeng
2003-01-01
We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method involves novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the mask as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution, allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
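The rearrangement step described above can be sketched as follows. The mask and image values here are random stand-ins, and counting binary transitions stands in for the Huffman back end: fewer transitions means longer runs and a more skewed symbol distribution.

```python
import numpy as np

# A halftone pixel is 1 where the gray value meets or exceeds its mask
# threshold.  Sorting pixels by threshold turns a smooth image region into
# a near-step binary sequence, which is highly compressible.
rng = np.random.default_rng(0)
n = 4096
mask = rng.permutation(n)                           # non-repeated thresholds
gray = np.full(n, 1800) + rng.integers(-50, 50, n)  # smooth region of the image
halftone = (gray >= mask).astype(np.uint8)

# Reversible rearrangement: use the mask as a sort key.
order = np.argsort(mask)
rearranged = halftone[order]

# Count 0/1 transitions; fewer transitions -> better run-length/Huffman coding.
trans_raw = int(np.count_nonzero(np.diff(halftone)))
trans_sorted = int(np.count_nonzero(np.diff(rearranged)))
print(trans_sorted, "<", trans_raw)
```

Since `order` is a permutation derived only from the (fixed, known) mask, the decoder can invert it exactly, so the rearrangement costs nothing and loses nothing.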
libpolycomp: Compression/decompression library
NASA Astrophysics Data System (ADS)
Tomasi, Maurizio
2016-04-01
Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
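The polynomial part of the scheme can be sketched as follows. This toy version fits each chunk with numpy.polyfit and keeps only the coefficients; it omits the Fourier-filtering stage of the real library and assumes the chunk length divides the stream length:

```python
import numpy as np

# Toy "polynomial compression" for smooth, noise-free timelines: split the
# stream into chunks, fit a low-degree polynomial to each chunk, and keep
# only the coefficients when the fit meets an error bound.  Simplified
# illustration, not the libpolycomp implementation.

def compress(samples, chunk=32, deg=3, tol=1e-6):
    coeffs = []
    for i in range(0, len(samples), chunk):
        y = samples[i:i + chunk]
        x = np.arange(len(y))
        c = np.polyfit(x, y, deg)
        if np.max(np.abs(np.polyval(c, x) - y)) > tol:
            raise ValueError("chunk not smooth enough for this degree")
        coeffs.append(c)
    return coeffs

def decompress(coeffs, chunk=32):
    # assumes every chunk was full-length (chunk divides the stream length)
    return np.concatenate([np.polyval(c, np.arange(chunk)) for c in coeffs])

# Smooth "pointing-like" signal: a slow quadratic drift, 1024 samples.
t = np.linspace(0.0, 1.0, 1024)
signal = 0.5 * t**2 + 0.1 * t
packed = compress(signal)
restored = decompress(packed)
ratio = signal.size / (len(packed) * 4)   # samples stored vs. coefficients stored
print(ratio)
```

For genuinely smooth data (ephemerides, pointing) the coefficient count is a small fraction of the sample count, which is where the large compression ratios come from; noisy data fails the error bound and must fall back to other coders.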
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
Wang, G Y; Zhang, C C; Ren, K; Zhang, P P; Liu, C H; Zheng, Z A; Chen, Y; Fang, R
2015-01-01
This study aimed to evaluate the results and complications of image-guided percutaneous kyphoplasty (PKP) using computed tomography (CT) and C-arm fluoroscopy, with finger-touch guidance to determine the needle entry point. Of the 86 patients (106 PKP) examined, 56 were treated for osteoporotic vertebral compression fractures and 30 for vertebral tumors. All patients underwent image-guided treatment using CT and conventional fluoroscopy, with finger-touch identification of a puncture point within a small incision (1.5 to 2 cm). Partial or complete pain relief was achieved in 98% of patients within 24 h of treatment. Moreover, a significant improvement in functional mobility and reduction in analgesic use was observed. CT allowed the detection of cement leakage in 20.7% of the interventions. No bone cement leakages with neurologic symptoms were noted. All work channels were made only once, and bone cement was distributed near the center of the vertebral body. Our study confirms the efficacy of PKP treatment in osteoporotic and oncological patients. The combination of CT and C-arm fluoroscopy with finger-touch guidance reduces the risk of complications compared with conventional fluoroscopy alone, facilitates the detection of minor cement leakage, improves the operative procedure, and results in a favorable bone cement distribution. PMID:25867298
Perceau, Géraldine; Faure, Christine
2012-01-01
Compression of a venous ulcer is carried out with bandages and, for less exudative ulcers, with socks, stockings or tights. Bandaging systems are complex: bandages differ in their degree of extensibility, and therefore different types of models exist. PMID:22489428
Efficient Compression of High Resolution Climate Data
NASA Astrophysics Data System (ADS)
Yin, J.; Schuchardt, K. L.
2011-12-01
High-resolution climate data can be massive. Such data can consume a huge amount of disk space for storage, incur significant overhead when writing output during simulation, introduce high latency for visualization and analysis, and may even make interactive visualization and analysis impossible given the limits on the data that a conventional cluster can handle. These problems can be alleviated with effective and efficient data compression techniques. Even though the HDF5 format supports compression, previous work has mainly focused on employing traditional general-purpose compression schemes such as dictionary coders and block-sorting compressors. Those schemes focus on encoding repeated byte sequences efficiently and are not well suited to climate data consisting mainly of distinct floating-point numbers. We plan to select and customize our compression schemes according to the characteristics of high-resolution climate data. One observation on high-resolution climate data is that, as the resolution becomes higher, the values of climate variables such as temperature and pressure become closer in nearby cells. This provides excellent opportunities for prediction-based compression schemes. We have performed a preliminary estimate of the compression ratio of a very simple prediction-based scheme, in which we compute the difference between the current floating-point number and the previous one and then encode the exponent and significand of the result with an entropy-based coder. Our results show that we can achieve compression ratios between 2 and 3 in lossless compression, significantly higher than traditional compression algorithms. We have also developed lossy compression with our techniques; we can achieve orders-of-magnitude data reduction while ensuring error bounds. Moreover, our compression scheme is efficient and introduces much less overhead.
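A minimal sketch of the prediction idea, assuming a smooth synthetic "temperature" row and using zlib as a stand-in for the entropy coder (the scheme described above encodes exponent and significand separately, which this sketch does not):

```python
import zlib
import numpy as np

# Nearby cells in high-resolution climate fields have close values, so
# XOR-ing each value's bit pattern with its predecessor's yields many
# leading zero bits, which a back-end entropy coder squeezes well.
rng = np.random.default_rng(1)
field = 15.0 + np.cumsum(rng.normal(0, 1e-4, 100_000))  # smooth synthetic row

ints = field.astype('<f8').view('<u8')   # raw IEEE-754 bit patterns
resid = ints.copy()
resid[1:] = ints[1:] ^ ints[:-1]         # XOR each value with its predecessor

plain = len(zlib.compress(ints.tobytes(), 9))
predicted = len(zlib.compress(resid.tobytes(), 9))
print(plain, predicted)                  # prediction should win
```

The transform is exactly invertible (an XOR prefix scan restores the original bit patterns), so the pipeline stays lossless; a lossy variant would quantize the residuals against an error bound before coding.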
Data compression in digitized lines
Thapa, K.
1990-04-01
The problem of data compression is very important in digital photogrammetry, computer-assisted cartography, and GIS/LIS. It is also applicable in many other fields, such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, many algorithms are available to solve this problem, but none of them is considered satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scatter matrix, is good for both critical-point detection and data compression. In addition, the critical points detected by this algorithm are compared with those found by zero-crossings. 8 refs.
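The scatter-matrix detector itself is not specified in enough detail to reproduce here; as a point of comparison, the classic Douglas-Peucker algorithm below performs the same task of keeping only the critical points of a digitized line, with the curve and tolerance invented for the example:

```python
# Douglas-Peucker line simplification: keep only the "critical" points
# whose removal would distort the curve by more than a tolerance.
# Classic reference method, not the scatter-matrix technique of the paper.

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tol):
    """Recursively keep the farthest interior point if it exceeds tol."""
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]          # all interior points dropped
    left = simplify(points[:i + 1], tol)        # recurse on both halves
    right = simplify(points[i:], tol)
    return left[:-1] + right                    # merge, sharing the split point

curve = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7),
         (6, 8), (7, 9), (8, 9), (9, 9)]
kept = simplify(curve, 0.5)
print(len(curve), "->", len(kept))
```

Here ten digitized points reduce to five critical ones; collinear runs collapse to their endpoints while corners survive, which is the behavior any critical-point detector is judged against.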
Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.
2011-01-01
Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
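A minimal sketch of the group-testing idea, using the classic COMP decoder with random pools (a toy stand-in for the structured pooled-sequencing design described above):

```python
import random

# Group-testing sketch with the "COMP" decoder: pool samples, test pools,
# and declare as carrier candidates only those individuals who never
# appear in a negative pool.  Toy illustration with random pools, not the
# sequencing-adapted protocol of the paper.
random.seed(7)
n, n_pools, pool_size = 100, 30, 20
carriers = {3, 41, 77}                 # sparse "signal": rare carriers

pools = [random.sample(range(n), pool_size) for _ in range(n_pools)]
results = [any(i in carriers for i in pool) for pool in pools]

candidates = set(range(n))
for pool, positive in zip(pools, results):
    if not positive:                   # everyone in a negative pool is cleared
        candidates -= set(pool)

print(sorted(candidates))              # superset of the true carriers
```

COMP never produces false negatives (a true carrier makes every pool containing it positive, so it is never cleared); with enough pools the surviving candidate set shrinks toward the true carriers, and that is what makes pooled designs far cheaper than testing all n individuals.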
Lossy Compression of ACS images
NASA Astrophysics Data System (ADS)
Cox, Colin
2004-01-01
A method of compressing images stored as floating-point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this, which offer significant compression ratios, are lossy, and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.
Compression and venous ulcers.
Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M
2013-03-01
Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy of venous leg ulcers. However, to date medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the amount of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2014-07-01
Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems by promoting sparsity, thus improving resolution, and the resulting problem can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods, especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern and thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212
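The ℓ1/convex-optimization machinery behind such CS estimators can be sketched with a small real-valued toy problem (the paper's DOA formulation uses a complex steering matrix; the solver, problem sizes, and seed below are our own illustration, not the authors'):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Underdetermined system y = A x0 with a k-sparse x0 (m < n observations).
n, m, k = 40, 20, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Basis pursuit: minimize ||x||_1 subject to A x = y,
# posed as a linear program via the split x = u - v with u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x0))  # sparse recovery despite m < n
```

With these sizes the sparse solution is recovered essentially exactly, illustrating the resolution gain over least-squares methods that the abstract describes.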
Compression and Entrapment Syndromes
Heffernan, L.P.; Benstead, T.J.
1987-01-01
Family physicians are often confronted by patients who present with pain, numbness and weakness. Such complaints, when confined to a single extremity, most particularly to a restricted portion of the extremity, may indicate focal dysfunction of peripheral nerve structures arising from compression and/or entrapment, to which such nerves are selectively vulnerable. The authors of this article consider the paramount clinical features that allow the clinician to arrive at a correct diagnosis, review major points in differential diagnosis, and suggest appropriate management strategies. PMID:21263858
Data Compression for Helioseismology
NASA Astrophysics Data System (ADS)
Löptien, Björn
2015-10-01
Efficient data compression will play an important role for several upcoming and planned space missions involving helioseismology, such as Solar Orbiter. Solar Orbiter, to be launched in October 2018, will be the next space mission involving helioseismology. The main characteristic of Solar Orbiter lies in its orbit. The spacecraft will have an inclined solar orbit, reaching a solar latitude of up to 33 deg. This will allow, for the first time, probing the solar poles using local helioseismology. In addition, combined observations of Solar Orbiter and another helioseismic instrument will be used to study the deep interior of the Sun using stereoscopic helioseismology. The Doppler velocity and continuum intensity images of the Sun required for helioseismology will be provided by the Polarimetric and Helioseismic Imager (PHI). Major constraints for helioseismology with Solar Orbiter are the low telemetry and the (probably) short observing time. In addition, helioseismology of the solar poles requires observations close to the solar limb, even from the inclined orbit of Solar Orbiter. This gives rise to systematic errors. In this thesis, I derived a first estimate of the impact of lossy data compression on helioseismology. I put special emphasis on the Solar Orbiter mission, but my results are applicable to other planned missions as well. First, I studied the performance of PHI for helioseismology. Based on simulations of solar surface convection and a model of the PHI instrument, I generated a six-hour time-series of synthetic Doppler velocity images with the same properties as expected for PHI. Here, I focused on the impact of the point spread function, the spacecraft jitter, and of the photon noise level. The derived power spectra of solar oscillations suggest that PHI will be suitable for helioseismology. The low telemetry of Solar Orbiter requires extensive compression of the helioseismic data obtained by PHI. I evaluated the influence of data compression using
Monzón-Casanova, Elisa; Rudolf, Ronald; Starick, Lisa; Müller, Ingrid; Söllner, Christian; Müller, Nora; Westphal, Nico; Miyoshi-Akiyama, Tohru; Uchiyama, Takehiko; Berberich, Ingolf; Walter, Lutz; Herrmann, Thomas
2016-02-01
In this article, we report the complete coding sequence and to our knowledge, the first functional analysis of two homologous nonclassical MHC class II genes: RT1-Db2 of rat and H2-Eb2 of mouse. They differ in important aspects compared with the classical class II β1 molecules: their mRNA expression by APCs is much lower, they show minimal polymorphism in the Ag-binding domain, and they lack N-glycosylation and the highly conserved histidine 81. Also, their cytoplasmic region is completely different and longer. To study and compare them with their classical counterparts, we transduced them in different cell lines. These studies show that they can pair with the classical α-chains (RT1-Da and H2-Ea) and are expressed at the cell surface where they can present superantigens. Interestingly, compared with the classical molecules, they have an extraordinary capacity to present the superantigen Yersinia pseudotuberculosis mitogen. Taken together, our findings suggest that the b2 genes, together with the respective α-chain genes, encode for H2-E2 or RT1-D2 molecules, which could function as Ag-presenting molecules for a particular class of Ags, as modulators of Ag presentation like nonclassical nonpolymorphic class II molecules DM and DO do, or even as players outside the immune system. PMID:26740108
Selfsimilar Spherical Compression Waves in Gas Dynamics
NASA Astrophysics Data System (ADS)
Meyer-ter-Vehn, J.; Schalk, C.
1982-08-01
A synopsis of different selfsimilar spherical compression waves is given pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterise the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.
Turbulence in Compressible Flows
NASA Technical Reports Server (NTRS)
1997-01-01
Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.
Compressively sensed complex networks.
Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali
2010-07-01
The aim of this project is to develop low-dimension parametric (deterministic) models of complex networks, using compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general (and so not very efficient or customized). Other model-based CS techniques exist, but have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.
Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank
2013-06-01
We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
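The layer-compression equivalence stated above (compressing the weights of a layer is equivalent to compressing the layer's input) is easy to check numerically. A minimal sketch with a fixed Gaussian random projection; all names and sizes are illustrative, not the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer with n inputs and h units; weights constrained to a random
# d-dimensional subspace spanned by the columns of Phi (d << n).
n, h, d = 100, 5, 10
Phi = rng.standard_normal((n, d))
alpha = rng.standard_normal((h, d))  # compressed parameters: h*d numbers
W = alpha @ Phi.T                    # full weight matrix: h*n numbers
x = rng.standard_normal(n)

# Compressing the weights is equivalent to compressing the layer input:
out_full = W @ x                     # full weights applied to the raw input
out_comp = alpha @ (Phi.T @ x)       # small weights applied to the projected input
print(np.allclose(out_full, out_comp))  # True
```

Training then operates on the h*d compressed parameters while the forward pass only ever needs the d-dimensional projected input, which is the source of the training-time savings the abstract reports.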
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Compressible turbulent mixing: Effects of compressibility
NASA Astrophysics Data System (ADS)
Ni, Qionglin
2016-04-01
We studied by numerical simulation the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and was negligibly influenced by compressibility. The growth of the Mach number showed (1) a first reduction and second enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and second weaker intermittency of scalar relative to that of velocity; and (4) an increase in the intermittency parameter which measures the intermittency of scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressive-forced flow the field exhibited regions dominated by large-scale motions of rarefaction and compression.
Fracture in compression of brittle solids
NASA Technical Reports Server (NTRS)
1983-01-01
The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.
Data compression preserving statistical independence
NASA Technical Reports Server (NTRS)
Morduch, G. E.; Rice, W. M.
1973-01-01
The purpose of this study was to determine the optimum points of evaluation of data compressed by means of polynomial smoothing. It is shown that a set Y of m statistically independent observations Y(t sub 1), Y(t sub 2), ... Y(t sub m) of a quantity X(t), which can be described by an (n-1)th-degree polynomial in time, may be represented by a set Z of n statistically independent compressed observations Z(tau sub 1), Z(tau sub 2), ... Z(tau sub n), such that the compressed set Z has the same information content as the observed set Y. The times tau sub 1, tau sub 2, ... tau sub n are the zeros of an nth-degree polynomial P sub n, to whose definition and properties the bulk of this report is devoted. The polynomials P sub n are defined as functions of the observation times t sub 1, t sub 2, ... t sub m, and it is interesting to note that if the observation times are continuously distributed, the polynomials P sub n degenerate to Legendre polynomials. The proposed data compression scheme is a little more complex than those usually employed, but has the advantage of preserving all the information content of the original observations.
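The continuous-observation limit mentioned above (P sub n degenerating to Legendre polynomials) suggests a simple numerical check: smooth m observations with a degree-(n-1) least-squares polynomial, evaluate it at the n Gauss-Legendre nodes, and confirm that the n compressed values determine the same polynomial. A sketch with illustrative numbers, not the report's actual data:

```python
import numpy as np

rng = np.random.default_rng(2)

# m observations on [-1, 1] of a quantity described by a degree-(n-1) polynomial.
n, m = 4, 50
t = np.linspace(-1.0, 1.0, m)
true_coeffs = np.array([0.5, -1.0, 2.0, 0.3])                 # degree 3
y = np.polyval(true_coeffs, t) + 0.01 * rng.standard_normal(m)

# Polynomial smoothing: least-squares fit of degree n-1.
fit_coeffs = np.polyfit(t, y, n - 1)

# Compress to n values evaluated at the zeros of the degree-n
# Legendre polynomial (the continuously-distributed-times limit).
tau, _ = np.polynomial.legendre.leggauss(n)
z = np.polyval(fit_coeffs, tau)

# The n compressed observations determine the same smoothing polynomial.
rec_coeffs = np.polyfit(tau, z, n - 1)
print(np.allclose(rec_coeffs, fit_coeffs))  # True
```

The n compressed points reproduce the fitted polynomial exactly, i.e. no information from the smoothing step is lost.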
Lossy Text Compression Techniques
NASA Astrophysics Data System (ADS)
Palaniappan, Venka; Latifi, Shahram
Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression used in critical application areas. For non-critical applications, we could use lossy text compression to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithm are also made. Finally, some ideas for further improving the performance already obtained are proposed.
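A minimal sketch of the Dropped Vowels (DOV) idea, using zlib as the back-end lossless coder; the rule below (keep a word's first character, drop later vowels) is our guess at the transform, and the authors' exact DOV/LMP/ROC rules may differ:

```python
import zlib

def dropped_vowels(text: str) -> str:
    # Keep each word's first character, drop subsequent vowels.
    out = []
    for word in text.split(" "):
        out.append(word[:1] + "".join(c for c in word[1:] if c.lower() not in "aeiou"))
    return " ".join(out)

text = ("most text documents contain a large amount of redundancy and a "
        "lossy transform can shrink the text before a lossless coder runs, "
        "trading some fidelity for a better compression ratio")
transformed = dropped_vowels(text)

raw = len(zlib.compress(text.encode()))
lossy = len(zlib.compress(transformed.encode()))
print(raw, lossy)  # the transformed text typically compresses smaller
```

The transform is irreversible (hence lossy), but the consonant skeleton keeps most words human-readable, which is the trade-off the paper targets for non-critical applications.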
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing gadgets quickly exceeds storage availability in radiology services, generating additional costs in devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317
Compressive wideband microwave radar holography
NASA Astrophysics Data System (ADS)
Wilson, Scott A.; Narayanan, Ram M.
2014-05-01
Compressive sensing has emerged as a topic of great interest for radar applications requiring large amounts of data storage. Typically, full sets of data are collected at the Nyquist rate only to be compressed at some later point, where information-bearing data are retained and inconsequential data are discarded. However, under sparse conditions, it is possible to collect data at random sampling intervals less than the Nyquist rate and still gather enough meaningful data for accurate signal reconstruction. In this paper, we employ sparse sampling techniques in the recording of digital microwave holograms over a two-dimensional scanning aperture. Using a simple and fast non-linear interpolation scheme prior to image reconstruction, we show that the reconstituted image quality is well-retained with limited perceptual loss.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
Multishock Compression Properties of Warm Dense Argon.
Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun
2015-01-01
Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under the conditions of different experiments: ηi' increases with pressure in the lower density regime and, conversely, decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
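The quantization step described above can be sketched as follows, with zlib standing in for Rice coding and a synthetic image standing in for real data (the choice q = sigma/4 is illustrative, not necessarily fpack's default):

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)

# Synthetic image: smooth background gradient plus Gaussian read noise.
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
sigma = 1.5
image = (100.0 + 0.01 * xx + 0.02 * yy
         + sigma * rng.standard_normal((ny, nx))).astype(np.float32)

# The noise bits make the raw floating-point array nearly incompressible.
raw = len(zlib.compress(image.tobytes()))

# Quantize as scaled integers, sampling the noise at q = sigma / 4.
q = sigma / 4.0
quantized = np.round(image / q).astype(np.int32)
packed = len(zlib.compress(quantized.tobytes()))

print(raw / image.nbytes, packed / image.nbytes)
```

Multiplying the integers back by q recovers the pixel values to within q/2, i.e. well below the noise level, which is why the discarded bits are "useless" in the paper's sense.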
Selecting a general-purpose data compression algorithm
NASA Technical Reports Server (NTRS)
Mathews, Gary Jason
1995-01-01
The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and what compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package that would be applicable to other software packages with similar data compression needs.
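The evaluation question raised above (compression ratio versus speed on an arbitrary byte stream) can be prototyped with the compressors in the Python standard library; this harness is a sketch of the methodology, not the paper's actual benchmark:

```python
import bz2
import lzma
import time
import zlib

import numpy as np

# A CDF-like byte stream: 32-bit integers with structure and repetition.
data = np.arange(100_000, dtype=np.int32).tobytes() * 2

for name, codec in [("zlib", zlib), ("bz2", bz2), ("lzma", lzma)]:
    t0 = time.perf_counter()
    out = codec.compress(data)
    elapsed = time.perf_counter() - t0
    assert codec.decompress(out) == data  # fully reconstructible
    print(f"{name}: ratio {len(data) / len(out):.1f}, {1e3 * elapsed:.1f} ms")
```

Running such a harness over representative scalar, vector, and array payloads is exactly the kind of evaluation the paper argues is needed before committing CDF to one general-purpose algorithm.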
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
Texture Studies and Compression Behaviour of Apple Flesh
NASA Astrophysics Data System (ADS)
James, Bryony; Fonseca, Celia
Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.
EEG data compression techniques.
Antoniol, G; Tonella, P
1997-02-01
In this paper, electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reduction in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEG's in a compressed format. PMID:9214790
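The derivative-plus-entropy-coding pipeline can be sketched as below, with zlib's deflate (which contains a Huffman stage) standing in for the paper's Huffman coder, and a synthetic random-walk signal standing in for real EEG:

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)

# Synthetic EEG-like channel: a slow random walk stored as 16-bit integers,
# i.e. strong sample-to-sample correlation as in real recordings.
signal = np.cumsum(rng.integers(-3, 4, size=20_000)).astype(np.int16)

# Derivative (first-difference) transform concentrates values near zero,
# which the entropy coder exploits; the transform is exactly invertible.
delta = np.empty_like(signal)
delta[0] = signal[0]
delta[1:] = signal[1:] - signal[:-1]

direct = len(zlib.compress(signal.tobytes()))
deriv = len(zlib.compress(delta.tobytes()))
print(direct, deriv)  # the differenced signal compresses much better
```

Because a cumulative sum of the differences reproduces the original samples bit for bit, the scheme is lossless, which is the "perfect reconstruction" requirement stated in the abstract.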
NASA Astrophysics Data System (ADS)
Khorramzadeh, Y.; Lin, Fei; Scarola, V. W.
2012-04-01
Strongly interacting atoms trapped in optical lattices can be used to explore phase diagrams of Hubbard models. Spatial inhomogeneity due to trapping typically obscures distinguishing observables. We propose that measures using boson double occupancy avoid trapping effects to reveal two key correlation functions. We define a boson core compressibility and core superfluid stiffness in terms of double occupancy. We use quantum Monte Carlo on the Bose-Hubbard model to empirically show that these quantities intrinsically eliminate edge effects to reveal correlations near the trap center. The boson core compressibility offers a generally applicable tool that can be used to experimentally map out phase transitions between compressible and incompressible states.
Modeling Compressed Turbulence
Israel, Daniel M.
2012-07-13
From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon et al. re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.
Local compressibilities in crystals
NASA Astrophysics Data System (ADS)
Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor
2000-12-01
An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.
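The weighted-average statement above can be written compactly. A sketch in our own notation (atomic basin volumes V_i partitioning the cell volume V), which may differ from the paper's exact definitions:

```latex
\kappa \;=\; \frac{1}{B} \;=\; \sum_i \frac{V_i}{V}\,\kappa_i ,
\qquad
\kappa_i \;=\; -\,\frac{1}{V_i}\left(\frac{\partial V_i}{\partial p}\right)_T
```

On this reading, a near-constant bulk modulus across the oxide spinels follows whenever one basin type (the oxide ion) dominates both the volume fractions and the atomic compressibilities.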
Compressive Optical Image Encryption
Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong
2015-01-01
An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946
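The DRPE stage can be simulated numerically with FFTs; this sketch covers only the encryption/decryption algebra, not the single-pixel compressive-holography stage, and all sizes and keys are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 64
image = rng.random((n, n))                    # stand-in object image
M1 = np.exp(2j * np.pi * rng.random((n, n)))  # input-plane phase mask (key 1)
M2 = np.exp(2j * np.pi * rng.random((n, n)))  # Fourier-plane phase mask (key 2)

# DRPE encryption: random phase in the object plane, then in the Fourier plane,
# yields a white-noise-like complex field.
encrypted = np.fft.ifft2(np.fft.fft2(image * M1) * M2)

# Decryption with the correct keys inverts both unit-modulus masks exactly.
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(M2)) * np.conj(M1)
print(np.allclose(decrypted.real, image))     # True

# Applying the keys in the wrong order yields noise, not the image.
wrong = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(M1)) * np.conj(M2)
print(np.allclose(wrong.real, image))         # False
```

Because both masks have unit modulus, decryption is exact; the security rests on the keys, while the compression in the paper comes from the separate single-pixel holographic stage.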
Military Data Compression Standard
NASA Astrophysics Data System (ADS)
Winterbauer, C. E.
1982-07-01
A facsimile interoperability data compression standard is being adopted by the U.S. Department of Defense and other North Atlantic Treaty Organization (NATO) countries. This algorithm has been shown to perform quite well in a noisy communication channel.
Focus on Compression Stockings
Compressible Astrophysics Simulation Code
Energy Science and Technology Software Center (ESTSC)
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Melville, James L; Riley, Jenna F; Hirst, Jonathan D
2007-01-01
We present a simple and effective method for similarity searching in virtual high-throughput screening, requiring only a string-based representation of the molecules (e.g., SMILES) and standard compression software, available on all modern desktop computers. This method utilizes the normalized compression distance, an approximation of the normalized information distance, based on the concept of Kolmogorov complexity. On representative data sets, we demonstrate that compression-based similarity searching can outperform standard similarity searching protocols, exemplified by the Tanimoto coefficient combined with a binary fingerprint representation and data fusion. Software to carry out compression-based similarity is available from our Web site at http://comp.chem.nottingham.ac.uk/download/zippity. PMID:17238245
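The normalized compression distance used above is easy to sketch with any off-the-shelf compressor; the snippet below uses Python's zlib as the stand-in and made-up SMILES-like strings (not the paper's data sets).

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with zlib as the compressor:

        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))

    where C(s) is the compressed length of s. Similar strings share
    structure, so compressing their concatenation adds little.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical SMILES-like strings: a and b_ are near-duplicates,
# c is structurally different, so NCD(a, b_) < NCD(a, c).
a = b"CCO" * 50
b_ = b"CCO" * 49 + b"CCN"
c = b"c1ccccc1O" * 40
assert ncd(a, b_) < ncd(a, c)
```

Because only compressed lengths are needed, the method requires no fingerprints or descriptors, which is the point the abstract makes about string representations plus standard compression software.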
Simulation and Modeling of Homogeneous, Compressed Turbulence.
NASA Astrophysics Data System (ADS)
Wu, Chung-Teh
Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression has been simulated by numerically solving the Navier-Stokes equations. The numerical simulations were carried out on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second -order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one -dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. It was found that the ratio of the turbulence time scale to the mean-flow time scale is an important parameter in these flows. When this ratio is large, the flow is immediately affected by the mean strain in a manner similar to that predicted by rapid distortion theory. When this ratio is small, the flow retains the character of decaying isotropic turbulence initially; only after the strain has been applied for a long period does the flow accumulate a significant reflection of the effect of mean strain. In these flows, the Kolmogorov length scale decreases rapidly with increasing total strain, due to the density increase that accompanies compression. Results from the simulated flow fields were used to test one-point-closure, two-equation turbulence models. The two-equation models perform well only when the compression rate is small compared to the eddy turn-over rate. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
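A minimal sketch of the "filling" step described above: non-edge pixels are relaxed toward the solution of Laplace's equation while edge pixels stay fixed. Plain Jacobi iteration is used here for clarity; the method described above uses a faster multigrid solver.

```python
def laplace_fill(grid, is_edge, iterations=500):
    """Fill non-edge pixels by iterating the 4-neighbor average
    (Jacobi relaxation of Laplace's equation); edge pixels and the
    border are held fixed as boundary conditions."""
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(iterations):
        nxt = [row[:] for row in g]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if not is_edge[i][j]:
                    nxt[i][j] = (g[i - 1][j] + g[i + 1][j]
                                 + g[i][j - 1] + g[i][j + 1]) / 4.0
        g = nxt
    return g

# Toy 5x5 image: the top and left borders carry value 100, the bottom
# and right carry 0; the interior is filled smoothly between them.
grid = [[100.0] * 5] + [[100.0, 0, 0, 0, 0.0] for _ in range(3)] + [[0.0] * 5]
is_edge = [[i in (0, 4) or j in (0, 4) for j in range(5)] for i in range(5)]
filled = laplace_fill(grid, is_edge)
# Harmonic fill obeys the maximum principle: interior values stay
# within the boundary range, and the center lands midway by symmetry.
assert all(0.0 <= filled[i][j] <= 100.0 for i in range(5) for j in range(5))
assert abs(filled[2][2] - 50.0) < 1.0
```

The smooth fill is what makes the difference array (original minus fill) small away from edges, so it compresses well.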
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
Alternative Compression Garments
NASA Technical Reports Server (NTRS)
Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.
2011-01-01
Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
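For context, the sketch below builds the orthonormal floating-point DCT-II basis that the ICT approximates with small-integer matrices, and verifies the exact round-trip that makes it useful for transform coding. The specific ICT integer matrices are not reproduced here; this is only the reference transform.

```python
import math

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k, column j is
    a_k * cos(pi * (2j + 1) * k / (2n)), with a_0 = sqrt(1/n) and
    a_k = sqrt(2/n) otherwise. Rows are orthonormal, so C^T C = I."""
    mat = []
    for k in range(n):
        a = math.sqrt((1 if k == 0 else 2) / n)
        mat.append([a * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                    for j in range(n)])
    return mat

def matvec(m, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

C = dct_matrix()
x = [52.0, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel samples
coeffs = matvec(C, x)                     # forward transform
rec = matvec(transpose(C), coeffs)        # inverse: C is orthogonal
assert all(abs(a - b) < 1e-9 for a, b in zip(x, rec))
```

The ICT's appeal, as the article explains, is replacing these cosines with small integers while keeping near-identical rate-distortion behavior and avoiding floating-point hardware.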
Simulation and modeling of homogeneous, compressed turbulence
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.
1985-01-01
Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression was simulated by numerically solving the Navier-Stokes equations. The numerical simulations were performed on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second-order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one-dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. Results from the simulated flow fields were used to test one-point closure, two-equation models. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.
Parallel image compression circuit for high-speed cameras
NASA Astrophysics Data System (ADS)
Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru
2005-02-01
In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 x 4-point 2-dimensional DCT using a DA method, zigzag scanning of 4 blocks of the 2-D DCT coefficients, and 1-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with that of the JPEG algorithm. The proposed compression circuits are found to require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of images reconstructed with the proposed encoding method is better than that of JPEG in the low-compression-ratio region.
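The zigzag scan mentioned above orders a block's DCT coefficients along anti-diagonals so low-frequency terms come first and trailing zeros cluster for entropy coding. A generic sketch for an n x n block (not the paper's hardware implementation):

```python
def zigzag_order(n=4):
    """Return the (row, col) visit order for a zigzag scan of an
    n x n coefficient block: anti-diagonals of increasing row + col,
    alternating direction, as in JPEG-style coefficient ordering."""
    idx = [(i, j) for i in range(n) for j in range(n)]

    def key(p):
        s = p[0] + p[1]
        # Odd diagonals are walked top-to-bottom (sort by row),
        # even diagonals bottom-to-top (sort by column).
        return (s, p[0] if s % 2 else p[1])

    return sorted(idx, key=key)

order = zigzag_order(4)
assert order[0] == (0, 0) and order[-1] == (3, 3)
assert order[:6] == [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
assert len(set(order)) == 16
```

After quantization, the high-frequency tail of this ordering is mostly zeros, which the Huffman stage codes compactly.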
Park, Sang-Sub
2014-01-01
The purpose of this study was to measure the difference in chest compression quality between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate and completed the CPR curriculum, 64 took part; 6 were absent. Participants using the modified, application-guided method formed the smartphone group (33 people), and those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins, and the smartphone group used an application running on the Android and iOS operating systems of two smartphone products (G, i). Measurements were conducted on September 25-26, 2012, and data were analyzed with SPSS WIN 12.0. The traditional group achieved a more appropriate compression depth than the smartphone group (53.77 mm vs. 48.35 mm, p<0.01) and a higher proportion of proper chest compressions (73.96% vs. 60.51%, p<0.05). Awareness of chest compression accuracy was also higher in the traditional group (3.83 points vs. 2.32 points, p<0.001). In an additional question administered only to the smartphone group, the main reasons given against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%). PMID:24704648
Evaluation of the tactical utility of compressed imagery
NASA Astrophysics Data System (ADS)
Irvine, John M.; Eckstein, Barbara A.; Hummel, Robert A.; Peters, Richard J.; Ritzel, Rhonda L.
2002-06-01
The effects of compression on image utility are assessed based on manual exploitation performed by military imagery analysts (IAs). The original, uncompressed synthetic aperture radar imagery and compressed products are rated for the Radar National Imagery Interpretability Rating Scale (NIIRS), image features and sensor artifacts, and target detection and recognition. Images were compressed via standard JPEG compression, single-scale intelligent bandwidth compression (IBC), and wavelet/trellis- coded quantization (W/TCQ) at 50-to-1 and 100-to-1 ratios. We find that the utility of the compressed imagery differs only slightly from the uncompressed imagery, with the exception of the JPEG products. Otherwise, both the 50-to-1 and 100-to-1 compressed imagery appear similar in terms of image quality. Radar NIIRS indicates that even 100-to-1 compression using IBC or W/TCQ has minimal impact on imagery intelligence value. A slight loss in performance occurs for vehicle counting and identification tasks. These findings suggest that both single-scale IBC and W/TCQ compression techniques have matured to a point that they could provide value to the tactical user. Additional assessments may verify the practical limits of compression for synthetic aperture radar (SAR) data and address the transition to a field environment.
Wavelet compression of medical imagery.
Reiter, E
1996-01-01
Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
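A toy illustration of the sparsity argument above: one level of the Haar transform (the simplest wavelet) concentrates a smooth signal's energy in the averages, leaving near-zero detail coefficients that quantize to the long runs of zeros the abstract describes. Real codecs use smoother wavelets and several decomposition levels.

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform: pairwise scaled
    averages (coarse part) followed by pairwise scaled differences
    (detail part). Input length must be even."""
    s2 = 2 ** 0.5
    avgs = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    diffs = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    return avgs + diffs

def haar_inverse(y):
    """Exactly invert haar_forward."""
    h = len(y) // 2
    s2 = 2 ** 0.5
    out = []
    for a, d in zip(y[:h], y[h:]):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out

x = [10.0, 10.1, 10.2, 10.2, 10.1, 10.0, 9.9, 9.9]
y = haar_forward(x)
# On a smooth signal the detail half is tiny: zeroing it (quantization)
# costs little, and runs of zeros compress very efficiently.
assert all(abs(d) < 0.1 for d in y[len(y) // 2:])
rec = haar_inverse(y)
assert all(abs(a - b) < 1e-12 for a, b in zip(x, rec))
```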
Transverse Compression of Tendons.
Samuel Salisbury, S T; Paul Buckley, C; Zavatsky, Amy B
2016-04-01
A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Multishock Compression Properties of Warm Dense Argon
Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun
2015-01-01
Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed with a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3, from the first- to the fourth-shock compression, are presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density after the ith shock. For the relative compression ratio (ηi′ = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under the conditions of different experiments: ηi′ increases with pressure in the lower-density regime and decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon fall within the warm dense regime. PMID:26515505
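The two compression ratios discussed above are simple density quotients; the sketch below computes both from a density sequence using made-up values (the paper's measured densities are not reproduced here).

```python
def compression_ratios(rho0, densities):
    """Return (cumulative, relative) shock-compression ratios:
    eta_i = rho_i / rho_0 against the initial density, and
    eta_i' = rho_i / rho_{i-1} against the previous shocked state."""
    cumulative = [rho / rho0 for rho in densities]
    previous = [rho0] + densities[:-1]
    relative = [rho / prev for rho, prev in zip(densities, previous)]
    return cumulative, relative

rho0 = 0.6                      # hypothetical initial density, g/cm^3
rhos = [1.98, 3.4, 4.5, 5.28]   # hypothetical densities after shocks 1-4
eta, eta_rel = compression_ratios(rho0, rhos)

# The relative ratios compound into the cumulative ratio.
prod = 1.0
for r in eta_rel:
    prod *= r
assert abs(prod - eta[-1]) < 1e-9
```

In the experiments, each successive reverberating shock raises the cumulative ratio while the per-shock (relative) gain shrinks, which is the turning-point behavior the abstract describes.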
Compressible Flow Toolbox
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2006-01-01
The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
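As a taste of the isentropic-flow equations such a toolbox evaluates (this is a Python sketch for a calorically perfect gas, not the toolbox's MATLAB code):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Stagnation-to-static ratios for isentropic flow of a calorically
    perfect gas:

        T0/T = 1 + (gamma - 1)/2 * M^2
        p0/p = (T0/T)^(gamma / (gamma - 1))
    """
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

# At Mach 1 in air (gamma = 1.4): T0/T = 1.2 and p0/p = 1.2**3.5.
t, p = isentropic_ratios(1.0)
assert abs(t - 1.2) < 1e-12
assert abs(p - 1.2 ** 3.5) < 1e-12
```

The Fanno, Rayleigh, and shock relations listed in the abstract follow the same pattern: closed-form ratios in Mach number and gamma, which is why a few hundred equations fit naturally into one toolbox.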
Isentropic Compression of Argon
H. Oona; J.C. Solem; L.R. Veeser; C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley
1997-08-01
We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.
NASA Technical Reports Server (NTRS)
Vandromme, Dany; Haminh, Hieu
1991-01-01
The capability of turbulence models to correctly handle natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For a specific compressible unsteady turbulent flow, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.
Orbiting dynamic compression laboratory
NASA Technical Reports Server (NTRS)
Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.
1984-01-01
In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kb, respectively. In the case of molybdenum, when the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, tensile tests of the recovered samples demonstrate beneficial ultimate tensile strengths.
Isentropic compression of argon
Veeser, L.R.; Ekdahl, C.A.; Oona, H.
1997-06-01
The compression was done in an MC-1 flux compression (explosive) generator in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe was removed), both the Teflon and the argon become conductive. The conductivity could not be determined (the insulating properties of Teflon under these conditions are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω cm)⁻¹, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe, or to measure the conductivity without a probe, are being sought.
Underwing compression vortex attenuation device
NASA Technical Reports Server (NTRS)
Patterson, James C., Jr. (Inventor)
1993-01-01
A vortex attenuation device is presented which dissipates a lift-induced vortex generated by a lifting aircraft wing. The device consists of a positive-pressure-gradient-producing means in the form of a compression panel attached to the lower surface of the wing and facing perpendicular to the airflow across the wing. The panel is located between the midpoint of the local wing chord and the trailing edge in the chordwise direction, and at a point which is approximately 55 percent of the wing span as measured from the fuselage centerline in the spanwise direction. When deployed in flight, this panel produces a positive pressure gradient aligned with the final roll-up of the total vortex system, which interrupts the axial flow in the vortex core and causes the vortex to collapse.
The Compressed Video Experience.
ERIC Educational Resources Information Center
Weber, John
In the fall semester 1995, Southern Arkansas University- Magnolia (SAU-M) began a two semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2005-01-01
File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability pack many related files into one archive (for example, all files…
Nonlinear Frequency Compression
Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-01-01
Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
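A simplified sketch of the mapping behind nonlinear frequency compression: frequencies below the cutoff pass through unchanged, and frequencies above it are compressed on a log-frequency scale by the compression ratio. Vendor implementations differ in detail; this is only the commonly described form, and the function name is hypothetical.

```python
def nfc_map(f_hz, cutoff_hz, ratio):
    """Map an input frequency to its output frequency under nonlinear
    frequency compression (NFC). Below the cutoff the mapping is the
    identity; above it, distances from the cutoff on a log scale are
    divided by the compression ratio."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz * (f_hz / cutoff_hz) ** (1.0 / ratio)

# With a 2 kHz cutoff and a 2:1 ratio, 8 kHz maps to 4 kHz
# (two octaves above the cutoff become one octave above it).
assert nfc_map(1000, 2000, 2.0) == 1000
assert abs(nfc_map(8000, 2000, 2.0) - 4000) < 1e-9
```

This makes concrete why the cutoff dominates perceived quality in the studies above: it sets where the identity region ends, while the ratio only rescales what lies beyond it.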
Cahill, C.
1997-07-01
Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the decision to rent versus purchase to be looked at on a regional or project-by-project basis.
Improved compression molding process
NASA Technical Reports Server (NTRS)
Heier, W. C.
1967-01-01
Modified compression molding process produces plastic molding compounds that are strong, homogeneous, free of residual stresses, and have improved ablative characteristics. The conventional method is modified by applying a vacuum to the mold during the molding cycle, using a volatile sink, and exercising precise control of the mold closure limits.
Energy Transfer and Triadic Interactions in Compressible Turbulence
NASA Technical Reports Server (NTRS)
Bataille, F.; Zhou, Ye; Bertoglio, Jean-Pierre
1997-01-01
Using a two-point closure theory, the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) approximation, we have investigated the energy transfer process and triadic interactions of compressible turbulence. In order to analyze the compressible mode directly, the Helmholtz decomposition is used. The following issues were addressed: (1) What is the mechanism of energy exchange between the solenoidal and compressible modes, and (2) Is there an energy cascade in the compressible energy transfer process? It is concluded that the compressible energy is transferred locally from the solenoidal part to the compressible part. It is also found that there is an energy cascade of the compressible mode for high turbulent Mach number (M(sub t) greater than or equal to 0.5). Since we assume that the compressibility is weak, the magnitude of the compressible (radiative or cascade) transfer is much smaller than that of solenoidal cascade. These results are further confirmed by studying the triadic energy transfer function, the most fundamental building block of the energy transfer.
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
NASA Astrophysics Data System (ADS)
Chaudhari, Kapil A.; Reeves, Stanley J.
2005-02-01
Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
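A minimal sketch of the residual idea follows, with a per-plane mean standing in for the JPEG-coded base layer and zlib standing in for the entropy coder. The function names, the RGGB offsets, and the crude mean predictor are our illustrative assumptions, not the paper's actual pipeline; the point is only that base + losslessly coded residual reconstructs the mosaic exactly.

```python
import numpy as np
import zlib

BAYER_OFFSETS = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four colour sub-lattices

def compress_mosaic(mosaic):
    """Losslessly compress an 8-bit Bayer mosaic (even dimensions assumed):
    split into four colour sub-lattices, subtract a per-plane mean as a
    stand-in base layer, and entropy-code the residual with zlib."""
    planes = [mosaic[i::2, j::2] for i, j in BAYER_OFFSETS]
    means = [int(round(p.mean())) for p in planes]
    residual = np.concatenate([(p.astype(np.int16) - m).ravel()
                               for p, m in zip(planes, means)])
    return zlib.compress(residual.tobytes(), 9), means, mosaic.shape

def decompress_mosaic(blob, means, shape):
    """Exact inverse: add the base layer back onto the decoded residual."""
    flat = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    out = np.empty(shape, dtype=np.uint8)
    idx = 0
    for (i, j), m in zip(BAYER_OFFSETS, means):
        plane = out[i::2, j::2]
        n = plane.size
        out[i::2, j::2] = (flat[idx:idx + n] + m).reshape(plane.shape)
        idx += n
    return out
```

Because only the residual is coded losslessly, any base-layer coder (here a constant; in the paper, JPEG on an interpolated or down-sampled colour image) yields bit-exact reconstruction.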
Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.
2015-08-02
One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6] and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions.
Progressive compressive imager
NASA Astrophysics Data System (ADS)
Evladov, Sergei; Levi, Ofer; Stern, Adrian
2012-06-01
We have designed and built a working automatic progressive-sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to add information progressively, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown. We have developed an iterative algorithm, OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions from compressed data at ratios of 1:20.
Digital cinema video compression
NASA Astrophysics Data System (ADS)
Husak, Walter
2003-05-01
The motion picture industry began a transition from film-based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offer the prospect of increasing the quality of the theatrical experience for the audience, reducing distribution costs for the distributors, and creating new business opportunities for the theater owners and the studios. Digital cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications of video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.
Data compression for speckle correlation interferometry temporal fringe pattern analysis
Tuck Wah Ng; Kar Tien Ang
2005-05-01
Temporal fringe pattern analysis is gaining prominence in speckle correlation interferometry, in particular for transient phenomena studies. This form of analysis, nevertheless, necessitates large data storage. Current compression schemes do not facilitate efficient data retrieval and may even result in important data loss. We describe a novel compression scheme that does not result in crucial data loss and allows for the efficient retrieval of data for temporal fringe analysis. In sample tests with digital speckle interferometry on fringe patterns of a plate and of a cantilever beam subjected to temporal phase and load evolution, respectively, we achieved a compression ratio of 1.6 without filtering out any data from discontinuous and low fringe modulation spatial points. By eliminating 38% of the data from discontinuous and low fringe modulation spatial points, we attained a significant compression ratio of 2.4.
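The core of the scheme, discarding spatial points with low fringe modulation while keeping the rest retrievable, can be sketched as follows. Using the temporal standard deviation as the modulation measure is our simplification; the function name and threshold are illustrative.

```python
import numpy as np

def compress_fringes(stack, modulation_threshold):
    """stack: (T, H, W) temporal intensity sequence. Keep only spatial points
    whose fringe modulation (approximated here by the temporal standard
    deviation) exceeds the threshold; the boolean mask allows the retained
    time series to be retrieved efficiently, point by point."""
    modulation = stack.std(axis=0)              # (H, W) per-point modulation
    mask = modulation >= modulation_threshold   # points worth keeping
    kept = stack[:, mask]                       # (T, n_kept) retained series
    ratio = stack.size / (kept.size + mask.size)
    return mask, kept, ratio
```

The achievable ratio depends directly on the fraction of low-modulation points eliminated, which mirrors the 1.6 vs. 2.4 ratios reported above.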
NASA Technical Reports Server (NTRS)
Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.
1987-01-01
A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.
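The pressure-volume relation of the universal EOS can be written down directly. The Vinet form below is the standard expression associated with these authors; the specific parameter values in the comment are illustrative, not fitted to any material.

```python
import math

def vinet_pressure(V, V0, B0, B0_prime):
    """Universal (Vinet) equation of state:
        P = 3*B0*(1 - x)/x**2 * exp(1.5*(B0' - 1)*(1 - x)),  x = (V/V0)**(1/3)
    where V0 is the zero-pressure volume, B0 the isothermal bulk modulus at V0,
    and B0' its pressure derivative at V0."""
    x = (V / V0) ** (1.0 / 3.0)
    return 3.0 * B0 * (1.0 - x) / x**2 * math.exp(1.5 * (B0_prime - 1.0) * (1.0 - x))
```

Two built-in checks of the form: the pressure vanishes at V = V0, and the bulk modulus B = -V dP/dV recovered at V0 equals B0, which is what makes the form convenient for the data analysis described above.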
NASA Astrophysics Data System (ADS)
Nason, Sarah; Houghton, Brittany; Renfro, Timothy
2012-03-01
The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment that involved stretching wire: what would happen if we compressed something instead? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it, and we derived the compression modulus from the applied weight. In the end, we had our experimental cake and ate it, too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1
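The calculation behind the experiment is the elementary stress-over-strain ratio. The sample numbers below (a 1 kg weight, a 10 cm cake in a 0.01 m² pan) are ours, purely for illustration.

```python
def compression_modulus(mass_kg, contact_area_m2, height_before_m, height_after_m,
                        g=9.81):
    """Compression (Young's) modulus from the cake experiment:
    stress = weight / contact area, strain = fractional change in height."""
    stress = mass_kg * g / contact_area_m2                      # Pa
    strain = (height_before_m - height_after_m) / height_before_m
    return stress / strain                                      # Pa

# e.g. a 1 kg weight squashing a 10 cm cake down to 9 cm over a 0.01 m^2 pan
E = compression_modulus(1.0, 0.01, 0.10, 0.09)
```

For these numbers the stress is 981 Pa and the strain 0.1, giving a modulus of about 9.8 kPa, soft, as cake should be.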
An isentropic compression heated Ludwieg tube transient wind tunnel
NASA Technical Reports Server (NTRS)
Magari, Patrick J.; Lagraff, John E.
1988-01-01
Syracuse University's isentropic-compression Ludwieg tube facility is a transient wind tunnel employing a piston drive that incorporates isentropic compression heating of the test gas located ahead of the piston. The facility is well suited for experimental investigations concerning supersonic and subsonic vehicles over a wide range of pressures, Reynolds numbers, and temperatures; all three parameters can be controlled almost independently. Work at the facility currently includes wake-induced stagnation point heat transfer and supersonic boundary layer transition.
A compressive failure model for anisotropic plates with a cutout under compressive and shear loads
NASA Technical Reports Server (NTRS)
Gurdal, Z.; Haftka, R. T.
1986-01-01
The paper introduces a failure model for laminated composite plates with a cutout under combined compressive and shear loads. The model is based on kinking failure of the load-carrying fibers around the cutout and includes the effect of local shearing and compressive stresses. Comparison of the model's predictions with available experimental results for quasi-isotropic and orthotropic plates with a circular hole indicated good agreement. Predictions for orthotropic plates under combined loading are compared with those of a point-stress model. The present model indicates significant reductions in axial load-carrying capacity due to shearing loads for plates with the principal axis of orthotropy oriented along the axial load direction. A gain in strength is achieved by rotating the axis of orthotropy to counteract the shearing stress, or by eliminating the compressive-shear deformation coupling.
Investigation into the geometric consequences of processing substantially compressed images
NASA Astrophysics Data System (ADS)
Tempelmann, Udo; Nwosu, Zubbi; Zumbrunn, Roland M.
1995-07-01
One of the major driving forces behind digital photogrammetric systems is the continued drop in the cost of digital storage systems. However, terrestrial remote sensing systems continue to generate enormous volumes of data due to smaller pixels, larger coverage, and increased multispectral and multitemporal possibilities. Sophisticated compression algorithms have been developed, but the reduced visual quality of their output, which impedes object identification, and the resulting geometric deformation have been limiting factors in employing compression. Compression and decompression time is also an issue, though of less importance because of off-line possibilities. Two typical image blocks have been selected: one sub-block from a SPOT image, the other an image of industrial targets taken with an off-the-shelf CCD. Three common compression algorithms have been chosen: JPEG, wavelet, and fractal. The images are run through the compression/decompression cycle, with parameters chosen to cover the whole range of available compression ratios. Points are identified on these images and their locations are compared against those in the originals. These results are presented to assist in choosing compression settings by weighing metric quality against storage availability. Fractals offer the best visual quality, but JPEG, closely followed by wavelets, introduces fewer geometric defects. JPEG seems to offer the best all-around performance when geometric quality, visual quality, and compression/decompression speed are all considered.
Piston reciprocating compressed air engine
Cestero, L.G.
1987-03-24
A compressed air engine is described comprising: (a) a reservoir of compressed air; (b) two power cylinders, each containing a reciprocating piston connected to a crankshaft and flywheel; (c) a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft; (d) valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder; (e) valve means controlled by rotation of the crankshaft for returning to the reservoir the compressed air supplied to the transfer cylinder on the exhaust strokes of the power cylinder pistons; and (f) an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.
Low bit-rate efficient compression for seismic data.
Averbuch, A Z; Meyer, R; Stromberg, J O; Coifman, R; Vassiliou, A
2001-01-01
The adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. All the described methods cover a wide range of different data sets, and each data set has its own best-performing method from this collection. The experiments were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression. PMID:18255520
Compressible magnetohydrodynamic sawtooth crash
NASA Astrophysics Data System (ADS)
Sugiyama, Linda E.
2014-02-01
In a toroidal magnetically confined plasma at low resistivity, compressible magnetohydrodynamics (MHD) predicts that an m = 1/n = 1 sawtooth has a fast, explosive crash phase with abrupt onset, a rate nearly independent of resistivity, and localized temperature redistribution similar to experimental observations. Large-scale numerical simulations show that the 1/1 MHD internal kink grows exponentially at a resistive rate until a critical amplitude, when the plasma motion accelerates rapidly, culminating in fast loss of the temperature and magnetic structure inside q < 1, with somewhat slower density redistribution. Nonlinearly, for small effective growth rate the perpendicular momentum rate of change remains small compared to its individual terms ∇p and J × B until the fast crash, so that the compressible growth rate is determined by higher order terms in a large aspect ratio expansion, as in the linear eigenmode. Reduced MHD fails completely to describe the toroidal mode; no Sweet-Parker-like reconnection layer develops. Important differences result from toroidal mode coupling effects. A set of large aspect ratio compressible MHD equations shows that the large aspect ratio expansion also breaks down in typical tokamaks with r_{q=1}/R_0 ≃ 1/10 and a/R_0 ≃ 1/3. In the large aspect ratio limit, failure extends down to much smaller inverse aspect ratio, at growth rate scalings γ = O(ε²). Higher order aspect ratio terms, including B̃_φ, become important. Nonlinearly, higher toroidal harmonics develop faster and to a greater degree than for large aspect ratio and help to accelerate the fast crash. The perpendicular momentum property applies to other transverse MHD instabilities, including m ≥ 2 magnetic islands and the plasma edge.
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems: as a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity of the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy, and robustness. PMID:26352631
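The very sparse measurement matrix at the heart of this approach can be sketched with an Achlioptas-style random projection: entries are ±√s with probability 1/(2s) each and zero otherwise. The choice s = n_pixels / 4 and all names below are our assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def sparse_measurement_matrix(n_features, n_pixels, s=None, seed=0):
    """Very sparse random measurement matrix: entries +sqrt(s) or -sqrt(s)
    with probability 1/(2s) each, zero otherwise. Sparsity makes the
    projection cheap while still roughly preserving feature-space structure."""
    rng = np.random.default_rng(seed)
    s = s or max(2, n_pixels // 4)          # illustrative sparsity level
    p = 1.0 / (2.0 * s)
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_features, n_pixels), p=[p, 1.0 - 2.0 * p, p])

# Compress a flattened image patch into a low-dimensional feature vector
rng = np.random.default_rng(1)
patch = rng.random(1024)                    # stand-in 32x32 patch
R = sparse_measurement_matrix(50, 1024)
features = R @ patch                        # compressed-domain features
```

Because the matrix is data-independent, the same R compresses both foreground and background samples, which is what allows the classifier to operate entirely in the compressed domain.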
International magnetic pulse compression
Kirbie, H.C.; Newton, M.A.; Siemens, P.D.
1991-04-01
Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.
Quaglino, A.V. Jr.
1987-06-16
A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, comprising: a generally cylindrical piston body, including a head portion defining the forward end of the body and a continuous side wall portion extending rearward from the head portion; means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recess and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area, each ring engaging the cylinder wall to reduce loss of lubricant from the recessed area during operation of the piston; first pump means for providing fluid lubricant to engine components other than the pistons; and second pump means for providing fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means, so that should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area would result in loss of compression and shutdown of the engine.
Comparative data compression techniques and multi-compression results
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.
2013-12-01
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the greater the time savings; in communication, we always want to transmit data efficiently and noise-free. This paper presents several techniques for lossless compression of text-type data, together with comparative results for multiple and single compression, which help to identify the better compression output and to develop compression algorithms.
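The single-vs-multiple-compression comparison is easy to reproduce with the lossless codecs in the Python standard library. The sample text and the choice of codecs are ours; the point is that a second pass over already-compressed output rarely shrinks it further, because the first pass leaves little redundancy.

```python
import bz2
import lzma
import zlib

# Repetitive business-style text: lossless compressors thrive on redundancy
text = b"Invoice 0042: 10 units of part A-17 shipped to the Dhaka office. " * 500

for name, comp, decomp in [("zlib", zlib.compress, zlib.decompress),
                           ("bz2", bz2.compress, bz2.decompress),
                           ("lzma", lzma.compress, lzma.decompress)]:
    once = comp(text)
    twice = comp(once)           # "multi-compression": re-compressing the output
    assert decomp(once) == text  # lossless round trip
    print(f"{name:5s} {len(text):7d} -> {len(once):6d} -> {len(twice):6d} bytes")
```

Running this shows large gains on the first pass and little or no gain (often a small loss) on the second, which is the kind of comparative result the paper tabulates.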
Avalanches in Wood Compression.
Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J
2015-07-31
Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free. PMID:26274428
NASA Technical Reports Server (NTRS)
Shanks, G. C. (Inventor)
1981-01-01
An apparatus for compressive testing of a test specimen may comprise vertically spaced upper and lower platen members between which a test specimen may be placed. The platen members are supported by a fixed support assembly. A load indicator is interposed between the upper platen member and the support assembly for supporting the total weight of the upper platen member and any additional weight which may be placed on it. Operating means are provided for moving the lower platen member upwardly toward the upper platen member whereby an increasing portion of the total weight is transferred from the load indicator to the test specimen.
Sampling video compression system
NASA Technical Reports Server (NTRS)
Matsumoto, Y.; Lum, H. (Inventor)
1977-01-01
A system for transmitting video signals of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.
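The sampling step can be sketched as follows: divide the frame into tiles and transmit one diagonal of picture elements per tile, cutting the data per frame by the tile size. The block size and function name are our illustrative choices, not the patent's exact parameters.

```python
import numpy as np

def sample_block_diagonals(frame, block=4):
    """Keep one diagonal of picture elements per block x block tile,
    reducing the transmitted data (and hence bandwidth) by a factor
    of `block` for each frame sent."""
    h, w = frame.shape
    samples = [np.diagonal(frame[r:r + block, c:c + block]).copy()
               for r in range(0, h, block)
               for c in range(0, w, block)]
    return np.concatenate(samples)

frame = np.arange(256, dtype=np.uint8).reshape(16, 16)
sent = sample_block_diagonals(frame)   # 64 of 256 elements transmitted
```

Successive frames would sample different diagonals of each block, so the receiver's frame memory gradually accumulates a full-resolution picture.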
Ultrasound beamforming using compressed data.
Li, Yen-Feng; Li, Pai-Chi
2012-05-01
The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks before JPEG compression and into several tiles before JPEG2000 compression. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of the baseband data was compressed. The maximum compression ratio of RF data compression producing an average error lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8. PMID:22434817
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
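The colormap-sorting idea can be sketched directly: order the palette by a scalar such as luminance, then remap the index image through the inverse permutation so that numerically close indices point to visually close colours. The luminance weights and random stand-in data below are our illustrative choices.

```python
import numpy as np

def sort_colormap(palette, indices):
    """Sort the palette by luminance and remap the index image so that
    adjacent index values correspond to similar colours, restoring the
    spatial correlation that predictive coders exploit."""
    luma = palette @ np.array([0.299, 0.587, 0.114])   # Rec.601-style weights
    order = np.argsort(luma)
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))             # inverse permutation
    return palette[order], inverse[indices]

rng = np.random.default_rng(0)
palette = rng.integers(0, 256, (256, 3))       # scrambled 256-entry colormap
indices = rng.integers(0, 256, (64, 64))       # stand-in colour-mapped image
sorted_palette, new_indices = sort_colormap(palette, indices)
# Remapping must not change any displayed pixel colour
assert np.array_equal(palette[indices], sorted_palette[new_indices])
```

On a natural colour-mapped image (unlike the random stand-in here), the remapped index array varies smoothly, so DPCM-style prediction on the indices becomes effective again.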
Mechanical Metamaterials with Negative Compressibility Transitions
NASA Astrophysics Data System (ADS)
Motter, Adilson
2015-03-01
When tensioned, ordinary materials expand along the direction of the applied force. In this presentation, I will explore network concepts to design metamaterials exhibiting negative compressibility transitions, during which the material undergoes contraction when tensioned (or expansion when pressured). Such transitions, which are forbidden in thermodynamic equilibrium, are possible during the decay of metastable, super-strained states. I will introduce a statistical physics theory for negative compressibility transitions, derive a first-principles model to predict these transitions, and present a validation of the model using molecular dynamics simulations. Aside from its immediate mechanical implications, our theory points to a wealth of analogous inverted responses, such as inverted susceptibility or heat-capacity transitions, allowed when considering realistic scales. This research was done in collaboration with Zachary Nicolaou, and was supported by the National Science Foundation and the Alfred P. Sloan Foundation.
Perceptually Lossless Wavelet Compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John
1996-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random-amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level L is r·2^(-L), where r is the display visual resolution in pixels/degree. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
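The level-to-frequency mapping stated in the abstract is a one-liner; the example resolution of 32 pixels/degree is our illustrative number, not a value from the paper.

```python
def wavelet_spatial_frequency(r, L):
    """Spatial frequency (cycles/degree) of DWT level L when the display
    visual resolution is r pixels/degree: f = r * 2**(-L)."""
    return r * 2.0 ** (-L)

# At 32 pixels/degree, level-1 coefficients live at 16 cycles/degree,
# and each further decomposition level halves the spatial frequency.
freqs = [wavelet_spatial_frequency(32.0, L) for L in range(1, 6)]
```

Since thresholds rise rapidly with spatial frequency, this mapping determines how aggressively each level's band can be quantized while staying perceptually lossless.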
Compressive Sensing DNA Microarrays
2009-01-01
Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed. PMID:19158952
Compressive Bilateral Filtering.
Sugimoto, Kenjiro; Kamata, Sei-Ichiro
2015-11-01
This paper presents an efficient constant-time bilateral filter that produces a near-optimal performance tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment, called a compressive bilateral filter (CBLF). Constant-time means that the computational complexity is independent of the filter window size. Although many constant-time bilateral filters have been proposed step-by-step in pursuit of a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. It is important to discuss this question, because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff by two key ideas: 1) an approximate Gaussian range kernel through Fourier analysis and 2) a period length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability. PMID:26068315
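The first key idea can be illustrated directly: approximate the Gaussian range kernel by a truncated Fourier cosine series on a period P covering the intensity range. In this sketch the coefficients are obtained by simple numerical integration rather than the paper's closed form, and sigma, P, and the term count are illustrative. The payoff is that cos(2πk(p−q)/P) factors into products of per-pixel sines and cosines, so each series term reduces to ordinary spatial Gaussian filtering of precomputed images, independent of window size.

```python
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma))

def cosine_coeffs(sigma, period, n_terms, n_quad=2000):
    """Fourier cosine-series coefficients of the Gaussian range kernel on
    [-period/2, period/2], via trapezoidal integration (illustrative; the
    paper derives coefficients analytically and optimizes the period)."""
    coeffs = []
    h = period / n_quad
    for k in range(n_terms):
        s = 0.0
        for i in range(n_quad + 1):
            x = -period / 2.0 + i * h
            w = 0.5 if i in (0, n_quad) else 1.0
            s += w * gaussian(x, sigma) * math.cos(2.0 * math.pi * k * x / period)
        coeffs.append((2.0 if k else 1.0) / period * s * h)
    return coeffs

def approx_kernel(x, coeffs, period):
    """Evaluate the truncated cosine series at intensity difference x."""
    return sum(c * math.cos(2.0 * math.pi * k * x / period)
               for k, c in enumerate(coeffs))

sigma, period, terms = 0.3, 4.0, 12   # intensities normalized to [-1, 1]
coeffs = cosine_coeffs(sigma, period, terms)
err = max(abs(approx_kernel(t / 100.0, coeffs, period) - gaussian(t / 100.0, sigma))
          for t in range(-100, 101))
```

With only a dozen terms the series matches the Gaussian to well under 0.1% over the full intensity range, which is why a small, optimized period length makes the kernel highly "compressible."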
Cancer suppression by compression.
Frieden, B Roy; Gatenby, Robert A
2015-01-01
Recent experiments indicate that uniformly compressing a cancer mass at its surface tends to transform many of its cells from proliferative to functional forms. Cancer cells suffer from the Warburg effect, resulting from depleted levels of cell membrane potentials. We show that the compression results in added free energy and that some of the added energy contributes distortional pressure to the cells. This excites the piezoelectric effect on the cell membranes, in particular raising the potentials on the membranes of cancer cells from their depleted levels to near-normal levels. In a sample calculation, a gain of 150 mV is thus attained. This allows the Warburg effect to be reversed. The result is at least partially regained function and accompanying increased molecular order. The transformation remains even when the pressure is turned off, suggesting a change of phase; these possibilities are briefly discussed. It is found that if the pressure is, in particular, applied adiabatically, the process obeys the second law of thermodynamics, further validating the theoretical model. PMID:25520262
Energy transfer in compressible turbulence
NASA Technical Reports Server (NTRS)
Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre
1995-01-01
This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy containing ranges.
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
A PDF closure model for compressible turbulent chemically reacting flows
NASA Technical Reports Server (NTRS)
Kollmann, W.
1992-01-01
The objective of the proposed research project was the analysis of single point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.
ECG data compression by modeling.
Madhukar, B.; Murthy, I. S.
1992-01-01
This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
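The transform-domain step can be sketched as follows. This is a simplified stand-in for the paper's method: a plain DCT with small-coefficient thresholding replaces the parametric modeling and DPCM of model parameters, and the "ECG" segment is synthetic (a QRS-like bump plus a slow baseline wave).

```python
import math

def dct(x):
    """Unnormalized DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(N)]

def idct(X):
    """Matching inverse (a scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                                       for k in range(1, N))
            for n in range(N)]

# Synthetic "ECG" segment: a beat-like bump plus a slow baseline wave.
N = 64
x = [math.exp(-((n - 32) / 4.0) ** 2) + 0.1 * math.sin(2.0 * math.pi * n / N)
     for n in range(N)]

X = dct(x)
peak = max(abs(c) for c in X)
kept = [c if abs(c) >= 0.02 * peak else 0.0 for c in X]   # crude thresholding
y = idct(kept)

err = max(abs(a - b) for a, b in zip(x, y))               # distortion
ratio = len(X) / sum(1 for c in kept if c != 0.0)         # coefficient ratio
```

Because the smooth signal concentrates its energy in a few low-frequency DCT coefficients, most coefficients can be discarded with small distortion; the paper goes further by fitting parametric models to the low- and high-frequency regions separately.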
Shock compression of precompressed deuterium
Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B
2011-07-31
Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 µm). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.
Magnetic compression laser driving circuit
Ball, D.G.; Birx, D.; Cook, E.G.
1993-01-05
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
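The quoted pulse parameters imply the following bookkeeping. This is an idealized, lossless view rather than the patent's detailed circuit design, and the three-stage split at the end is an assumption for illustration only.

```python
# Pulse parameters quoted in the abstract.
t_in, v_in = 1.5e-6, 20e3      # 1.5 microseconds at 20 kV in
t_out, v_out = 40e-9, 60e3     # 40 nanoseconds at 60 kV out

time_compression = t_in / t_out     # pulse shortened 37.5x
voltage_gain = v_out / v_in         # amplitude raised 3x

# Into a matched resistive load R, peak power scales as V**2 / R,
# so the idealized peak-power gain is:
power_gain = voltage_gain ** 2      # 9x

# If N identical magnetic-switch stages contributed equally (an
# assumption for illustration), each stage would compress by:
per_stage = time_compression ** (1.0 / 3.0)
```

The point of the multi-stage magnetic switch is exactly this staging: each saturating stage handles a modest compression factor, and the final two-turn stage trades core material for saturated inductance to keep the overall efficiency up.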
Magnetic compression laser driving circuit
Ball, Don G.; Birx, Dan; Cook, Edward G.
1993-01-01
A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.
Data compression for sequencing data
2013-01-01
Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160
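A minimal quantitative answer to "why compression": even the naive baseline of packing the four bases into 2 bits each quarters the storage relative to one byte per base. (Real sequencing compressors exploit far more structure, plus quality scores and reference-based coding; this sketch is only the floor.)

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, four bases per byte (2 bits each)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))   # left-align a final partial group
        out.append(b)
    return bytes(out)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 3])
    return "".join(bases[:n])

seq = "ACGTACGTGGCA"
packed = pack(seq)
ratio = len(seq) / len(packed)   # 4x versus one byte per base
```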
POLYCOMP: Efficient and configurable compression of astronomical timelines
NASA Astrophysics Data System (ADS)
Tomasi, M.
2016-07-01
This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x , y , z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
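The polynomial-fitting half of the scheme can be sketched as follows. The Chebyshev transform of the residuals is omitted, the chunking, degree, and error bound are illustrative, and polycomp's actual storage format differs; the essential contract is the same, though: store a handful of coefficients when the residual stays inside the user-configured bound, otherwise fall back to raw samples.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations
    (adequate for the tiny degrees used here)."""
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef   # coef[i] multiplies x**i

def compress_chunk(ys, deg, max_err):
    """Keep polynomial coefficients if the worst residual is within the
    user-configured bound, else store the chunk raw (lossless fallback)."""
    n = len(ys)
    xs = [2.0 * i / (n - 1) - 1.0 for i in range(n)]   # normalize for conditioning
    coef = polyfit(xs, ys, deg)
    fit = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    if max(abs(f - y) for f, y in zip(fit, ys)) <= max_err:
        return ("poly", coef)
    return ("raw", list(ys))
```

A smooth 64-sample pointing-like chunk collapses to 3 coefficients (Cr ≈ 21, of the same order as the Ganymede example), while a jittery chunk is kept raw so the error bound is never violated.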
Population attribute compression
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1995-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
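The successive-subdivision idea can be sketched with a small k-d-style partition of the LUT colors. One simplification to note: this sketch searches only the pixel's own volume, so a true nearest neighbor lying just across a partition boundary can be missed; the scheme described above keeps enough candidate LUT values per volume that the selected neighbor is correct.

```python
def build_tree(colors, leaf_size):
    """Recursively split the LUT color set along its widest axis until
    each volume holds at most leaf_size entries."""
    if len(colors) <= leaf_size:
        return ("leaf", colors)
    axis = max(range(3),
               key=lambda a: max(c[a] for c in colors) - min(c[a] for c in colors))
    colors = sorted(colors, key=lambda c: c[axis])
    mid = len(colors) // 2
    split = colors[mid][axis]
    return ("node", axis, split,
            build_tree(colors[:mid], leaf_size),
            build_tree(colors[mid:], leaf_size))

def nearest(tree, pixel):
    """Descend to the pixel's volume, then scan only the few LUT entries
    there for the closest color (squared Euclidean distance)."""
    while tree[0] == "node":
        _, axis, split, lo, hi = tree
        tree = lo if pixel[axis] < split else hi
    return min(tree[1], key=lambda c: sum((a - b) ** 2 for a, b in zip(c, pixel)))
```

Each image pixel then stores only an 8-bit pointer to its chosen LUT entry, from which the full 24-bit color is recovered at display time.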
Vapor compression distillation module
NASA Technical Reports Server (NTRS)
Nuccio, P. P.
1975-01-01
A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.
NASA Technical Reports Server (NTRS)
Terp, L. S. (Inventor)
1977-01-01
Apparatus for transferring gas from a first container to a second container of higher pressure was devised. A free-piston compressor having a driving piston and cylinder, and a smaller diameter driven piston and cylinder, comprise the apparatus. A rod member connecting the driving and driven pistons functions for mutual reciprocation in the respective cylinders. A conduit may be provided for supplying gas to the driven cylinder from the first container. Also provided is apparatus for introducing gas to the driving piston, to compress gas by the driven piston for transfer to the second higher pressure container. The system is useful in transferring spacecraft cabin oxygen into higher pressure containers for use in extravehicular activities.
Compressed hyperspectral sensing
NASA Astrophysics Data System (ADS)
Tsagkatakis, Grigorios; Tsakalides, Panagiotis
2015-03-01
Acquisition of high dimensional Hyperspectral Imaging (HSI) data using limited dimensionality imaging sensors has led to restricted-capability designs that hinder the proliferation of HSI. To overcome this limitation, novel HSI architectures strive to minimize the strict requirements of HSI by introducing computation into the acquisition process. A framework that allows the integration of acquisition with computation is the recently proposed framework of Compressed Sensing (CS). In this work, we propose a novel HSI architecture that exploits the sampling and recovery capabilities of CS to achieve a dramatic reduction in HSI acquisition requirements. In the proposed architecture, signals from multiple spectral bands are multiplexed before being recorded by the imaging sensor. Reconstruction of the full hyperspectral cube is achieved by exploiting a dictionary of elementary spectral profiles in a unified minimization framework. Simulation results suggest that high quality recovery is possible from a single or a small number of multiplexed frames.
Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas
2014-01-01
Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806
Edge compression manifold apparatus
Renzi, Ronald F.
2007-02-27
A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector, for coupling at least one fluid conduit to a corresponding port of a substrate, includes: (i) a manifold comprising one or more channels extending therethrough, wherein each channel is at least partially threaded, (ii) one or more threaded ferrules, each defining a bore extending therethrough and supporting a fluid conduit, wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface, wherein the substrate is positioned below the manifold so that the one or more ports are aligned with the one or more channels of the manifold, and (iv) a device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.
Edge compression manifold apparatus
Renzi, Ronald F.
2004-12-21
A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector, for coupling at least one fluid conduit to a corresponding port of a substrate, includes: (i) a manifold comprising one or more channels extending therethrough, wherein each channel is at least partially threaded, (ii) one or more threaded ferrules, each defining a bore extending therethrough and supporting a fluid conduit, wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface, wherein the substrate is positioned below the manifold so that the one or more ports are aligned with the one or more channels of the manifold, and (iv) a device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique also is presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
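The sparsification step can be imitated in one dimension with a Haar transform (the paper uses the Daubechies DAUB6 filter; Haar is substituted here purely for brevity) on a synthetic elevation profile: zero the 95 percent smallest coefficients, reconstruct, and inspect the residuals in both elevation and slope.

```python
import math

S = math.sqrt(2.0)

def haar_fwd(x):
    """Multi-level orthonormal Haar transform of a length-2**k signal."""
    coeffs = []
    x = list(x)
    while len(x) > 1:
        avg = [(x[2 * i] + x[2 * i + 1]) / S for i in range(len(x) // 2)]
        det = [(x[2 * i] - x[2 * i + 1]) / S for i in range(len(x) // 2)]
        coeffs = det + coeffs
        x = avg
    return x + coeffs          # [approx, coarsest details, ..., finest details]

def haar_inv(c):
    x = c[:1]
    pos = 1
    while pos < len(c):
        det = c[pos:pos + len(x)]
        x = [v for a, d in zip(x, det) for v in ((a + d) / S, (a - d) / S)]
        pos += len(det)
    return x

# Synthetic 1-D elevation profile: a ramp plus rolling terrain.
dem = [100.0 + 0.5 * i + 5.0 * math.sin(i / 7.0) for i in range(256)]

c = haar_fwd(dem)
cutoff = sorted(abs(v) for v in c)[int(0.95 * len(c))]
sparse = [v if abs(v) >= cutoff else 0.0 for v in c]   # zero the smallest 95%
recon = haar_inv(sparse)

elev_err = max(abs(a - b) for a, b in zip(dem, recon))
slope_err = max(abs((recon[i + 1] - recon[i]) - (dem[i + 1] - dem[i]))
                for i in range(len(dem) - 1))
```

As in the paper, the per-sample slope residual is proportionally worse than the elevation residual, since zeroing fine-scale detail coefficients damages derivatives first.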
Compression and compression fatigue testing of composite laminates
NASA Technical Reports Server (NTRS)
Porter, T. R.
1982-01-01
The effects of moisture and temperature on the fatigue and fracture response of composite laminates under compression loads were investigated. The structural laminates studied were an intermediate stiffness graphite-epoxy composite (a typical angle-ply laminate and a typical fan blade laminate). Full and half penetration slits and impact delaminations were the defects examined. Results are presented which show the effects of moisture on the fracture and fatigue strength at room temperature, 394 K (250 F), and 422 K (300 F). Static test results show the effects of defect size and type on the compression-fracture strength under moisture and thermal environments. The cyclic test results compare the fatigue lives and residual compression strength under compression-only and under tension-compression fatigue loading.
Adaptive compressive sensing camera
NASA Astrophysics Data System (ADS)
Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold
2013-05-01
We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple picture that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we allow each working component to be altered by at minimum one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the savings is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel: the charge-transport bias voltage either steers charge toward neighboring buckets or sends it to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor the powerful WaveNet wrapper, at the sensor level. We shall compare (i) pre-processing with an FFT, a threshold on significant Fourier mode components, and an inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), new-frame selection by the SAH circuitry requires determining the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]M,N M(t) = K(t) Log N(t).
Compressive optical imaging systems
NASA Astrophysics Data System (ADS)
Wu, Yuehao
Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves
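The delta-encoding idea that separates these schemes can be sketched in a few lines (the field names and packet format here are invented for illustration, not RFC 1144's actual encoding). The sketch also makes the loss-propagation issue concrete: the decompressor's state is the previous header, so one lost delta packet corrupts every later reconstruction until a full header resynchronizes the pair, which is the failure mode attributed to Van Jacobson's scheme above; SCPS avoids deltas at the cost of larger headers.

```python
FIELDS = ("seq", "ack", "win", "ipid")

def compress(prev, hdr):
    """First packet carries the full header; later packets carry only
    deltas for the fields that changed (VJ-style, fields invented)."""
    if prev is None:
        return ("full", dict(hdr))
    return ("delta", {f: hdr[f] - prev[f] for f in FIELDS if hdr[f] != prev[f]})

def decompress(prev, packet):
    """Rebuild a header from the previous one plus the received deltas."""
    kind, body = packet
    if kind == "full":
        return dict(body)
    hdr = dict(prev)
    for f, d in body.items():
        hdr[f] += d
    return hdr
```

Most steady-state TCP packets change only a couple of fields by small amounts, so the delta dictionary is a few bytes instead of a 40-byte TCP/IP header.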
Compressible turbulent mixing: Effects of compressibility and Schmidt number
NASA Astrophysics Data System (ADS)
Ni, Qionglin
2015-11-01
Effects of compressibility and Schmidt number on passive scalar transport in compressible turbulence were studied. On the effect of compressibility, the scalar spectrum followed the k^{-5/3} inertial-range scaling and suffered negligible influence from compressibility. The transfer of scalar flux was reduced by the transition from incompressible to compressible flow, but was enhanced by the growth of Mach number. The intermittency parameter was increased by the growth of Mach number, and was decreased by the growth of the compressive mode of the driven forcing. The dependency of the mixing timescale on compressibility showed that, for the driven forcing, the compressive mode was less efficient in enhancing scalar mixing. On the effect of Schmidt number (Sc), in the inertial-convective range the scalar spectrum obeyed the k^{-5/3} scaling. For Sc >> 1, a k^{-1} power law appeared in the viscous-convective range, while for Sc << 1, a k^{-17/3} power law was identified in the inertial-diffusive range. The transfer of scalar flux grew with Sc. In the Sc >> 1 flow the scalar field rolled up and mixed sufficiently, while the Sc << 1 flow had only large-scale, cloudlike structures. In both Sc >> 1 and Sc << 1 flows, the spectral densities of scalar advection and dissipation followed the k^{-5/3} scaling, indicating that in compressible turbulence the processes of advection and dissipation may defer to the Kolmogorov picture. Finally, comparison with incompressible results showed that the scalar in compressible turbulence lacked a conspicuous bump structure in its spectrum, and was more intermittent in the dissipative range.
Compression strength of composite primary structural components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1992-01-01
A status report of work performed during the period May 1, 1992 to October 31, 1992 is presented. Research was conducted in three areas: delamination initiation in postbuckled dropped-ply laminates; stiffener crippling initiated by delamination; and pressure pillowing of an orthogonally stiffened cylindrical shell. The geometrically nonlinear response and delamination initiation of compression-loaded dropped-ply laminates is analyzed. A computational model of the stiffener specimens that includes the capability to predict the interlaminar response at the flange free edge in postbuckling is developed. The distribution of the interacting loads between the stiffeners and the shell wall, particularly at the load transfer at the stiffener crossing point, is determined.
14. Detail, upper chord connection point on upstream side of ...
14. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, strut, counters, and laterals. - Dry Creek Bridge, Spanning Dry Creek at Cook Road, Ione, Amador County, CA
Pressure Oscillations in Adiabatic Compression
ERIC Educational Resources Information Center
Stout, Roland
2011-01-01
After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…
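The physics behind such an apparatus is the ideal-gas adiabat. A minimal sketch of the expected end-state values, under reversible, ideal-gas assumptions (the textbook-and-bottle setup is neither perfectly reversible nor leak-free, which is part of what makes the observed pressure oscillations interesting):

```python
GAMMA = 1.4   # ratio of heat capacities for a diatomic gas such as air

def adiabatic(p1, t1, v1, v2):
    """End state of a reversible adiabatic (isentropic) ideal-gas
    compression: P * V**gamma and T * V**(gamma - 1) are conserved."""
    r = v1 / v2
    return p1 * r ** GAMMA, t1 * r ** (GAMMA - 1.0)

# Halve the volume starting from room conditions (1 atm, 293 K):
p2, t2 = adiabatic(101325.0, 293.0, 1.0, 0.5)
```

Halving the volume raises the pressure by 2**1.4 ≈ 2.64x rather than the isothermal 2x, and warms the gas by roughly 90 K; the overshoot and ringing around these equilibrium values are the pressure oscillations the title refers to.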
Compression failure of composite laminates
NASA Technical Reports Server (NTRS)
Pipes, R. B.
1983-01-01
This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.
Data compression by wavelet transforms
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1992-01-01
A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.
Application specific compression : final report.
Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.
2008-12-01
With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
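The core strategy, zeroing low-amplitude coefficients so the lossless backend sees low entropy and long runs of zeros, can be demonstrated on a synthetic coefficient array. This is not Sandia's pipeline: zlib stands in for the unspecified lossless coder, and the coefficient model (a few large low-frequency terms plus low-amplitude high-frequency "noise") is invented for illustration.

```python
import math
import struct
import zlib

# Toy coefficient array: large low-frequency terms decaying as 1/k,
# plus low-amplitude high-frequency "noise" terms.
coeffs = [50.0 / (1 + k) + 0.01 * math.sin(37.0 * k) for k in range(1024)]

def serialize(cs):
    """Little-endian float64 packing of a coefficient list."""
    return struct.pack("<%dd" % len(cs), *cs)

raw = zlib.compress(serialize(coeffs), 9)

# Zero everything below a threshold (over 80% of the coefficients here,
# matching the report's operating point), then compress losslessly.
thresh = 0.3
sparse = [c if abs(c) >= thresh else 0.0 for c in coeffs]
packed = zlib.compress(serialize(sparse), 9)

ratio = len(raw) / len(packed)
```

The unthresholded array barely compresses because the noise randomizes every mantissa, while the thresholded array compresses several-fold: the zeros cost almost nothing and only the surviving coefficients carry entropy, which is the report's observation about lossless backends benefiting from the zeroing step.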
Streaming Compression of Hexahedral Meshes
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
Compression Shocks of Detached Flow
NASA Technical Reports Server (NTRS)
Eggink
1947-01-01
It is known that compression shocks which lead from supersonic to subsonic velocity cause the flow to separate on impact on a rigid wall. Such shocks appear at bodies with circular symmetry or wing profiles on locally exceeding sonic velocity, and in Laval nozzles with too high a back pressure. The form of the compression shocks observed therein is investigated.
Compressive Deconvolution in Medical Ultrasound Imaging.
Chen, Zhouye; Basarab, Adrian; Kouame, Denis
2016-03-01
The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Across the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast, and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data. PMID:26513780
Hyperspectral fluorescence microscopy based on compressed sensing
NASA Astrophysics Data System (ADS)
Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed; Candes, Emmanuel; Dahan, Maxime
2012-03-01
In fluorescence microscopy, one can distinguish two kinds of imaging approaches, wide-field and raster-scan microscopy, differing by their excitation and detection schemes. In both imaging modalities the acquisition is independent of the information content of the image. Rather, the number of acquisitions N is imposed by the Nyquist-Shannon theorem. However, in practice, many biological images are compressible (or, equivalently here, sparse), meaning that they depend on a number of degrees of freedom K that is smaller than their size N. Recently, the mathematical theory of compressed sensing (CS) has shown how the sensing modality could take advantage of the image sparsity to reconstruct images with no loss of information while largely reducing the number M of acquisitions. Here we present a novel fluorescence microscope designed along the principles of CS. It uses a spatial light modulator (DMD) to create structured wide-field excitation patterns and a sensitive point detector to measure the emitted fluorescence. On sparse fluorescent samples, we could achieve compression ratios N/M of up to 64, meaning that an image can be reconstructed with a number of measurements of only 1.5% of its pixel number. Furthermore, we extend our CS acquisition scheme to a hyperspectral imaging system.
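The sampling argument can be illustrated with a toy reconstruction. This sketch uses Gaussian random patterns and orthogonal matching pursuit, chosen here for brevity; it stands in for, and is not, the microscope's actual solver.

```python
import numpy as np

# Toy compressed-sensing recovery: a K-sparse 'sample' vector is sensed
# with M << N random patterns and a single summed (point-detector-like)
# value per pattern, then recovered greedily.

rng = np.random.default_rng(0)
N, M, K = 64, 24, 3                            # pixels, measurements, sparsity

x = np.zeros(N)
x[[5, 20, 41]] = [3.0, -2.0, 1.5]              # sparse fluorescent scene
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random excitation patterns
y = A @ x                                      # one detector value per pattern

# Orthogonal matching pursuit: pick the pattern column best correlated
# with the residual, re-fit on the chosen support, repeat K times.
support = []
r = y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef                          # sparse reconstruction
```

With M = 24 measurements for N = 64 pixels, the compression ratio N/M is about 2.7; the paper's ratio of 64 relies on much sparser samples and a more capable solver.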
Multiview image compression based on LDV scheme
NASA Astrophysics Data System (ADS)
Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent
2011-03-01
In recent years, we have seen several different approaches dealing with multiview compression. First, we can find the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between each view) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed-integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Then, we discuss the signal deformations generated by our approach.
Microbunching Instability due to Bunch Compression
Huang, Zhirong; Wu, Juhao; Shaftan, Timur; /Brookhaven
2005-12-13
Magnetic bunch compressors are designed to increase the peak current while maintaining the transverse and longitudinal emittances in order to drive a short-wavelength free electron laser (FEL). Recently, several linac-based FEL experiments have observed self-developing micro-structures in the longitudinal phase space of electron bunches undergoing strong compression [1-3]. Meanwhile, computer simulations of coherent synchrotron radiation (CSR) effects in bunch compressors illustrate that a CSR-driven microbunching instability may significantly amplify small longitudinal density and energy modulations and hence degrade the beam quality [4]. Various theoretical models have since been developed to describe this instability [5-8]. It has also been pointed out that the microbunching instability may be driven strongly by the longitudinal space charge (LSC) field [9,10] and by the linac wakefield [11] in the accelerator, leading to a very large overall gain in a two-stage compression system such as found in the Linac Coherent Light Source (LCLS) [12]. This paper reviews theory and simulations of the microbunching instability due to bunch compression, proposed methods to suppress its effects for short-wavelength FELs, and experimental characterizations of beam modulations in linear accelerators. A related topic of interest is the microbunching instability in storage rings, which was reported in the previous ICFA beam dynamics newsletter No. 35 (http://wwwbd.fnal.gov/icfabd/Newsletter35.pdf).
Digital compression algorithms for HDTV transmission
NASA Technical Reports Server (NTRS)
Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.
1990-01-01
Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.
Analytical model for ramp compression
NASA Astrophysics Data System (ADS)
Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun
2016-08-01
An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.
Increasing FTIR spectromicroscopy speed and resolution through compressive imaging
Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C
2007-10-15
At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time, and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, Beamline 1.4.4's Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.
Image analysis and compression: renewed focus on texture
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.
2010-01-01
We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.
Compressive sensing exploiting wavelet-domain dependencies for ECG compression
NASA Astrophysics Data System (ADS)
Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.
2012-06-01
Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet-domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R-wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that incorporating this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the number of samples required to achieve good reconstruction quality. This approach also allows more control over the ECG signal reconstruction, in particular the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested on records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
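The connected-subtree prior can be sketched separately from CoSaMP itself. In a dyadic wavelet tree, the parent of coefficient j at one scale is coefficient j // 2 at the next coarser scale; the pruning step below is a simplified stand-in for the paper's modified support-selection rule, whose exact form the abstract does not spell out.

```python
# Prune a per-scale candidate support to a rooted connected subtree:
# keep a coefficient only if its parent (index // 2, one scale coarser)
# was also kept. Isolated large coefficients -- typically noise rather
# than part of an R-wave chain -- drop out.

def prune_to_subtree(support_by_scale):
    """support_by_scale[s]: set of kept indices at scale s (s = 0 is the root)."""
    pruned = [set(support_by_scale[0])]
    for s in range(1, len(support_by_scale)):
        pruned.append({j for j in support_by_scale[s] if j // 2 in pruned[s - 1]})
    return pruned

# An R-wave-like chain 0 -> 1 -> 3 -> 6 survives; the stray index 0 at
# the finest scale has no kept parent (0 // 2 = 0 is absent one scale up)
# and is discarded.
candidate = [{0}, {1}, {3}, {6, 0}]
pruned = prune_to_subtree(candidate)
```

Enforcing this structure shrinks the set of feasible supports, which is why fewer measurements suffice for the same reconstruction quality.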
Pontier, C; Viana, M; Champion, E; Bernache-Assollant, D; Chulia, D
2001-05-01
The literature concerning calcium phosphates in pharmacy shows the chemical diversity of the compounds available. Some excipient manufacturers offer hydroxyapatite as a direct compression excipient, but chemical analysis of this compound usually shows variability in composition: the so-called materials can be hydroxyapatite or other calcium phosphates, uncalcined (i.e., with a low crystallinity) or calcined and well-crystallized hydroxyapatite. This study points out the influence of the crystallinity of one compound (i.e., hydroxyapatite) on its mechanical properties. Stoichiometric hydroxyapatite is synthesized, and compounds differing in their crystallinity, manufacturing process, and particle size are manufactured. X-ray diffraction analysis is used to investigate the chemical nature of the compounds. The mechanical study (study of the compression, diametral compressive strength, Heckel plots) highlights the negative effect of calcination on the mechanical properties. Porosity and specific surface area measurements show the effect of calcination on compaction. Uncalcined materials show bulk and mechanical properties in accordance with their use as direct compression excipients. PMID:11343890
Wavefield Compression for Full-Waveform Inversion
NASA Astrophysics Data System (ADS)
Boehm, Christian; Fichtner, Andreas; de la Puente, Josep; Hanzich, Mauricio
2015-04-01
We present compression techniques tailored to iterative nonlinear minimization methods that significantly reduce the memory requirements to store the forward wavefield for the computation of sensitivity kernels. Full-waveform inversion on 3d data sets requires massive computing and memory capabilities. Adjoint techniques offer a powerful tool to compute the first and second derivatives. However, due to the asynchronous nature of forward and adjoint simulations, a severe bottleneck is introduced by the necessity to access both wavefields simultaneously when computing sensitivity kernels. There exist two opposing strategies to deal with this challenge. On the one hand, conventional approaches save the whole forward wavefield to the disk, which yields a significant I/O overhead and might require several terabytes of storage capacity per seismic event. On the other hand, checkpointing techniques allow to trade an almost arbitrary amount of memory requirements for a - potentially large - number of additional forward simulations. We propose an alternative approach that strikes a balance between memory requirements and the need for additional computations. Here, we aim at compressing the forward wavefield in such a way that (1) the I/O overhead is reduced substantially without the need for additional simulations, (2) the costs for compressing/decompressing the wavefield are negligible, and (3) the approximate derivatives resulting from the compressed forward wavefield do not affect the rate of convergence of a Newton-type minimization method. To this end, we apply an adaptive re-quantization of the displacement field that uses dynamically adjusted floating-point accuracies - i.e., a locally varying number of bits - to store the data. Furthermore, the spectral element functions are adaptively downsampled to a lower polynomial degree. In addition, a sliding-window cubic spline re-interpolates the temporal snapshots to recover a smooth signal. Moreover, a preprocessing step
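The re-quantization idea can be sketched with a fixed, rather than dynamically adjusted, bit budget; the bit widths below are arbitrary illustrations, not the authors' values. Zeroing low-order mantissa bits of each float64 sample bounds the relative error by roughly 2^-k for k kept bits, and the resulting bit patterns compress far better losslessly.

```python
import struct

# Reduced-precision storage of wavefield samples: keep only `keep_bits`
# of the 52 mantissa bits of a float64, zeroing the rest.

def truncate_mantissa(x, keep_bits):
    """Zero the low (52 - keep_bits) mantissa bits of a float64."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack('<d', struct.pack('<Q', bits & mask))
    return y

x = 0.123456789
coarse = truncate_mantissa(x, 12)   # aggressive: ~12 mantissa bits kept
fine = truncate_mantissa(x, 40)     # gentle: ~40 mantissa bits kept
```

An adaptive scheme in the spirit of the abstract would choose `keep_bits` per region from the local dynamic range; the truncation itself is idempotent, so re-encoding stored data costs nothing further in accuracy.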
Planar velocity measurements in compressible mixing layers
NASA Astrophysics Data System (ADS)
Urban, William David
1999-10-01
The efficiency of high-Mach-number airbreathing propulsion devices is critically dependent upon the mixing of gases in turbulent shear flows. However, compressibility is known to suppress the growth rates of these mixing layers, posing a problem of both practical and scientific interest. In the present study, particle image velocimetry (PIV) is used to obtain planar, two-component velocity fields for planar gaseous shear layers at convective Mach numbers Mc of 0.25, 0.63, and 0.76. The experiments are performed in a large-scale blowdown wind tunnel, with high-speed freestream Mach numbers up to 2.25 and shear-layer Reynolds numbers up to 10^6. The instantaneous data are analyzed to produce maps of derived quantities such as vorticity, and ensemble averaged to provide turbulence statistics. Specific issues relating to the application of PIV to supersonic flows are addressed. In addition to the fluid-velocity measurements, we present double-pulsed scalar visualizations, permitting inference of the convective velocity of the large-scale structures, and examine the interaction of a weak wave with the mixing layer. The principal change associated with compressibility is seen to be the development of multiple high-gradient regions in the instantaneous velocity field, disrupting the spanwise-coherent `roller' structure usually associated with incompressible layers. As a result, the vorticity peaks reside in multiple thin sheets, segregated in the transverse direction. This suggests a decrease in cross-stream communication and a disconnection of the entrainment processes at the two interfaces. In the compressible case, steep-gradient regions in the instantaneous velocity field often correspond closely with the local sonic line, suggesting a sensitivity to lab-frame disturbances; this could in turn explain the effectiveness of sub-boundary-layer mixing enhancement strategies in this flow. Large-ensemble statistics bear out the observation from previous single-point
Compression relief engine brake
Meneely, V.A.
1987-10-06
A compression relief brake is described for four-cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve being coupled to the slave piston. The exhaust valve is spring-biased in a closed state to contact a valve seat; a sleeve is frictionally and slidably disposed within a cavity defined by the slave piston, which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat; and means for reciprocally activating the master piston for increasing the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.
Variable compression ratio control
Johnson, K.A.
1988-04-19
In a four-cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying the resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk is swivably seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections; and a rotary output gear located about and engaged with teeth extending from the pinion gear.
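A worked example shows why shifting the crankshaft axis changes the compression ratio; all dimensions below are invented for illustration. Raising the axis toward the combustion chamber shrinks the clearance volume at top dead center, and the ratio follows directly from swept and clearance volumes.

```python
import math

# Hypothetical 100 mm bore x 100 mm stroke cylinder. Rotating the
# eccentric disks to shift the crankshaft axis 2.5 mm toward the head
# removes 2.5 mm of clearance height at top dead center.

bore, stroke = 0.100, 0.100                  # meters
area = math.pi * bore ** 2 / 4               # piston face area
swept = area * stroke                        # displacement per cylinder

def compression_ratio(clearance_height):
    clearance = area * clearance_height      # volume left at top dead center
    return (swept + clearance) / clearance

nominal = compression_ratio(0.0125)          # eccentrics at baseline: 9:1
raised = compression_ratio(0.0100)           # axis shifted 2.5 mm up: 11:1
```

Because the piston area cancels, the ratio reduces to 1 + stroke / clearance_height, so small axial shifts of the crankshaft produce large compression-ratio changes.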
Adaptive compression of image data
NASA Astrophysics Data System (ADS)
Hludov, Sergei; Schroeter, Claus; Meinel, Christoph
1998-09-01
In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit plane, and finally an algorithm for adaptive image compression. The analysis of the image content is based on a valuation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is done by a difference criterion calculated from the wavelet coefficients of the original image and the decoded image of the first and second iteration steps of the wavelet transformation. This adaptive image compression algorithm is based on a classification of digital images into three classes, followed by the compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective method for adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in JAVA.
Advances in compressible turbulent mixing
Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.
1992-01-01
This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, and vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.
Best compression: Reciprocating or rotary?
Cahill, C.
1997-07-01
A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon, and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor (technically a helical rotor compressor) dates back to 1878, when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.
Designing experiments through compressed sensing.
Young, Joseph G.; Ridzal, Denis
2013-06-01
In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
Context-Aware Image Compression
Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram
2016-01-01
We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
Image compression using constrained relaxation
NASA Astrophysics Data System (ADS)
He, Zhihai
2007-01-01
In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: its pixels have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 intra-frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.
Partial transparency of compressed wood
NASA Astrophysics Data System (ADS)
Sugimoto, Hiroyuki; Sugimori, Masatoshi
2016-05-01
We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path through the sample, because the refractive indexes of the cell constituents and of the air in the lumina differ. In this study, wood compressed enough to close the lumina exhibited optical transparency. Because compressing the wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.
A Quadratic Closure for Compressible Turbulence
Futterman, J A
2008-09-16
We have investigated a one-point closure model for compressible turbulence based on third- and higher-order cumulant discard for systems undergoing rapid deformation, such as might occur downstream of a shock or other discontinuity. In so doing, we find the lowest order contributions of turbulence to the mean flow, which lead to criteria for Adaptive Mesh Refinement. Rapid distortion theory (RDT) as originally applied by Herring closes the turbulence hierarchy of moment equations by discarding third order and higher cumulants. This is similar to the fourth-order cumulant discard hypothesis of Millionshchikov, except that the Millionshchikov hypothesis was taken to apply to incompressible homogeneous isotropic turbulence generally, whereas RDT is applied only to fluids undergoing a distortion that is 'rapid' in the sense that the interaction of the mean flow with the turbulence overwhelms the interaction of the turbulence with itself. It is also similar to Gaussian closure, in which both second and fourth-order cumulants are retained. Motivated by RDT, we develop a quadratic one-point closure for rapidly distorting compressible turbulence, without regard to homogeneity or isotropy, and make contact with two-equation turbulence models, especially the K-ε and K-L models, and with linear instability growth. In the end, we arrive at criteria for Adaptive Mesh Refinement in Finite Volume simulations.
Internal roll compression system
Anderson, Graydon E.
1985-01-01
This invention is a machine for squeezing water out of peat or other material of low tensile strength; the machine including an inner roll eccentrically positioned inside a tubular outer roll, so as to form a gradually increasing pinch area at one point therebetween, so that, as the rolls rotate, the material is placed between the rolls, and gets wrung out when passing through the pinch area.
Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning
2014-01-01
Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state-of-the-art, while maintaining a low computational complexity. PMID:25490597
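The quantization stage can be illustrated with a plain uniform scalar quantizer; the paper's universal quantizer is more sophisticated, and the step size below is an arbitrary choice rather than a value from the paper. Measurements become small integers that are cheap to entropy-code, with reconstruction error bounded by half a step.

```python
# Uniform scalar quantization of CS measurements: no prior knowledge of
# the captured image is needed, only a step size.

def quantize(measurements, step):
    """Map each real measurement to its nearest integer bin index."""
    return [round(m / step) for m in measurements]

def dequantize(indices, step):
    """Reconstruct the bin-center value for each index."""
    return [q * step for q in indices]

y = [0.73, -1.48, 2.06, 0.02]      # toy CS measurements
q = quantize(y, step=0.5)          # small integers for the entropy coder
y_hat = dequantize(q, step=0.5)    # each within step/2 of the original
```

The rate-distortion trade-off is controlled entirely by the step: halving it adds about one bit per measurement while halving the worst-case error.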
Compression fractures of the back
Compression fractures of the back are broken vertebrae. Vertebrae are the bones of the spine. ... bone from elsewhere Tumors that start in the spine, such as multiple myeloma Having many fractures of ...
Efficient Decoding of Compressed Data.
ERIC Educational Resources Information Center
Bassiouni, Mostafa A.; Mukherjee, Amar
1995-01-01
Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
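One standard multibit-decoding approach, offered here as an illustration since the annotation does not detail the article's algorithms: precompute a table indexed by the next maxlen bits of the stream, so each lookup yields a symbol and its true code length at once, replacing the bit-by-bit tree walk.

```python
# Table-driven (multibit) Huffman decoding for a toy code. Every
# maxlen-bit pattern whose prefix is a codeword maps to (symbol, length).

codes = {'a': '0', 'b': '10', 'c': '11'}           # toy prefix code
maxlen = max(len(c) for c in codes.values())

table = {}
for sym, code in codes.items():
    pad = maxlen - len(code)
    for i in range(2 ** pad):                      # enumerate don't-care bits
        key = code + format(i, '0%db' % pad) if pad else code
        table[key] = (sym, len(code))

def decode(bits):
    out, pos = [], 0
    while pos < len(bits):
        chunk = bits[pos:pos + maxlen].ljust(maxlen, '0')  # pad at stream end
        sym, length = table[chunk]                 # one lookup per symbol
        out.append(sym)
        pos += length                              # advance by the true length
    return ''.join(out)
```

Decoding '010110' (the encoding of 'abca') takes four table lookups instead of six tree steps; real decoders cap the table size by splitting long codes across a second-level table.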
[New aspects of compression therapy].
Partsch, Bernhard; Partsch, Hugo
2016-06-01
In this review article the mechanisms of action of compression therapy are summarized, and a survey of materials is presented together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) concerning the improvement of venous hemodynamics and the reduction of oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material, and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase the arterial flow and improve venous pumping function. PMID:27259340
Compressed gas fuel storage system
Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.
2001-01-01
A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.
Comparison of Artificial Compressibility Methods
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan
2004-01-01
Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.
Shock compression of polyvinyl chloride
NASA Astrophysics Data System (ADS)
Neogi, Anupam; Mitra, Nilanjan
2016-04-01
This study presents shock compression simulations of atactic polyvinyl chloride (PVC) using ab initio and classical molecular dynamics, and identifies the limits of applicability of classical molecular dynamics for shock compression of PVC. The mechanism of bond dissociation under shock loading and its progression are demonstrated using density functional theory based molecular dynamics simulations, and the rates of dissociation of different bonds at different shock velocities are presented.
Negative compressibility observed in graphene containing resonant impurities
Chen, X. L.; Wang, L.; Li, W.; Wang, Y.; He, Y. H.; Wu, Z. F.; Han, Y.; Zhang, M. W.; Xiong, W.; Wang, N.
2013-05-20
We observed negative compressibility in monolayer graphene containing resonant impurities under different magnetic fields. Hydrogenous impurities were introduced into graphene by electron beam (e-beam) irradiation. Resonant states located in the energy region of ±0.04 eV around the charge neutrality point were probed in e-beam-irradiated graphene capacitors. Theoretical results based on tight-binding and Lifshitz models agreed well with experimental observations of graphene containing a low concentration of resonant impurities. The interaction between resonant states and Landau levels was detected by varying the applied magnetic field. The interaction mechanisms and enhancement of the negative compressibility in disordered graphene are discussed.
Stress Relaxation for Granular Materials near Jamming under Cyclic Compression
NASA Astrophysics Data System (ADS)
Farhadi, Somayeh; Zhu, Alex Z.; Behringer, Robert P.
2015-10-01
We have explored isotropically jammed states of semi-2D granular materials through cyclic compression. In each compression cycle, systems of either identical ellipses or bidisperse disks transition between jammed and unjammed states. We determine the evolution of the average pressure P and structure through consecutive jammed states. We observe a transition point ϕm above which P persists over many cycles; below ϕm, P relaxes slowly. The relaxation time scale associated with P increases with packing fraction, while the relaxation time scale for collective particle motion remains constant. The collective motion of the ellipses is hindered compared to disks because of the rotational constraints on elliptical particles.
Negative compressibility observed in graphene containing resonant impurities
NASA Astrophysics Data System (ADS)
Chen, X. L.; Wang, L.; Li, W.; Wang, Y.; He, Y. H.; Wu, Z. F.; Han, Y.; Zhang, M. W.; Xiong, W.; Wang, N.
2013-05-01
We observed negative compressibility in monolayer graphene containing resonant impurities under different magnetic fields. Hydrogenous impurities were introduced into graphene by electron beam (e-beam) irradiation. Resonant states located in the energy region of ±0.04 eV around the charge neutrality point were probed in e-beam-irradiated graphene capacitors. Theoretical results based on tight-binding and Lifshitz models agreed well with experimental observations of graphene containing a low concentration of resonant impurities. The interaction between resonant states and Landau levels was detected by varying the applied magnetic field. The interaction mechanisms and enhancement of the negative compressibility in disordered graphene are discussed.
Object-Based Image Compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2003-01-01
Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
29 CFR 1917.154 - Compressed air.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
A Test Data Compression Scheme Based on Irrational Numbers Stored Coding
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
Test data volume has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results for several ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
Absolutely lossless compression of medical images.
Ashraf, Robina; Akbar, Muhammad
2005-01-01
Data in medical images are very large, so compression is essential for the storage and/or transmission of these images. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In this approach an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but is also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as the lossy compressor, while Huffman coding is used for lossless compression. Image quality is evaluated by comparison with standard compression techniques. PMID:17281110
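The two-stage scheme (lossy approximation plus a losslessly coded residual) can be sketched generically. Here coarse quantization stands in for the paper's neural-network vector quantizer, and zlib stands in for its Huffman coder; both substitutions are assumptions for illustration only:

```python
import numpy as np
import zlib

def lossy(img, step=16):
    """Stand-in lossy stage: coarse quantization (the paper uses a neural-net VQ)."""
    return (img // step) * step + step // 2

def compress_lossless(img):
    approx = lossy(img)
    residual = img.astype(np.int16) - approx.astype(np.int16)  # small, low-entropy errors
    payload = zlib.compress(residual.tobytes())                # stand-in for Huffman coding
    return approx, payload

def decompress(approx, payload):
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    return (approx.astype(np.int16) + residual.reshape(approx.shape)).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.normal(128, 10, (64, 64)).clip(0, 255).astype(np.uint8)  # smooth-ish "radiograph"
approx, payload = compress_lossless(img)
restored = decompress(approx, payload)   # bit-exact copy of img
```

The better the lossy stage approximates the image, the smaller and more compressible the residual, while the overall round trip stays strictly lossless.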
Improved Compression of Wavelet-Transformed Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2005-01-01
length and the code parameter. When this difference falls outside a fixed range, the code parameter is updated (increased or decreased). The Golomb code parameter is selected based on the average magnitude of recently encoded nonzero samples. The coding method requires no floating- point operations, and more readily adapts to local statistics than other methods. The method can also accommodate arbitrarily large input values and arbitrarily long runs of zeros. In practice, this means that changes in the dynamic range or size of the input data set would not require a change to the compressor. The algorithm has been tested in computational experiments on test images. A comparison with a previously developed algorithm that uses large code tables (generated via Huffman coding on training data) suggests that the data-compression effectiveness of the present algorithm is comparable to the best performance achievable by the previously developed algorithm.
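A minimal adaptive Rice/Golomb coder in this spirit can be sketched as follows: the parameter k is derived from a running mean of recent magnitudes, and the decoder mirrors the same state so no side information is needed. The exact update rule here is an assumption, not the algorithm described above:

```python
def _k(mean):
    """Rice parameter from the running mean magnitude (m = 2**k)."""
    return max(mean.bit_length() - 1, 0)

def rice_encode(values, mean0=4):
    bits, mean = [], mean0
    for v in values:                       # v: nonnegative integer samples
        k = _k(mean)
        q, r = v >> k, v & ((1 << k) - 1)  # unary quotient + k-bit remainder
        bits.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
        mean = (3 * mean + v) >> 2         # exponential moving average of magnitudes
    return "".join(bits)

def rice_decode(bits, n, mean0=4):
    out, mean, i = [], mean0, 0
    for _ in range(n):
        k = _k(mean)
        q = 0
        while bits[i] == "1":
            q += 1
            i += 1
        i += 1                             # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        v = (q << k) + r
        out.append(v)
        mean = (3 * mean + v) >> 2         # same state update as the encoder
    return out

vals = [3, 5, 9, 2, 0, 7, 20, 1]
enc = rice_encode(vals)
dec = rice_decode(enc, len(vals))          # recovers vals exactly
```

Because k adapts from decoded history only, the method needs no floating-point operations and no stored code tables, matching the properties described above.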
Point Cloud Server (pcs) : Point Clouds In-Base Management and Processing
NASA Astrophysics Data System (ADS)
Cura, R.; Perret, J.; Paparoditis, N.
2015-08-01
In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data has become more available. It is appreciated for its precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. This system is specifically designed to meet the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and exporting, and integrated processing. Moreover, the system fully integrates metadata (like sensor position) and can conjointly use point clouds with images, vectors, and other point clouds. The system also offers in-base processing for easy prototyping and parallel processing, and scales well. Lastly, the system is built on open source technologies and can therefore be easily extended and customised. We test the system with several billion points of point cloud data from lidar (aerial and terrestrial) and stereo-vision. We demonstrate ~400 million pts/h loading speed, user-transparent compression ratios greater than 2:1 to 4:1, filtering in the ~50 ms range, and output of about a million pts/s, along with classical processing such as object detection.
An overview of semantic compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2010-08-01
We live in such perceptually rich natural and manmade environments that detection and recognition of objects is mediated cerebrally by attentional filtering, in order to separate objects of interest from background clutter. In computer models of the human visual system, attentional filtering is often restricted to early processing, where areas of interest (AOIs) are delineated around anomalies of interest, then the pixels within each AOI's subtense are isolated for later processing. In contrast, the human visual system concurrently detects many targets at multiple levels (e.g., retinal center-surround filters, ganglion layer feature detectors, post-retinal spatial filtering, and cortical detection / filtering of features and objects, to name but a few processes). Intracranial attentional filtering appears to play multiple roles, including clutter filtration at all levels of processing - thus, we process individual retinal cell responses, early filtering response, and so forth, on up to the filtering of objects at high levels of semantic complexity. Computationally, image compression techniques have progressed from emphasizing pixels, to considering regions of pixels as foci of computational interest. In more recent research, object-based compression has been investigated with varying rate-distortion performance and computational efficiency. Codecs have been developed for a wide variety of applications, although the majority of compression and decompression transforms continue to concentrate on region- and pixel-based processing, in part because of computational convenience. It is interesting to note that a growing body of research has emphasized the detection and representation of small features in relationship to their surrounding environment, which has occasionally been called semantic compression. In this paper, we overview different types of semantic compression approaches, with particular interest in high-level compression algorithms. Various algorithms and
Compression and Progressive Retrieval of Multi-Dimensional Sensor Data
NASA Astrophysics Data System (ADS)
Lorkowski, P.; Brinkhoff, T.
2016-06-01
Since the emergence of sensor data streams, increasing amounts of observations have to be transmitted, stored and retrieved. Performing these tasks at the granularity of single points would mean an inappropriate waste of resources. Thus, we propose a concept that performs a partitioning of observations by spatial, temporal or other criteria (or a combination of them) into data segments. We exploit the resulting proximity (according to the partitioning dimension(s)) within each data segment for compression and efficient data retrieval. While in principle allowing lossless compression, it can also be used for progressive transmission with increasing accuracy wherever incremental data transfer is reasonable. In a first feasibility study, we apply the proposed method to a dataset of ARGO drifting buoys covering large spatio-temporal regions of the world's oceans and compare the achieved compression ratio to other formats.
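Progressive transmission with increasing accuracy can be sketched as coarse-to-fine residual stages per data segment: each stage sends what the previous stages missed, and receiving all stages is lossless. The stage bit shifts below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def encode_progressive(vals, shifts=(6, 3, 0)):
    """Split integer observations into coarse-to-fine residual stages.
    Stage s keeps the top remaining bits; shift 0 makes the last stage exact."""
    stages, approx = [], np.zeros_like(vals)
    for s in shifts:
        delta = (vals - approx) >> s << s   # zero out the s lowest remaining bits
        stages.append(delta)
        approx = approx + delta
    return stages

rng = np.random.default_rng(3)
vals = rng.integers(0, 1024, 100)           # one data segment of sensor readings
stages = encode_progressive(vals)
partial = stages[0] + stages[1]             # reconstruction after two of three stages
exact = sum(stages)                         # all stages together are lossless
```

In a real system each stage would additionally be entropy-coded; grouping spatially or temporally close observations into one segment makes the residuals small and highly compressible.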
Compression Wave Velocity of Cylindrical Rock Specimens: Engineering Modulus Interpretation
NASA Astrophysics Data System (ADS)
Cha, Minsu; Cho, Gye-Chun
2007-07-01
In this study, we experimentally assess which elastic modulus (Young's modulus or the constraint modulus) is appropriate for application to the compression wave velocity of rock cores measured via an ultrasonic pulse technique and a point-source travel-time method. Experimental tests are performed at pulse frequencies between 50 kHz and 1 MHz, ratios of diameter (D) to wavelength (λ) between 0.6 and 25.6, and specimen lengths between 10 and 70 cm. It is found that the compression wave velocities obtained from the two methods are constrained wave velocities, and thus the constraint modulus should be applied in the wave equation. Also, the effects of the ultrasonic pulse frequency, D/λ, and specimen length on compression wave velocity are negligible within the ranges explored in this study.
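The distinction matters because the two moduli imply different wave speeds. The standard elasticity relations can be checked numerically; the rock properties below are illustrative assumptions, not values from the study:

```python
import math

def constrained_modulus(E, nu):
    """M = E(1 - nu) / ((1 + nu)(1 - 2 nu)): modulus governing laterally
    confined (P-wave) compression, as opposed to unconfined rod compression."""
    return E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))

def wave_speed(modulus, rho):
    """v = sqrt(modulus / density) for the corresponding 1-D wave equation."""
    return math.sqrt(modulus / rho)

# Illustrative granite-like values (assumed): E = 50 GPa, nu = 0.25, rho = 2650 kg/m^3
E, nu, rho = 50e9, 0.25, 2650.0
v_bar = wave_speed(E, rho)                          # rod (Young's modulus) wave speed
v_p = wave_speed(constrained_modulus(E, nu), rho)   # constrained (P-) wave speed
# v_p exceeds v_bar for any nu > 0, so interpreting an ultrasonic P-wave
# velocity with Young's modulus would overestimate the stiffness.
```

For nu = 0.25 the constrained modulus is 1.2 E, so using Young's modulus in the wave equation misattributes that factor to the material.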
The upper-branch stability of compressible boundary layer flows
NASA Technical Reports Server (NTRS)
Gajjar, J. S. B.; Cole, J. W.
1989-01-01
The upper-branch linear and nonlinear stability of compressible boundary layer flows is studied using the approach of Smith and Bodonyi (1982) for a similar incompressible problem. Both pressure gradient boundary layers and Blasius flow are considered with and without heat transfer, and the neutral eigenrelations incorporating compressibility effects are obtained explicitly. The compressible nonlinear viscous critical layer equations are derived and solved numerically, and the results indicate some solutions with positive phase shift across the critical layer. Various limiting cases are investigated, including the case of much larger disturbance amplitudes, and this indicates the structure for the strongly nonlinear critical layer of the Benney-Bergeron (1969) type. It is also shown how a match with the inviscid neutral inflexional modes arising from the generalized inflexion point criterion is achieved.
Compression of spectral meteorological imagery
NASA Technical Reports Server (NTRS)
Miettinen, Kristo
1993-01-01
Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems with compression ratios from as little as 3:1 to as much as 20:1 for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
NASA Astrophysics Data System (ADS)
Mikheenko, P.; Colclough, M. S.; Chakalov, R.; Kawano, K.; Muirhead, C. M.
We report on an experimental investigation of the effect of flux compression in superconducting YBa2Cu3Ox (YBCO) films and YBCO/CMR (colossal magnetoresistive) multilayers. Flux compression produces a positive magnetic moment (m) upon cooling in a field from above to below the critical temperature. We found the effect of compression in all measured films and multilayers. In accordance with theoretical calculations, m is proportional to the applied magnetic field. The amplitude of the effect depends on the cooling rate, which suggests inhomogeneous cooling as its origin. The positive moment is always very small, a fraction of a percent of the ideal diamagnetic response. A CMR layer in contact with the HTS decreases the amplitude of the effect. Flux compression depends weakly on sample size, but is sensitive to sample form and topology. The positive magnetic moment does not appear in bulk samples at low cooling rates. Our results show that the main features of flux compression are very different from those of the paramagnetic Meissner effect observed in bulk high-temperature superconductors and Nb disks.
Compression of Probabilistic XML Documents
NASA Astrophysics Data System (ADS)
Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice
Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
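Generic DAG compression of the kind mentioned above replaces identical subtrees with a single shared node. This can be illustrated by hash-consing on a toy tree model (tuples of label and children, not the paper's PXML encoding):

```python
def dag_compress(tree, table):
    """tree = (label, [children]). Returns a node id; identical subtrees map to the
    same id because they produce the same (label, child-ids) key in the table."""
    label, children = tree
    key = (label, tuple(dag_compress(c, table) for c in children))
    return table.setdefault(key, len(table))   # reuse an existing id or mint a new one

def count_nodes(tree):
    return 1 + sum(count_nodes(c) for c in tree[1])

# Three identical <item> subtrees under one root, as in a repetitive XML catalog:
item = ("item", [("name", []), ("price", [])])
doc = ("catalog", [item, item, item])
table = {}
dag_compress(doc, table)
tree_size = count_nodes(doc)   # 10 tree nodes...
dag_size = len(table)          # ...but only 4 distinct DAG nodes
```

The more repetitive the document, the larger the gap between tree size and DAG size, which is why this simple technique pairs well with the PXML-specific compaction.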
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...
Flux Compression Magnetic Nozzle
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Schafer, Charles (Technical Monitor)
2001-01-01
In pulsed fusion propulsion schemes in which the fusion energy creates a radially expanding plasma, a magnetic nozzle is required to redirect the radially diverging flow of the expanding fusion plasma into a rearward axial flow, thereby producing a forward axial impulse to the vehicle. In a highly electrically conducting plasma, the presence of a magnetic field B in the plasma creates a pressure B(exp 2)/2(mu) in the plasma, the magnetic pressure. A gradient in the magnetic pressure can be used to decelerate the plasma traveling in the direction of increasing magnetic field, or to accelerate a plasma from rest in the direction of decreasing magnetic pressure. In principle, ignoring dissipative processes, it is possible to design magnetic configurations to produce an 'elastic' deflection of a plasma beam. In particular, it is conceivable that, by an appropriate arrangement of a set of coils, a good approximation to a parabolic 'magnetic mirror' may be formed, such that a beam of charged particles emanating from the focal point of the parabolic mirror would be reflected by the mirror to travel axially away from the mirror. The degree to which this may be accomplished depends on the degree of control one has over the flux surface of the magnetic field, which changes as a result of its interaction with a moving plasma.
Lossless compression of projection data from photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2016-03-01
With many attractive attributes, photon counting detectors with many energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck problem. The higher resolution of these detectors and the need for faster acquisition additionally contribute to this issue. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, by utilizing the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms including a head phantom, compression ratios of 2.3:1-2.4:1 were achieved. The proposed predictor using zero, three, and four gradient contexts was compared to JPEG-LS and to the ideal predictor (noiseless projection data). Among our proposed predictors, the three-gradient context is preferred, with a compression ratio from Golomb coding 7% higher than JPEG-LS and only 3% lower than the ideal predictor. In terms of encoder efficiency, the Golomb code with the proposed three-gradient contexts achieves higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization, using quantization boundaries that limit the ratio of quantization-error variance to quantum-noise variance. Applying our proposed predictor with the three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 but introduced an error standard deviation of 2.1% of that of quantum noise in reconstructed images. From the initial
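The cross-bin prediction idea can be sketched as follows: each sample is predicted from its spatial neighbor and the co-located sample in another energy bin, and only the residual is entropy-coded. The equal weights, simulated counts, and single-neighbor predictor are assumptions for illustration, not the paper's trained predictor or context model:

```python
import numpy as np

def predict(cur, other, w=0.5):
    """Predict each sample of one energy bin from its left neighbor in the same
    bin and the co-located sample in another bin. Weights are illustrative."""
    pred = other.astype(float).copy()          # no left neighbor for sample 0
    pred[1:] = w * cur[:-1] + (1 - w) * other[1:]
    return np.rint(pred).astype(cur.dtype)

rng = np.random.default_rng(2)
x = np.linspace(0, 3, 1000)
mean = 200 + 80 * np.sin(x)                    # smooth projection profile (shared structure)
bin1 = rng.poisson(mean)                       # two energy bins see the same anatomy,
bin2 = rng.poisson(mean)                       # differing mainly in quantum noise
residual = bin2 - predict(bin2, bin1)          # small-magnitude, cheap to entropy-code
```

Because the anatomy-driven structure is common to all bins, the residuals carry mostly quantum noise and have a much narrower distribution than the raw counts, which is what a Golomb-style coder exploits.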
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
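The core idea (approximate a smooth time series by a truncated Chebyshev expansion and keep only the coefficients) can be sketched with NumPy. The signal, coefficient count, and least-squares fit are illustrative assumptions, not the patented implementation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_compress(series, ncoef):
    """Fit a degree-(ncoef - 1) Chebyshev polynomial to the series; the ncoef
    coefficients are the compressed representation."""
    t = np.linspace(-1, 1, series.size)        # map the sample index onto [-1, 1]
    return C.chebfit(t, series, ncoef - 1)

def cheb_decompress(coef, n):
    """Evaluate the polynomial back onto n sample points."""
    return C.chebval(np.linspace(-1, 1, n), coef)

t = np.linspace(0, 1, 512)
series = np.exp(-3 * t) * np.sin(8 * t)        # smooth, telemetry-like signal
coef = cheb_compress(series, 20)               # 512 samples -> 20 coefficients (25.6:1)
err = np.max(np.abs(cheb_decompress(coef, 512) - series))
```

For smooth data the Chebyshev coefficients decay rapidly, so a short coefficient vector reconstructs the series almost exactly; noisy or discontinuous segments would need more coefficients or block-wise fitting.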
Compressive behavior of fine sand.
Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne
2010-04-01
The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density, and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic compression and smaller still after dynamic axial loading.
Measurement of compressed breast thickness by optical stereoscopic photogrammetry
Tyson, Albert H.; Mawdsley, Gordon E.; Yaffe, Martin J.
2009-02-15
The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.
Stress relaxation in vanadium under shock and shockless dynamic compression
Kanel, G. I.; Razorenov, S. V.; Garkushin, G. V.; Savinykh, A. S.; Zaretsky, E. B.
2015-07-28
The evolution of elastic-plastic waves has been recorded in three series of plate impact experiments with annealed vanadium samples under conditions of shockless and combined ramp-and-shock dynamic compression. The incident wave profiles were shaped using intermediate base plates made of different silicate glasses, through which the compression waves entered the samples. Measurements of the free-surface velocity histories revealed an apparent growth of the Hugoniot elastic limit with decreasing average rate of compression. The growth is explained by "freezing" of the elastic precursor decay in the region where the incident and reflected waves interact. The data obtained show that the current value of the Hugoniot elastic limit and the plastic strain rate are associated with the rate of elastic precursor decay rather than with the local rate of compression. The study has revealed the contribution of dislocation multiplication in elastic waves. It has been shown that, independently of the compression history, the material arrives at the minimum point between the elastic and plastic waves with the same density of mobile dislocations.
Compressive residual strength of graphite/epoxy laminates after impact
NASA Technical Reports Server (NTRS)
Guy, Teresa A.; Lagace, Paul A.
1992-01-01
The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a (+ or - 45/0)(sub 2S) configuration. Three different impactor masses were used at various velocities, and the resultant damage was measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open-hole strength for a hole of the same diameter as the impactor. Increasing velocity beyond the point of minimum strength changes the damage produced and results in an increase in compressive residual strength that asymptotes to the open-hole strength value. Furthermore, the results show that this minimum compressive residual strength is independent of the impactor mass used and depends only on the damage present in the impacted specimen, which is the same for the three impactor masses. A full 3-D representation of the damage is obtained through the various techniques. Only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state of the art in predictive analysis shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.
Simulating Ramp Compression of Diamond
NASA Astrophysics Data System (ADS)
Godwal, B. K.; Gonzàlez-Cataldo, F. J.; Jeanloz, R.
2014-12-01
We model ramp compression, shock-free dynamic loading, intended to generate a well-defined equation of state that achieves higher densities and lower temperatures than the corresponding shock Hugoniot. Ramp loading ideally approaches isentropic compression for a fluid sample, so it is useful for simulating the states deep inside convecting planets. Our model explicitly evaluates the deviation of ramp from "quasi-isentropic" compression. Motivated by recent ramp-compression experiments to 5 TPa (50 Mbar), we calculate the room-temperature isotherm of diamond using first-principles density functional theory and molecular dynamics, from which we derive a principal isentrope and Hugoniot by way of the Mie-Grüneisen formulation and the Hugoniot conservation relations. We simulate ramp compression by imposing a uniaxial strain that then relaxes to an isotropic state, evaluating the change in internal energy and stress components as the sample relaxes toward isotropic strain at constant volume; temperature is well defined for the resulting hydrostatic state. Finally, we evaluate multiple shock- and ramp-loading steps to compare with single-step loading to a given final compression. Temperatures calculated for single-step ramp compression are less than Hugoniot temperatures only above 500 GPa, the two being close to each other at lower pressures. We obtain temperatures of 5095 K and 6815 K for single-step ramp loading to 600 and 800 GPa, for example, which compares well with values of ~5100 K and ~6300 K estimated from previous experiments [PRL, 102, 075503, 2009]. At 800 GPa, diamond is calculated to have a temperature of 500 K along the isentrope; 900 K under multi-shock compression (asymptotic result after 8-10 steps); and 3400 K under 3-step ramp loading (200-400-800 GPa). Asymptotic multi-step shock and ramp loading are indistinguishable from the isentrope, within present uncertainties. Our simulations quantify the manner in which current experiments can simulate the states deep inside planets.
GPU-accelerated compressive holography.
Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2016-04-18
In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ_{1}- and total variation (TV)-regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that the GPU-based implementation is about 20 times faster than the CPU-based implementation. PMID:27137282
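The iterative shrinkage-thresholding core of such ℓ₁-regularized reconstruction can be sketched in a few lines. This is a generic CPU/NumPy illustration of FISTA on a small compressed-sensing problem, not the paper's GPU holography code; the matrix sizes and parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrinks values toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fista(A, y, lam=0.01, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by accelerated ISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        # gradient step on the smooth term, then shrinkage on the l1 term
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Recover a 3-sparse vector from 40 random measurements of a length-100 signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = fista(A, y)
```

Every operation in the loop is a matrix-vector product or an elementwise map, which is why the algorithm parallelizes so well on a GPU.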
Analyzing Ramp Compression Wave Experiments
NASA Astrophysics Data System (ADS)
Hayes, D. B.
2007-12-01
Isentropic compression of a solid to hundreds of GPa by a ramped, planar compression wave allows measurement of material properties at high strain and at modest temperature. Introducing a measurement plane disturbs the flow, requiring special analysis techniques. If the measurement interface is windowed, the unsteady nature of the wave in the window requires special treatment. When the flow is hyperbolic, the equations of motion can be integrated backward in space in the sample to a region undisturbed by the interface interactions, fully accounting for those interactions. For more complex materials, such as hysteretic elastic/plastic solids or phase-changing materials, hybrid analysis techniques are required.
Extended testing of compression distillation.
NASA Technical Reports Server (NTRS)
Bambenek, R. A.; Nuccio, P. P.
1972-01-01
During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering usable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit developed for this system and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes to assure the recovery of sterile potable water from urine and treated urinal flush water.
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and for two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
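Double delta coding exploits pixel correlation by storing second differences (the delta of the deltas), which cluster near zero for smoothly varying scan lines and so admit short codes. A minimal round-trip sketch, with illustrative function names (the paper's exact scheme, including its extension code and background skipping, is more elaborate):

```python
import numpy as np

def double_delta_encode(row):
    """Encode a scan line as second differences (delta of deltas)."""
    d1 = np.diff(row, prepend=0)   # first differences; d1[0] keeps the raw start
    d2 = np.diff(d1, prepend=0)    # second differences
    return d2

def double_delta_decode(d2):
    """Invert the encoding with two cumulative sums."""
    return np.cumsum(np.cumsum(d2))

# A smoothly brightening scan line: second differences are tiny after the start.
row = np.array([100, 102, 105, 109, 114], dtype=np.int64)
encoded = double_delta_encode(row)   # [100, -98, 1, 1, 1]
decoded = double_delta_decode(encoded)
```

After the first two entries, the encoded values are small integers, which is exactly what makes a background-skipping or variable-length code effective downstream.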
Compressing the Inert Doublet Model
Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro
2016-02-16
The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This symmetry stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. Finally, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.
Structured illumination temporal compressive microscopy
Yuan, Xin; Pang, Shuo
2016-01-01
We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected through the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A two-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586
Finite scale equations for compressible fluid flow
Margolin, Len G
2008-01-01
Finite-scale equations (FSE) describe the evolution of finite volumes of fluid over time. We discuss the FSE for a one-dimensional compressible fluid, each point of which is governed by the Navier-Stokes equations. The FSE contain new momentum and internal energy transport terms. These are similar to terms added in numerical simulation for high-speed flows (e.g., artificial viscosity) and for turbulent flows (e.g., subgrid-scale models). These similarities suggest that the FSE may provide new insight as a basis for computational fluid dynamics. Our analysis of the FS continuity equation leads to a physical interpretation of the new transport terms and indicates the need to carefully distinguish between volume-averaged and mass-averaged velocities in numerical simulation. We make preliminary connections to other recent work reformulating the Navier-Stokes equations.
Image Segmentation, Registration, Compression, and Matching
NASA Technical Reports Server (NTRS)
Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina
2011-01-01
A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters is needed a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity information.
Trajectory NG: portable, compressed, general molecular dynamics trajectories.
Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David
2011-10-01
We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied. PMID:21267752
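The pipeline described — quantize coordinates to a chosen precision, take differences in time so successive values are small, then apply a block-sorting compressor with Huffman coding — can be sketched compactly. The sketch below uses Python's `bz2` (bzip2 is itself a block-sorting compressor with Huffman coding) as a stand-in for the paper's custom codec; function names and the test trajectory are illustrative.

```python
import bz2
import numpy as np

def compress_frames(frames, precision=1e-3):
    """Quantize coordinates, difference successive frames in time,
    then apply bzip2 (block sorting + Huffman coding) to the deltas."""
    q = np.round(np.asarray(frames) / precision).astype(np.int32)
    deltas = np.concatenate([q[:1], np.diff(q, axis=0)])  # time differences
    return bz2.compress(deltas.tobytes()), q.shape

def decompress_frames(blob, shape, precision=1e-3):
    """Invert: decompress, cumulative-sum over time, rescale."""
    deltas = np.frombuffer(bz2.decompress(blob), dtype=np.int32).reshape(shape)
    return np.cumsum(deltas, axis=0) * precision

# 200 frames of 50 "atoms" drifting slowly -- successive frames are similar,
# so the time deltas are small integers that compress well.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(0, 0.01, size=(200, 50, 3)), axis=0) + 5.0
blob, shape = compress_frames(traj)
restored = decompress_frames(blob, shape)
ratio = traj.astype(np.float32).nbytes / len(blob)
```

The reconstruction error is bounded by the chosen quantization precision, mirroring the paper's point that storing values at reduced precision is often acceptable for structural and dynamic analysis.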
Melting of compressed iron by monitoring atomic dynamics
NASA Astrophysics Data System (ADS)
Jackson, Jennifer M.; Sturhahn, Wolfgang; Lerche, Michael; Zhao, Jiyong; Toellner, Thomas S.; Alp, E. Ercan; Sinogeikin, Stanislav V.; Bass, Jay D.; Murphy, Caitlin A.; Wicks, June K.
2013-01-01
We present a novel method for detecting the solid-liquid phase boundary of compressed iron at high temperatures using synchrotron Mössbauer spectroscopy (SMS). Our approach is unique because the dynamics of the iron atoms are monitored. This process is described by the Lamb-Mössbauer factor, which is related to the mean-square displacement of the iron atoms. Focused synchrotron radiation with 1 meV bandwidth passes through a laser-heated 57Fe sample inside a diamond-anvil cell, and the characteristic SMS time signature vanishes when melting occurs. At our highest compression measurement and considering thermal pressure, we find the melting point of iron to be TM=3025±115 K at P=82±5 GPa. When compared with previously reported melting points for iron using static compression methods with different criteria for melting, our melting trend defines a steeper positive slope as a function of pressure. The obtained melting temperatures represent a significant step toward a reliable melting curve of iron at Earth's core conditions. For other terrestrial planets possessing cores with liquid portions rich in metallic iron, such as Mercury and Mars, the higher melting temperatures for compressed iron may imply warmer internal temperatures.
Astronomical context coder for image compression
NASA Astrophysics Data System (ADS)
Pata, Petr; Schindler, Jaromir
2015-10-01
Recent lossless still-image compression formats are powerful tools for compressing all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, to be successful a compression algorithm has to take full advantage of the coded image's properties. Astronomical data form a special class of images: they have, among general image properties, some specific characteristics that are unique. If a new coder is able to correctly exploit knowledge of these special properties, it should achieve superior performance on this class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method is presented. The achievable compression ratio of the new coder is compared to the theoretical lossless compression limit and to recent compression standards in astronomy and general multimedia.
Compression fractures of the back
... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet. 2009;373(9668):1016-24. PMID: 19246088.
A programmable image compression system
NASA Technical Reports Server (NTRS)
Farrelle, Paul M.
1989-01-01
A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single-frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled-quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple, consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm-specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.
COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME
It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...
Teaching Time-Space Compression
ERIC Educational Resources Information Center
Warf, Barney
2011-01-01
Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…