Science.gov

Sample records for 1-dB compression point

  1. Design Point for a Spheromak Compression Experiment

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team

    2015-11-01

    Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. Work performed under DARPA grant N66001-14-1-4044.

  2. Ischemic Compression After Trigger Point Injection Affect the Treatment of Myofascial Trigger Points

    PubMed Central

    Kim, Soo A; Oh, Ki Young; Choi, Won Hyuck

    2013-01-01

    Objective To investigate the effects of trigger point injection with or without ischemic compression in the treatment of myofascial trigger points in the upper trapezius muscle. Methods Sixty patients with active myofascial trigger points in the upper trapezius muscle were randomly divided into three groups: group 1 (n=20) received only trigger point injections, group 2 (n=20) received trigger point injections with 30 seconds of ischemic compression, and group 3 (n=20) received trigger point injections with 60 seconds of ischemic compression. The visual analogue scale, pressure pain threshold, and range of motion of the neck were assessed before treatment, immediately after treatment, and 1 week after treatment. Korean Neck Disability Indexes were assessed before treatment and 1 week after treatment. Results We found a significant improvement in all assessment parameters (p<0.05) in all groups. However, the groups receiving trigger point injections with ischemic compression showed significantly greater improvement than the group receiving trigger point injections alone, and no significant differences were found between the 30-second and 60-second ischemic compression groups. Conclusion This study demonstrated the effectiveness of ischemic compression for myofascial trigger points. Trigger point injection combined with ischemic compression treats myofascial trigger points in the upper trapezius muscle more effectively than trigger point injection alone, but the duration of ischemic compression did not affect the treatment outcome. PMID:24020035

  3. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation. PMID:26356981
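
    The core property described above, a fixed, user-specified bit budget per small block, is what makes random access possible: any block's offset in the compressed store is computable from its index. The toy sketch below illustrates only that addressing idea, using float16 payloads as a stand-in for the paper's lifted block transform and embedded coding; the class name and block layout are invented for illustration.

```python
import numpy as np

class FixedRateBlockArray2D:
    """Toy fixed-rate compressed 2D array: every 4x4 block is stored with the
    same byte budget (float16 here, i.e. a fixed 16 bits/value), so any block's
    offset is computable and blocks can be read or written independently."""
    BLOCK = 4

    def __init__(self, nx, ny):
        assert nx % self.BLOCK == 0 and ny % self.BLOCK == 0
        self.nx, self.ny = nx, ny
        self.bytes_per_block = self.BLOCK * self.BLOCK * 2   # fixed-rate payload
        nblocks = (nx // self.BLOCK) * (ny // self.BLOCK)
        self.store = bytearray(nblocks * self.bytes_per_block)

    def _offset(self, bi, bj):
        return (bi * (self.ny // self.BLOCK) + bj) * self.bytes_per_block

    def write_block(self, bi, bj, block):
        off = self._offset(bi, bj)
        self.store[off:off + self.bytes_per_block] = \
            np.asarray(block, dtype=np.float16).tobytes()

    def read_block(self, bi, bj):
        off = self._offset(bi, bj)
        raw = bytes(self.store[off:off + self.bytes_per_block])
        return np.frombuffer(raw, dtype=np.float16).reshape(self.BLOCK, self.BLOCK)

arr = FixedRateBlockArray2D(8, 8)
arr.write_block(1, 1, np.arange(16, dtype=float).reshape(4, 4))
print(arr.read_block(1, 1))
```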

  4. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data. PMID:17080858
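
    The abstract outlines the general recipe of lossless floating-point coders: predict each value, transform the prediction residual so it exposes long runs of zero bits, and entropy-code the result. The sketch below is a minimal stand-in, assuming a simple previous-value predictor and zlib in place of the paper's improved entropy coder; it is not the authors' scheme, but it is exactly lossless.

```python
import numpy as np
import zlib

def compress_lossless(values):
    # Predict each value by its predecessor and XOR the raw 64-bit patterns,
    # so good predictions produce many zero bytes for the entropy coder.
    bits = np.ascontiguousarray(values, dtype=np.float64).view(np.uint64)
    residual = np.empty_like(bits)
    residual[0] = bits[0]
    residual[1:] = bits[1:] ^ bits[:-1]          # reversible: exact values preserved
    return zlib.compress(residual.tobytes(), 9)  # zlib stands in for the entropy coder

def decompress_lossless(blob):
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.uint64)
    bits = np.bitwise_xor.accumulate(residual)   # undo the prediction step
    return bits.view(np.float64)

data = np.cumsum(np.random.default_rng(0).standard_normal(10000)) * 1e-3
blob = compress_lossless(data)
assert np.array_equal(decompress_lossless(blob), data)
print(f"{data.nbytes / len(blob):.2f}x compression")
```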

  5. Measurement dimensions compressed spectral imaging with a single point detector

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Feng; Yu, Wen-Kai; Yao, Xu-Ri; Dai, Bin; Li, Long-Zhen; Wang, Chao; Zhai, Guang-Jie

    2016-04-01

    An experimental demonstration of spectral imaging with compressed measurement dimensions has been performed. Using the dual compressed sensing (CS) method we derive, the spectral image of a colored object can be obtained with only a single point detector, and sub-sampling is achieved in both the spatial and spectral domains. The performance of dual CS spectral imaging is analyzed, including the effects of the dual modulation numbers and of measurement noise on imaging quality. Our scheme provides a stable, high-flux measurement approach to spectral imaging.

  6. Fixed-rate compressed floating-point arrays

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Using a C++ interface, this allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that through operator overloading can be treated just like conventional, uncompressed arrays, but which allow the user to specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.

  7. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.
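
    As a concrete illustration of the first (octree-based) scheme, the sketch below builds the per-node occupancy bytes for a point set, which is the hierarchical geometry stream that the paper then entropy-codes with PPM; the PPM context model, colour attributes and motion compensation are omitted, and the function is a simplified stand-in rather than the authors' coder.

```python
import numpy as np

def encode_octree(points, center, half, depth):
    """Toy occupancy coder: each node emits one byte whose bits flag the
    occupied child octants, transmitted top-down for progressive refinement."""
    if depth == 0 or len(points) == 0:
        return []
    # classify every point into one of the 8 child octants
    octant = ((points[:, 0] > center[0]).astype(int)
              | ((points[:, 1] > center[1]).astype(int) << 1)
              | ((points[:, 2] > center[2]).astype(int) << 2))
    mask, children = 0, []
    for o in range(8):
        sub = points[octant == o]
        if len(sub):
            mask |= 1 << o
            sign = np.array([1 if (o >> k) & 1 else -1 for k in range(3)])
            children.append((sub, center + sign * half / 2.0))
    stream = [mask]
    for sub, child_center in children:
        stream += encode_octree(sub, child_center, half / 2.0, depth - 1)
    return stream

pts = np.random.rand(500, 3)
stream = encode_octree(pts, center=np.array([0.5, 0.5, 0.5]), half=0.5, depth=4)
print(len(stream), "occupancy bytes for", len(pts), "points")
```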

  8. Prediction of optimal operation point existence and parameters in lossy compression of noisy images

    NASA Astrophysics Data System (ADS)

    Zemliachenko, Alexander N.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2014-10-01

    This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free images may reach an optimum, i.e., the maximal noise removal effect takes place. If an OOP exists, it is reasonable to compress an image in its neighbourhood; if not, more "careful" compression is reasonable. In this paper, we demonstrate that the existence of an OOP can be predicted based on a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.
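
    The prediction step the authors describe rests on comparing DCT-coefficient statistics in 8x8 blocks against the known noise level. The sketch below shows that kind of computation in a simplified form; the decision threshold and the returned ratio are illustrative assumptions, not the predictor actually derived in the paper.

```python
import numpy as np
from scipy.fft import dctn

def predict_oop(image, noise_std):
    """Compare the spread of AC DCT coefficients in 8x8 blocks with the known
    noise level; when the image content is weak relative to the noise, lossy
    compression near the noise level tends to denoise, i.e. an OOP exists."""
    h, w = image.shape
    ac_energy = []
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            c = dctn(image[i:i + 8, j:j + 8], norm='ortho')
            c[0, 0] = 0.0                           # discard the DC term
            ac_energy.append(np.mean(c ** 2))
    ratio = np.mean(ac_energy) / noise_std ** 2
    return ratio < 2.0, ratio                       # threshold is purely illustrative

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))    # smooth toy image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
print(predict_oop(noisy, noise_std=0.1))
```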

  9. Parametric temporal compression of infrared imagery sequences containing a slow-moving point target.

    PubMed

    Huber-Shalem, Revital; Hadar, Ofer; Rotman, Stanley R; Huber-Lerner, Merav

    2016-02-10

    Infrared (IR) imagery sequences are commonly used for detecting moving targets in the presence of evolving cloud clutter or background noise. This research focuses on slow-moving point targets that are less than one pixel in size, such as aircraft at long range from a sensor. Since transmitting IR imagery sequences to a base unit or storing them consumes considerable time and resources, a compression method that maintains the point target detection capabilities is highly desirable. In this work, we introduce a new parametric temporal compression that incorporates Gaussian fit and polynomial fit. We then proceed to spatial compression by spatially applying the lowest possible number of bits for representing each parameter over the parameters extracted by temporal compression, which is followed by bit encoding to achieve an end-to-end compression process of the sequence for data storage and transmission. We evaluate the proposed compression method using the variance estimation ratio score (VERS), which is a signal-to-noise ratio (SNR)-based measure for point target detection that scores each pixel and yields an SNR scores image. A high pixel score indicates that a target is suspected to traverse the pixel. From this score image we calculate the movie scores, which are found to be close to those of the original sequences. Furthermore, we present a new algorithm for automatic detection of the target tracks. This algorithm extracts the target location from the SNR scores image, which is acquired during the evaluation process, using Hough transform. This algorithm yields a similar detection probability (PD) and false alarm probability (PFA) for the compressed and the original sequences. The parameters of the new parametric temporal compression successfully differentiate the targets from the background, yielding high PDs (above 83%) with low PFAs (below 0.043%) without the need to calculate pixel scores or to apply automatic detection of the target tracks. PMID
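
    The parametric temporal compression replaces each pixel's temporal profile with a few fitted parameters. The sketch below illustrates the idea with a per-pixel choice between a Gaussian and a quadratic polynomial fit; the model-selection rule, parameter names and initial guesses are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma, c):
    return a * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2)) + c

def compress_pixel(trace):
    """Replace one pixel's temporal profile by fitted parameters: try a Gaussian
    (target-like transient) and a quadratic polynomial (background), keep the
    better fit."""
    t = np.arange(trace.size, dtype=float)
    poly = np.polyfit(t, trace, deg=2)
    best = ('poly', poly)
    try:
        p0 = [trace.max() - trace.min(), float(np.argmax(trace)), 3.0, trace.min()]
        gpar, _ = curve_fit(gaussian, t, trace, p0=p0, maxfev=2000)
        if (np.sum((gaussian(t, *gpar) - trace) ** 2)
                < np.sum((np.polyval(poly, t) - trace) ** 2)):
            best = ('gauss', gpar)
    except RuntimeError:
        pass                                         # keep the polynomial fallback
    return best

t = np.arange(100)
background = 0.5 + 0.001 * t
target = background + 2.0 * np.exp(-((t - 60.0) ** 2) / (2 * 4.0 ** 2))
print(compress_pixel(target)[0], compress_pixel(background)[0])
```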

  10. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
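
    The key step described above is quantization with dithering: a pseudo-random offset is subtracted before rounding and added back on restore, so the quantization error behaves like noise and photometric averages are preserved. A minimal sketch of that step is shown below (the subsequent Rice coding and FITS packaging done by fpack are not reproduced).

```python
import numpy as np

def quantize_with_dither(pixels, q, seed=0):
    """Subtractive dithering: subtract uniform noise before rounding to integer
    levels and add it back on restore; the decoder regenerates the same dither
    from the stored seed. The integer indices would then be losslessly coded."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(0.0, 1.0, size=pixels.shape)
    idx = np.round(pixels / q - dither).astype(np.int32)
    restored = (idx + dither) * q
    return idx, restored

img = np.random.default_rng(1).normal(1000.0, 5.0, (64, 64))   # synthetic "image"
idx, restored = quantize_with_dither(img, q=0.5)
err = restored - img
print(f"mean error {err.mean():.4f}, rms error {err.std():.4f} (about q/sqrt(12))")
```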

  11. Compression After Impact Testing of Sandwich Structures Using the Four Point Bend Test

    NASA Technical Reports Server (NTRS)

    Nettles, Alan T.; Gregory, Elizabeth; Jackson, Justin; Kenworthy, Devon

    2008-01-01

    For many composite laminated structures, the design is driven by data obtained from Compression after Impact (CAI) testing. There currently is no standard for CAI testing of sandwich structures although there is one for solid laminates of a certain thickness and lay-up configuration. Most sandwich CAI testing has followed the basic technique of this standard where the loaded ends are precision machined and placed between two platens and compressed until failure. If little or no damage is present during the compression tests, the loaded ends may need to be potted to prevent end brooming. By putting a sandwich beam in a four point bend configuration, the region between the inner supports is put under a compressive load and a sandwich laminate with damage can be tested in this manner without the need for precision machining. Also, specimens with no damage can be taken to failure so direct comparisons between damaged and undamaged strength can be made. Data is presented that demonstrates the four point bend CAI test and is compared with end loaded compression tests of the same sandwich structure.

  12. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486
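
    The first ingredient of the method is to represent each frame as a graph and treat 3D positions and colours as signals on its vertices. The sketch below builds a simple k-nearest-neighbour graph and its Laplacian for a point cloud; the spectral graph wavelet descriptors, feature matching and motion-field interpolation of the paper are not shown, and the Gaussian weighting kernel is an assumed choice.

```python
import numpy as np

def knn_graph_laplacian(points, k=8, sigma=0.05):
    """Build a symmetrized k-NN graph over the point cloud with Gaussian edge
    weights and return its combinatorial Laplacian; attributes such as colour
    are then treated as signals on the graph vertices."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    W = np.zeros_like(d)
    for i in range(len(points)):
        nbrs = np.argsort(d[i])[1:k + 1]              # skip the point itself
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2 / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                            # make the graph undirected
    L = np.diag(W.sum(axis=1)) - W
    return W, L

cloud = np.random.default_rng(0).random((200, 3))
color = np.random.default_rng(1).random(200)          # a per-point attribute (signal)
W, L = knn_graph_laplacian(cloud)
print("graph smoothness of the colour signal:", color @ L @ color)
```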

  13. Wet compression performance of a transonic compressor rotor at its near stall point

    NASA Astrophysics Data System (ADS)

    Yang, Huaifeng; Zheng, Qun; Luo, Mingcong; Sun, Lanxin; Bhargava, Rakesh

    2011-03-01

    In order to study the effects of wet compression on a transonic compressor, a full 3-D steady numerical simulation was carried out under varying conditions. Different injected water flow rates and droplet diameters were considered. The effect of wet compression on the shock, separated flow, pressure ratio, and efficiency was investigated. Additionally, the effect of wet compression on the tip clearance when the compressor runs in the near-stall and stall situations was emphasized. Analysis of the results shows that the range of stable operation is extended, and that the pressure ratio and inlet air flow rate are also increased at the near-stall point. In addition, it seems that there is an optimum size of the droplet diameter.

  14. Analysis of three-point-bend test for materials with unequal tension and compression properties

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1974-01-01

    An analysis capability is described for the three-point-bend test applicable to materials of linear but unequal tensile and compressive stress-strain relations. The capability consists of numerous equations of simple form and their graphical representation. Procedures are described to examine the local stress concentrations and failure modes initiation. Examples are given to illustrate the usefulness and ease of application of the capability. Comparisons are made with materials which have equal tensile and compressive properties. The results indicate possible underestimates for flexural modulus or strength ranging from 25 to 50 percent greater than values predicted when accounting for unequal properties. The capability can also be used to reduce test data from three-point-bending tests, extract material properties useful in design from these test data, select test specimen dimensions, and size structural members.

  15. Mathematical modelling of the beam under axial compression force applied at any point - the buckling problem

    NASA Astrophysics Data System (ADS)

    Magnucka-Blandzi, Ewa

    2016-06-01

    The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along its axis. The buckling problem has been described and solved mathematically, and critical loads have been calculated. In the particular case, the Euler buckling load is obtained. Explicit solutions are given. The values of critical loads are collected in tables and shown in a figure. The relation between the point of load application and the critical load is presented.
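
    As a worked check of the limiting case mentioned in the abstract, the snippet below evaluates Euler's buckling load P_cr = pi^2 E I / L^2 for a simply supported column; the numerical values are illustrative, not taken from the paper.

```python
import numpy as np

# Euler's buckling load for a simply supported column: P_cr = pi^2 * E * I / L^2
E = 210e9      # Pa   (steel, illustrative)
I = 8.0e-6     # m^4  (second moment of area, illustrative)
L = 3.0        # m
P_cr = np.pi ** 2 * E * I / L ** 2
print(f"Euler critical load: {P_cr / 1e3:.0f} kN")
```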

  16. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  17. Comparison of ring compression testing to three point bend testing for unirradiated ZIRLO cladding

    SciTech Connect

    None, None

    2015-04-01

    Safe shipment and storage of nuclear reactor discharged fuel requires an understanding of how the fuel may perform under the various conditions that can be encountered. One specific focus of concern is performance during a shipment drop accident. Tests at Savannah River National Laboratory (SRNL) are being performed to characterize the properties of fuel clad relative to a mechanical accident condition such as a container drop. Unirradiated ZIRLO tubing samples have been charged with a range of hydride levels to simulate actual fuel rod levels. Samples of the hydrogen charged tubes were exposed to a radial hydride growth treatment (RHGT) consisting of heating to 400°C, applying initial hoop stresses of 90 to 170 MPa with controlled cooling and producing hydride precipitates. Initial samples have been tested using both a) ring compression test (RCT) which is shown to be sensitive to radial hydride and b) three-point bend tests which are less sensitive to radial hydride effects. Hydrides are generated in Zirconium based fuel cladding as a result of coolant (water) oxidation of the clad, hydrogen release, and a portion of the released (nascent) hydrogen absorbed into the clad and eventually exceeding the hydrogen solubility limit. The orientation of the hydrides relative to the subsequent normal and accident strains has a significant impact on the failure susceptibility. In this study the impacts of stress, temperature and hydrogen levels are evaluated in reference to the propensity for hydride reorientation from the circumferential to the radial orientation. In addition the effects of radial hydrides on the Quasi Ductile Brittle Transition Temperature (DBTT) were measured. The results suggest that a) the severity of the radial hydride impact is related to the hydrogen level-peak temperature combination (for example at a peak drying temperature of 400°C; 800 PPM hydrogen has less of an impact/ less radial hydride fraction than 200 PPM hydrogen for the same thermal

  18. Index of Unconfined Compressive Strength of SAFOD Core by Means of Point-Load Penetrometer Tests

    NASA Astrophysics Data System (ADS)

    Enderlin, M. B.; Weymer, B.; D'Onfro, P. S.; Ramos, R.; Morgan, K.

    2010-12-01

    The San Andreas Fault Observatory at Depth (SAFOD) project is motivated by the need to answer fundamental questions on the physical and chemical processes controlling faulting and earthquake generation within major plate-boundaries. In 2007, approximately 135 ft (41.1 m) of 4 inch (10.16 cm) diameter rock cores was recovered from two actively deforming traces of the San Andreas Fault. 97 evenly (more or less) distributed index tests for Unconfined Compressive Strength (UCS) were performed on the cores using a modified point-load penetrometer. The point-load penetrometer used was a handheld micro-conical point indenter referred to as the Dimpler, in reference to the small conical depression that it creates. The core surface was first covered with compliant tape that is about a square inch in size. The conical tip of the indenter is coated with a (red) dye and then forced, at a constant axial load, through the tape and into the sample creating a conical red depression (dimple) on the tape. The combination of red dye and tape preserves a record of the dimple geometrical attributes. The geometrical attributes (e.g. diameter and depth) depend on the rock UCS. The diameter of a dimple is measured with a surface measuring magnifier. Correlation between dimple diameter and UCS has been previously established with triaxial testing. The SAFOD core gave Dimpler UCS values in the range of 10 psi (68.9 KPa) to 15,000 psi (103.4 MPa). The UCS index also allows correlations between geomechanical properties and well log-derived petrophysical properties.

  19. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    SciTech Connect

    York, A.R. II

    1997-07-01

    The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
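
    The abstract summarizes the MPM pipeline: material points carry mass, velocity and stress, and their momentum is mapped to an Eulerian grid where the momentum equation is solved. The sketch below shows only that particle-to-grid transfer step in 1D with linear shape functions; it is a minimal illustration, not the modified membrane/fluid formulation developed in the report.

```python
import numpy as np

def particles_to_grid(xp, mp, vp, dx, n_nodes):
    """Map particle mass and momentum to Eulerian grid nodes with linear
    (tent) shape functions -- the MPM transfer step described above."""
    mass = np.zeros(n_nodes)
    momentum = np.zeros(n_nodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x / dx)               # left node of the cell containing the particle
        w_right = x / dx - i          # linear shape-function weights
        w_left = 1.0 - w_right
        mass[i] += w_left * m
        momentum[i] += w_left * m * v
        mass[i + 1] += w_right * m
        momentum[i + 1] += w_right * m * v
    # nodal velocities; the momentum equation would then be advanced on the grid
    vel = np.divide(momentum, mass, out=np.zeros_like(mass), where=mass > 0)
    return mass, vel

xp = np.array([0.35, 0.42, 1.30, 2.70])                 # particle positions (m)
mass, vel = particles_to_grid(xp, mp=np.ones(4),
                              vp=np.array([1.0, 1.0, -2.0, 0.5]),
                              dx=0.5, n_nodes=8)
print(mass, vel)
```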

  20. Evolution of Skin Temperature after the Application of Compressive Forces on Tendon, Muscle and Myofascial Trigger Point

    PubMed Central

    Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus

    2015-01-01

    Because some assessment and diagnosis methods require palpation or the application of certain forces on the skin, which affects the structures beneath, it is important to define the possible influences of this physical contact on skin temperature. Thus, the aim of the present study is to determine the ideal time for performing a thermographic examination after palpation, based on the assessment of skin temperature evolution. A randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm^2 for 30 seconds, with a washout period of 48 hours, using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: the myofascial trigger point in the levator scapulae, the biceps brachii muscle and the palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05) on the palmaris longus tendon and biceps brachii muscle. However, no difference was observed when comparing the different compressive forces (p > 0.05). In conclusion, infrared thermography can be used after assessment or diagnosis methods based on the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after contact with the skin. PMID:26070073

  1. Map-Based Compressive Sensing Model for Wireless Sensor Network Architecture, A Starting Point

    NASA Astrophysics Data System (ADS)

    Mahmudimanesh, Mohammadreza; Khelil, Abdelmajid; Yazdani, Nasser

    Sub-Nyquist sampling techniques for Wireless Sensor Networks (WSN) are gaining increasing attention as an alternative method to capture natural events with desired quality while minimizing the number of active sensor nodes. Among those techniques, Compressive Sensing (CS) approaches are of special interest, because of their mathematically concrete foundations and efficient implementations. We describe how the geometrical representation of the sampling problem can influence the effectiveness and efficiency of CS algorithms. In this paper we introduce a Map-based model which exploits redundancy attributes of signals recorded from natural events to achieve an optimal representation of the signal.
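
    The sub-Nyquist idea behind compressive sensing is that a signal sparse in some basis can be recovered from far fewer random measurements than samples. The sketch below demonstrates recovery with orthogonal matching pursuit on synthetic data; the map-based model of the paper concerns how the sparsifying representation is chosen and is not reproduced here.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the columns of Phi most
    correlated with the residual and re-fit on the selected support."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x

n, m, k = 256, 64, 5                                  # signal length, measurements, sparsity
rng = np.random.default_rng(1)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random measurement matrix
x_hat = omp(Phi, Phi @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```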

  2. Correlation between the uniaxial compressive strength and the point load strength index of the Pungchon limestone, Korea

    NASA Astrophysics Data System (ADS)

    Baek, Hwanjo; Kim, Dae-Hoon; Kim, Kyoungman; Choi, Young-Sup; Kang, Sang-Soo; Kang, Jung-Seock

    2013-04-01

    Recently, the use of underground openings for various purposes is expanding, particularly for crushing and processing facilities in open-pit limestone mines. The suitability of current rockmass classification systems for limestone or dolostone is therefore one of the major concerns for field engineers. Consequently, development of the limestone mine site characterization model (LSCM) is underway through the joint efforts of research institutes and universities in Korea. An experimental program was undertaken to investigate the correlation between rock properties, for quick adaptation of the rockmass classification system in the field. The uniaxial compressive strength (UCS) of rock material is a key property for rockmass characterization purposes and is accordingly included in the rock mass rating (RMR). As core samples for the uniaxial compression test are not always easily obtained, indirect tests such as the point load test can be a useful alternative, and various equations relating the UCS to the point load strength index (Is50) have been reported in the literature. It is generally proposed that the relationship between the Is50 and the UCS value depends on the rock type and also on the testing conditions. This study investigates the correlation between the UCS and the Is50 of the Pungchon limestone, with a total of 48 core samples obtained from an underground limestone mine. Both uniaxial compression and point load specimens were prepared from the same segment of NX-sized rock cores. The equation obtained from regression analysis of the two variables is UCS = 26 Is50, with a root-mean-square error of 13.18.
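
    The reported relation UCS = 26 Is50 is a zero-intercept least-squares fit, whose slope is simply sum(x*y)/sum(x^2). The snippet below shows that computation on made-up placeholder pairs, not the paper's 48 samples.

```python
import numpy as np

# Zero-intercept least squares: slope = sum(x*y) / sum(x^2).
is50 = np.array([1.2, 2.5, 3.1, 4.0, 5.2])            # MPa (illustrative)
ucs = np.array([30.0, 66.0, 79.0, 105.0, 136.0])      # MPa (illustrative)
k = np.sum(is50 * ucs) / np.sum(is50 ** 2)
rmse = np.sqrt(np.mean((ucs - k * is50) ** 2))
print(f"UCS ~ {k:.1f} * Is50, RMSE = {rmse:.2f} MPa")
```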

  3. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion.

    PubMed

    Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel

    2016-07-18

    First-principle calculations together with analysis of the experimental data found for 3d^9 and 3d^7 ions in cubic oxides proved that the center found in irradiated CaO:Ni^2+ corresponds to Ni^+ under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥ - g0 = 0.065) measured at T = 20 K obeys the superposition of the |3z^2 - r^2⟩ and |x^2 - y^2⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e. one in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni^+. PMID:27028895

  4. An implicit finite volume nodal point scheme for the solution of two-dimensional compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Dutta, Vimala

    1993-07-01

    An implicit finite volume nodal point scheme has been developed for solving the two-dimensional compressible Navier-Stokes equations. The numerical scheme is evolved by efficiently combining the basic ideas of the implicit finite-difference scheme of Beam and Warming (1978) with those of nodal point schemes due to Hall (1985) and Ni (1982). The 2-D Navier-Stokes solver is implemented for steady, laminar/turbulent flows past airfoils by using C-type grids. Turbulence closure is achieved by employing the algebraic eddy-viscosity model of Baldwin and Lomax (1978). Results are presented for the NACA-0012 and RAE-2822 airfoil sections. Comparison of the aerodynamic coefficients with experimental results for the different test cases presented here establishes the validity and efficiency of the method.

  5. Comb generation using multiple compression points of Peregrine rogue waves in periodically modulated nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Tiofack, C. G. L.; Coulibaly, S.; Taki, M.; De Bièvre, S.; Dujardin, G.

    2015-10-01

    It is shown that sufficiently large periodic modulations in the coefficients of a nonlinear Schrödinger equation can drastically impact the spatial shape of the Peregrine soliton solutions: they can develop multiple compression points of the same amplitude, rather than only a single one, as in the spatially homogeneous focusing nonlinear Schrödinger equation. The additional compression points are generated in pairs forming a comblike structure. The number of additional pairs depends on the amplitude of the modulation but not on its wavelength, which controls their separation distance. The dynamics and characteristics of these generalized Peregrine solitons are analytically described in the case of a completely integrable modulation. A numerical investigation shows that their main properties persist in nonintegrable situations, where no exact analytical expression of the generalized Peregrine soliton is available. Our predictions are in good agreement with numerical findings for an interesting specific case of an experimentally realizable periodically dispersion modulated photonic crystal fiber. Our results therefore pave the way for the experimental control and manipulation of the formation of generalized Peregrine rogue waves in the wide class of physical systems modeled by the nonlinear Schrödinger equation.

  6. Changes in blood flow and cellular metabolism at a myofascial trigger point with trigger point release (ischemic compression): a proof-of-principle pilot study

    PubMed Central

    Moraska, Albert F.; Hickner, Robert C.; Kohrt, Wendy M.; Brewer, Alan

    2012-01-01

    Objective To demonstrate proof-of-principle measurement for physiological change within an active myofascial trigger point (MTrP) undergoing trigger point release (ischemic compression). Design Interstitial fluid was sampled continuously at a trigger point before and after intervention. Setting A biomedical research clinic at a university hospital. Participants Two subjects from a pain clinic presenting with chronic headache pain. Interventions A single microdialysis catheter was inserted into an active MTrP of the upper trapezius to allow for continuous sampling of interstitial fluid before and after application of trigger point therapy by a massage therapist. Main Outcome Measures Procedural success, pain tolerance, feasibility of intervention during sample collection, determination of physiologically relevant values for local blood flow, as well as glucose and lactate concentrations. Results Both patients tolerated the microdialysis probe insertion into the MTrP and treatment intervention without complication. Glucose and lactate concentrations were measured in the physiological range. Following intervention, a sustained increase in lactate was noted for both subjects. Conclusions Identifying physiological constituents of MTrPs following intervention is an important step toward understanding pathophysiology and resolution of myofascial pain. The present study forwards that aim by showing, as a proof of concept, that collection of interstitial fluid from an MTrP before and after intervention can be accomplished using microdialysis, thus providing methodological insight toward treatment mechanism and pain resolution. Of the biomarkers measured in this study, lactate may be the most relevant for detection and treatment of abnormalities in the MTrP. PMID:22975226

  7. 1DB, a one-dimensional diffusion code for nuclear reactor analysis

    SciTech Connect

    Little, W.W. Jr.

    1991-09-01

    1DB is a multipurpose, one-dimensional (plane, cylinder, sphere) diffusion theory code for use in reactor analysis. The code is designed to do the following: To compute k_eff and perform criticality searches on time absorption, reactor composition, reactor dimensions, and buckling by means of either a flux or an adjoint model; to compute collapsed microscopic and macroscopic cross sections averaged over the spectrum in any specified zone; to compute resonance-shielded cross sections using data in the shielding factor format; and to compute isotopic burnup using decay chains specified by the user. All programming is in FORTRAN. Because variable dimensioning is employed, no simple restrictions on problem complexity can be stated. The number of spatial mesh points, energy groups, upscattering terms, etc. is limited only by the available memory. The source file contains about 3000 cards. 4 refs.
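
    To give a feel for the kind of eigenvalue problem 1DB solves, the sketch below computes k_eff for a bare one-group slab by finite-difference diffusion and power iteration. It is a minimal illustration of the physics, not the 1DB algorithm, and the cross-section values are made up.

```python
import numpy as np

def keff_slab(D, sigma_a, nu_sigma_f, width, n=200, iters=200):
    """One-group, 1D slab diffusion eigenvalue by finite differences and power
    iteration (zero-flux boundaries): -D phi'' + Sigma_a phi = (1/k) nu Sigma_f phi."""
    dx = width / n
    main = 2.0 * D / dx**2 + sigma_a
    off = -D / dx**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    phi, k = np.ones(n), 1.0
    for _ in range(iters):
        phi_new = np.linalg.solve(A, nu_sigma_f * phi / k)              # flux solve
        k *= np.sum(nu_sigma_f * phi_new) / np.sum(nu_sigma_f * phi)    # eigenvalue update
        phi = phi_new
    return k

# illustrative one-group constants (cm, 1/cm) for an 80 cm bare slab
print(f"k_eff = {keff_slab(D=1.0, sigma_a=0.07, nu_sigma_f=0.08, width=80.0):.4f}")
```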

  8. Evidence for the Use of Ischemic Compression and Dry Needling in the Management of Trigger Points of the Upper Trapezius in Patients with Neck Pain: A Systematic Review.

    PubMed

    Cagnie, Barbara; Castelein, Birgit; Pollie, Flore; Steelant, Lieselotte; Verhoeyen, Hanne; Cools, Ann

    2015-07-01

    The aim of this review was to describe the effects of ischemic compression and dry needling on trigger points in the upper trapezius muscle in patients with neck pain and compare these two interventions with other therapeutic interventions aiming to inactivate trigger points. Both PubMed and Web of Science were searched for randomized controlled trials using different key word combinations related to myofascial neck pain and therapeutic interventions. Four main outcome parameters were evaluated on short and medium term: pain, range of motion, functionality, and quality-of-life, including depression. Fifteen randomized controlled trials were included in this systematic review. There is moderate evidence for ischemic compression and strong evidence for dry needling to have a positive effect on pain intensity. This pain decrease is greater compared with active range of motion exercises (ischemic compression) and no or placebo intervention (ischemic compression and dry needling) but similar to other therapeutic approaches. There is moderate evidence that both ischemic compression and dry needling increase side-bending range of motion, with similar effects compared with lidocaine injection. There is weak evidence regarding its effects on functionality and quality-of-life. On the basis of this systematic review, ischemic compression and dry needling can both be recommended in the treatment of neck pain patients with trigger points in the upper trapezius muscle. Additional research with high-quality study designs is needed to develop more conclusive evidence. PMID:25768071

  9. An evaluation of the sandwich beam in four-point bending as a compressive test method for composites

    NASA Technical Reports Server (NTRS)

    Shuart, M. J.; Herakovich, C. T.

    1978-01-01

    The experimental phase of the study included compressive tests on HTS/PMR-15 graphite/polyimide, 2024-T3 aluminum alloy, and 5052 aluminum honeycomb at room temperature, and tensile tests on graphite/polyimide at room temperature, -157 C, and 316 C. Elastic properties and strength data are presented for three laminates. The room temperature elastic properties were generally found to differ in tension and compression with Young's modulus values differing by as much as twenty-six percent. The effect of temperature on modulus and strength was shown to be laminate dependent. A three-dimensional finite element analysis predicted an essentially uniform, uniaxial compressive stress state in the top flange test section of the sandwich beam. In conclusion, the sandwich beam can be used to obtain accurate, reliable Young's modulus and Poisson's ratio data for advanced composites; however, the ultimate compressive stress for some laminates may be influenced by the specimen geometry.

  10. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  11. One-dimensional discrete LQR control of compression of the human chest impulsively loaded by fast moving point mass

    NASA Astrophysics Data System (ADS)

    Olejnik, Paweł; Awrejcewicz, Jan

    2011-05-01

    This paper presents an extension of an optimal discrete control methodology partially included in the Proceedings of the international conference on "Dynamical Systems Theory and Applications". A scheme is applied to realise an active control strategy, with a numerically estimated linear-quadratic optimal index of performance, for reducing the impact-induced deformation of a human chest loaded by a point mass at the central point of the upper-torso body. We focus on the application of one active element attached between the torso's upper back (viewed from the posterior direction) and a fixed support. As a practical result, we provide values of the quality and reaction matrices, some useful deformation and energy-dissipation time characteristics, and the resulting shape of the control-force time characteristic that would be required for a hypothetical real implementation.
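
    The control design referred to above is the discrete-time linear-quadratic regulator. The sketch below computes the steady-state LQR gain by iterating the Riccati recursion on an illustrative one-degree-of-freedom mass-spring-damper model; the matrices are assumptions for demonstration, not the chest model or the matrices reported in the paper.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=2000):
    """Steady-state discrete-time LQR gain via the Riccati recursion:
    u[k] = -K x[k] minimizes sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# illustrative 1-DOF mass-spring-damper (not the paper's chest model), Euler-discretized
dt, m, c, k = 1e-3, 1.0, 5.0, 1e3
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])
B = np.array([[0.0],
              [dt / m]])
K = dlqr_gain(A, B, Q=np.diag([1e4, 1.0]), R=np.array([[1e-3]]))
print("state-feedback gain:", K)
```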

  12. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  13. Behavior of perturbed plasma displacement near regular and singular X-points for compressible ideal magnetohydrodynamic stability analysis

    SciTech Connect

    Alladio, F.; Mancuso, A.; Micozzi, P.; Rogier, F.

    2006-08-15

    The ideal magnetohydrodynamic (MHD) stability analysis of axisymmetric plasma equilibria is simplified if magnetic coordinates, such as Boozer coordinates (ψ_T radial, i.e., toroidal flux divided by 2π, θ poloidal angle, φ toroidal angle, with Jacobian √g ∝ 1/B^2), are used. The perturbed plasma displacement ξ is Fourier expanded in the poloidal angle, and the normal-mode equation δW_p(ξ*, ξ) = ω^2 δW_k(ξ*, ξ) (where δW_p and δW_k are the perturbed potential and kinetic plasma energies and ω^2 is the eigenvalue) is solved through a 1D radial finite-element method. All magnetic coordinates are however plagued by divergent metric coefficients if magnetic separatrices exist within (or at the boundary of) the plasma. The ideal MHD stability of plasma equilibria in the presence of magnetic separatrices is therefore a disputed problem. We consider the most general case of a simply connected axisymmetric plasma, which embeds an internal magnetic separatrix (ψ_T = ψ_T^X, with rotational transform ι(ψ_T^X) = 0 and regular X-points, B ≠ 0) and is bounded by a second magnetic separatrix at the edge (ψ_T = ψ_T^max, with ι(ψ_T^max) ≠ 0) that includes a part of the symmetry axis (R = 0) and is limited by two singular X-points (B = 0). At the embedded separatrix, the ideal MHD stability analysis requires the continuity of the normal perturbed plasma displacement variable, ξ^ψ = ξ·∇ψ_T; the other displacement variables, the binormal η^ψ = ξ·(∇θ - ι∇φ) and the parallel μ = -√g ξ·∇φ, can instead be discontinuous everywhere.

  14. The use of the percentile method for searching empirical relationships between compression strength (UCS), Point Load (Is50) and Schmidt Hammer (RL) Indices

    NASA Astrophysics Data System (ADS)

    Bruno, Giovanni; Bobbo, Luigi; Vessia, Giovanna

    2014-05-01

    Is50 and RL indices are commonly used to indirectly estimate the compression strength of a rock deposit with in situ and laboratory devices. The widespread use of Point load and Schmidt hammer tests is due to the simplicity and speed of their execution. Their indices can be related to the UCS by means of ordinary least squares regression analyses. Several researchers suggest taking the lithology into account to build highly correlated empirical expressions (R^2 > 0.8) for estimating UCS from Is50 or RL values. Nevertheless, the lower and upper bounds of the UCS ranges that can be estimated by means of the two indirect indices are not clearly defined yet. Aydin (2009) stated that the Schmidt hammer test shall be used to assess the compression resistance of rocks characterized by UCS > 12-20 MPa. On the other hand, Point load measurements can be performed on weak rocks, but upper-bound values for UCS are not suggested. In this paper, the empirical relationships between UCS, RL and Is50 are searched by means of the percentile method (Bruno et al. 2013). This method looks for the best regression function between measured UCS data and one of the indirect indices, drawn from a subset of the measurement pairs given by the percentile values. These values are taken from the original dataset of both measures by calculating the cumulative function. No hypothesis on the probability distribution of the sample is needed, and the procedure proves robust with respect to odd values or outliers. In this study, carbonate sedimentary rocks are investigated. According to the rock mass classification of Dobereiner and De Freitas (1986), the UCS values for the studied rocks range from 'extremely weak' to 'strong'. For the analyzed data, UCS varies between 1.18 and 270.70 MPa. Thus, through the percentile method the best empirical relationships UCS-Is50 and UCS-RL are plotted. Relationships between Is50 and RL are drawn, too.
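
    The essence of the percentile method, as described above, is to regress on matched percentile values of the two measurement samples (taken from their cumulative functions) rather than on the raw pairs, which damps the influence of outliers. A minimal sketch is given below; the zero-intercept form and the synthetic data are illustrative assumptions.

```python
import numpy as np

def percentile_regression(ucs, is50, percentiles=np.arange(5, 100, 5)):
    """Regress on matched percentile values of the two samples (from their
    empirical cumulative functions) instead of on the raw pairs."""
    x = np.percentile(is50, percentiles)
    y = np.percentile(ucs, percentiles)
    return np.sum(x * y) / np.sum(x ** 2)        # zero-intercept slope, as in UCS = k * Is50

rng = np.random.default_rng(0)
is50 = rng.uniform(0.5, 6.0, 48)                 # synthetic indices, not the paper's data
ucs = 26.0 * is50 + rng.normal(0.0, 10.0, 48)
print(f"UCS ~ {percentile_regression(ucs, is50):.1f} * Is50")
```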

  15. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
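
    The patent's central idea is that quantization indices whose value is uncertain by one unit can be nudged between adjacent values to carry auxiliary bits. The toy sketch below encodes one bit per index in the index parity; the actual method uses a digital key-pair table rather than parity, so this is an illustration of the principle only.

```python
import numpy as np

def embed_bits(indices, bits):
    """Nudge each index to the adjacent value, if needed, so its parity
    carries one auxiliary bit (toy stand-in for the key-pair table)."""
    out = indices.copy()
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += 1 if out[i] % 2 == 0 else -1   # move by one unit only
    return out

def extract_bits(indices, n):
    return [int(v % 2) for v in indices[:n]]

idx = np.array([12, 7, 7, 30, 41, 18])               # quantization indices (illustrative)
payload = [1, 0, 1, 1, 0]
stego = embed_bits(idx, payload)
assert extract_bits(stego, len(payload)) == payload
print(idx, "->", stego)
```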

  16. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  17. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  18. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  19. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis, supported by multiple speckle realizations. High pixel count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with support of computational and optical co-design. Compressive sparse aperture holography can be another method. Compressive sparse sampling collects most of significant field

  20. Multiphase, Multicomponent Compressibility in Geothermal Reservoir Engineering

    SciTech Connect

    Macias-Chapa, L.; Ramey, H.J. Jr.

    1987-01-20

    Coefficients of compressibility below the bubble point were computed with a thermodynamic model for single- and multicomponent systems. Results showed coefficients of compressibility below the bubble point larger than the gas coefficient of compressibility at the same conditions. Two-phase compressibilities computed in the conventional way are underestimated and may lead to errors in reserve estimation and well test analysis. 10 refs., 9 figs.

  1. Compressible halftoning

    NASA Astrophysics Data System (ADS)

    Anderson, Peter G.; Liu, Changmeng

    2003-01-01

    We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method involves using novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the masks as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
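
    A rough sketch of the mask-as-sort-key idea, assuming a random non-repeating threshold mask (the paper's dispersed-dot and clustered-dot mask designs are not reproduced here):

```python
import numpy as np

def sortable_halftone(img, mask):
    """Halftone with a mask of distinct thresholds, then reorder the bits by
    threshold value. The reordering is reversible because the mask is known to
    both encoder and decoder, and it groups the bits into highly skewed runs
    that entropy-code well."""
    bits = (img > mask).astype(np.uint8)
    order = np.argsort(mask, axis=None)            # sort key shared with the decoder
    return bits.flatten()[order], order

rng = np.random.default_rng(0)
mask = rng.permutation(256 * 256).reshape(256, 256) / (256 * 256)   # non-repeating thresholds
img = np.tile(np.linspace(0, 1, 256), (256, 1))                      # smooth gradient image
rearranged, order = sortable_halftone(img, mask)
# low-threshold positions are almost all 1s, high-threshold positions almost all 0s
print(rearranged[:20], rearranged[-20:])
```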

  2. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
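
    As a rough illustration of the "polynomial compression" idea, the sketch below fits one polynomial to a smooth chunk and keeps only its coefficients; libpolycomp's actual chunking, Fourier filtering, and error-bound logic are omitted, and the function names are illustrative:

```python
import numpy as np

def poly_compress(samples, degree=4):
    """Compress a smooth, noise-free chunk by storing only polynomial coefficients."""
    t = np.arange(len(samples))
    return np.polyfit(t, samples, degree)

def poly_decompress(coeffs, n):
    """Rebuild the chunk by evaluating the stored polynomial."""
    return np.polyval(coeffs, np.arange(n))

# smooth timeline (e.g. telescope pointing): 1000 samples reduced to 3 coefficients
t = np.linspace(0, 1, 1000)
pointing = 0.3 * t**2 + 0.1 * t + 2.0
c = poly_compress(pointing, degree=2)
assert np.max(np.abs(poly_decompress(c, 1000) - pointing)) < 1e-6
```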

  3. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  4. Treatment of vertebral body compression fractures using percutaneous kyphoplasty guided by a combination of computed tomography and C-arm fluoroscopy with finger-touch guidance to determine the needle entry point.

    PubMed

    Wang, G Y; Zhang, C C; Ren, K; Zhang, P P; Liu, C H; Zheng, Z A; Chen, Y; Fang, R

    2015-01-01

    This study aimed to evaluate the results and complications of image-guided percutaneous kyphoplasty (PKP) using computed tomography (CT) and C-arm fluoroscopy, with finger-touch guidance to determine the needle entry point. Of the 86 patients (106 PKP) examined, 56 were treated for osteoporotic vertebral compression fractures and 30 for vertebral tumors. All patients underwent image-guided treatment using CT and conventional fluoroscopy, with finger-touch identification of a puncture point within a small incision (1.5 to 2 cm). Partial or complete pain relief was achieved in 98% of patients within 24 h of treatment. Moreover, a significant improvement in functional mobility and reduction in analgesic use was observed. CT allowed the detection of cement leakage in 20.7% of the interventions. No bone cement leakages with neurologic symptoms were noted. All work channels were made only once, and bone cement was distributed near the center of the vertebral body. Our study confirms the efficacy of PKP treatment in osteoporotic and oncological patients. The combination of CT and C-arm fluoroscopy with finger-touch guidance reduces the risk of complications compared with conventional fluoroscopy alone, facilitates the detection of minor cement leakage, improves the operative procedure, and results in a favorable bone cement distribution. PMID:25867298

  5. [Compression material].

    PubMed

    Perceau, Géraldine; Faure, Christine

    2012-01-01

    The compression of a venous ulcer is carried out with the use of bandages, and for less exudative ulcers, with socks, stockings or tights. The system of bandages is complex. Different forms of extension and therefore different types of models exist. PMID:22489428

  6. Efficient Compression of High Resolution Climate Data

    NASA Astrophysics Data System (ADS)

    Yin, J.; Schuchardt, K. L.

    2011-12-01

    High-resolution climate data can be massive. Such data can consume a huge amount of disk space for storage, incur significant overhead for outputting data during simulation, introduce high latency for visualization and analysis, and may even make interactive visualization and analysis impossible given the limit on the data that a conventional cluster can handle. These problems can be alleviated with effective and efficient data compression techniques. Even though the HDF5 format supports compression, previous work has mainly focused on employing traditional general-purpose compression schemes such as dictionary coders and block-sorting schemes. Those schemes focus on encoding repeated byte sequences efficiently and are not well suited to compressing climate data, which consist mainly of distinct floating-point numbers. We plan to select and customize our compression schemes according to the characteristics of high-resolution climate data. One observation on high-resolution climate data is that, as the resolution becomes higher, the values of climate variables such as temperature and pressure become closer in nearby cells. This provides excellent opportunities for prediction-based compression schemes. We have performed a preliminary estimation of compression ratios with a simple prediction-based scheme in which we compute the difference between the current floating-point number and the previous one, and then encode the exponent and significand of the difference with an entropy-based compression scheme. Our results show that we can achieve compression ratios between 2 and 3 in lossless compression, which is significantly higher than traditional compression algorithms. We have also developed lossy compression with our techniques. We can achieve orders of magnitude data reduction while ensuring error bounds. Moreover, our compression scheme is much more efficient and introduces much less overhead
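
    A minimal sketch of the described prediction step, assuming IEEE-754 double-precision values; the entropy-coding stage is omitted and the variable names are placeholders rather than the authors' implementation:

```python
import numpy as np

def split_fields(values):
    """Prediction step: difference consecutive floats, then split the IEEE-754
    fields of the residuals so an entropy coder can exploit their skewed
    distributions (the entropy-coding stage itself is omitted here)."""
    residuals = np.diff(values.astype(np.float64), prepend=values[0])
    bits = residuals.view(np.uint64)
    sign     = (bits >> 63) & 0x1
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

# nearby cells in high-resolution climate fields hold similar values, so the
# residual exponents cluster tightly and compress well
temps = np.linspace(288.0, 288.5, 1_000_000) + 0.01 * np.sin(np.arange(1_000_000))
sign, exponent, mantissa = split_fields(temps)
print(len(np.unique(exponent)), "distinct residual exponents")
```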

  7. Data compression in digitized lines

    SciTech Connect

    Thapa, K. )

    1990-04-01

    The problem of data compression is very important in digital photogrammetry, computer assisted cartography, and GIS/LIS. In addition, it is also applicable in many other fields such as computer vision, image processing, pattern recognition, and artificial intelligence. Consequently, there are many algorithms available to solve this problem but none of them are considered to be satisfactory. In this paper, a new method of finding critical points in a digitized curve is explained. This technique, based on the normalized symmetric scattered matrix, is good for both critical points detection and data compression. In addition, the critical points detected by this algorithm are compared with those by zero-crossings. 8 refs.

  8. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737

  9. Lossy Compression of ACS images

    NASA Astrophysics Data System (ADS)

    Cox, Colin

    2004-01-01

    A method of compressing images stored as floating point arrays was proposed several years ago by White and Greenfield. With the increased image sizes encountered in the last few years and the consequent need to distribute large data volumes, the value of applying such a procedure has become more evident. Methods such as this which offer significant compression ratios are lossy and there is always some concern that statistically important information might be discarded. Several astronomical images have been analyzed and, in the examples tested, compression ratios of about six were obtained with no significant information loss.

  10. Compression and venous ulcers.

    PubMed

    Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M

    2013-03-01

    Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy of venous leg ulcers. However, to date medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the amount of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538

  11. Compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2014-07-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern, thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212
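
    To illustrate the sparse formulation, the sketch below builds a steering-vector dictionary for an assumed half-wavelength uniform line array and recovers the directions with greedy orthogonal matching pursuit instead of the convex l1 solver used in the paper; all sizes and names are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms (candidate DOAs) that
    best explain the single-snapshot measurement y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# uniform line array: M sensors, a 1-degree grid of candidate DOAs, one snapshot
M, angles = 16, np.deg2rad(np.arange(-90, 91))
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # steering dictionary
y = A[:, 100] + A[:, 130]                                         # two coherent arrivals
print(np.flatnonzero(np.abs(omp(A, y, 2)) > 1e-6))                # recovered grid indices
```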

  12. Compression and Entrapment Syndromes

    PubMed Central

    Heffernan, L.P.; Benstead, T.J.

    1987-01-01

    Family physicians are often confronted by patients who present with pain, numbness and weakness. Such complaints, when confined to a single extremity, most particularly to a restricted portion of the extremity, may indicate focal dysfunction of peripheral nerve structures arising from compression and/or entrapment, to which such nerves are selectively vulnerable. The authors of this article consider the paramount clinical features that allow the clinician to arrive at a correct diagnosis, review major points in differential diagnosis, and suggest appropriate management strategies. PMID:21263858

  13. Data Compression for Helioseismology

    NASA Astrophysics Data System (ADS)

    Löptien, Björn

    2015-10-01

    Efficient data compression will play an important role for several upcoming and planned space missions involving helioseismology, such as Solar Orbiter. Solar Orbiter, to be launched in October 2018, will be the next space mission involving helioseismology. The main characteristic of Solar Orbiter lies in its orbit. The spacecraft will have an inclined solar orbit, reaching a solar latitude of up to 33 deg. This will allow, for the first time, probing the solar poles using local helioseismology. In addition, combined observations of Solar Orbiter and another helioseismic instrument will be used to study the deep interior of the Sun using stereoscopic helioseismology. The Doppler velocity and continuum intensity images of the Sun required for helioseismology will be provided by the Polarimetric and Helioseismic Imager (PHI). Major constraints for helioseismology with Solar Orbiter are the low telemetry and the (probably) short observing time. In addition, helioseismology of the solar poles requires observations close to the solar limb, even from the inclined orbit of Solar Orbiter. This gives rise to systematic errors. In this thesis, I derived a first estimate of the impact of lossy data compression on helioseismology. I put special emphasis on the Solar Orbiter mission, but my results are applicable to other planned missions as well. First, I studied the performance of PHI for helioseismology. Based on simulations of solar surface convection and a model of the PHI instrument, I generated a six-hour time-series of synthetic Doppler velocity images with the same properties as expected for PHI. Here, I focused on the impact of the point spread function, the spacecraft jitter, and of the photon noise level. The derived power spectra of solar oscillations suggest that PHI will be suitable for helioseismology. The low telemetry of Solar Orbiter requires extensive compression of the helioseismic data obtained by PHI. I evaluated the influence of data compression using

  14. The Forgotten: Identification and Functional Characterization of MHC Class II Molecules H2-Eb2 and RT1-Db2.

    PubMed

    Monzón-Casanova, Elisa; Rudolf, Ronald; Starick, Lisa; Müller, Ingrid; Söllner, Christian; Müller, Nora; Westphal, Nico; Miyoshi-Akiyama, Tohru; Uchiyama, Takehiko; Berberich, Ingolf; Walter, Lutz; Herrmann, Thomas

    2016-02-01

    In this article, we report the complete coding sequence and to our knowledge, the first functional analysis of two homologous nonclassical MHC class II genes: RT1-Db2 of rat and H2-Eb2 of mouse. They differ in important aspects compared with the classical class II β1 molecules: their mRNA expression by APCs is much lower, they show minimal polymorphism in the Ag-binding domain, and they lack N-glycosylation and the highly conserved histidine 81. Also, their cytoplasmic region is completely different and longer. To study and compare them with their classical counterparts, we transduced them in different cell lines. These studies show that they can pair with the classical α-chains (RT1-Da and H2-Ea) and are expressed at the cell surface where they can present superantigens. Interestingly, compared with the classical molecules, they have an extraordinary capacity to present the superantigen Yersinia pseudotuberculosis mitogen. Taken together, our findings suggest that the b2 genes, together with the respective α-chain genes, encode for H2-E2 or RT1-D2 molecules, which could function as Ag-presenting molecules for a particular class of Ags, as modulators of Ag presentation like nonclassical nonpolymorphic class II molecules DM and DO do, or even as players outside the immune system. PMID:26740108

  15. Selfsimilar Spherical Compression Waves in Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-ter-Vehn, J.; Schalk, C.

    1982-08-01

    A synopsis of different selfsimilar spherical compression waves is given pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterise the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves

  16. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  17. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

    The aim of this project is to develop low-dimensional parametric (deterministic) models of complex networks, using compressive sensing (CS) and multiscale analysis and exploiting the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of the graph; it is very general, and so not very efficient or customized. Other model-based CS techniques exist but have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.

  18. Learning in compressed space.

    PubMed

    Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank

    2013-06-01

    We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
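
    A minimal numpy sketch of the equivalence noted above, assuming a single tanh unit whose weight vector is represented as a fixed random projection Phi times a short learned vector alpha; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer with n_in inputs normally learns a weight vector w of length n_in.
# Instead we learn a short vector alpha and set w = Phi @ alpha, where Phi is a
# fixed random projection -- equivalent to projecting the layer's input down to
# len(alpha) dimensions and learning in that compressed space.
n_in, n_compressed = 1000, 50
Phi = rng.standard_normal((n_in, n_compressed)) / np.sqrt(n_compressed)

def layer_output(x, alpha):
    # (x @ Phi) @ alpha == x @ (Phi @ alpha): compressing weights equals compressing input
    return np.tanh(x @ Phi @ alpha)

# one gradient step on a squared loss, taken in the compressed parameter space
x, target = rng.standard_normal(n_in), 0.5
alpha = np.zeros(n_compressed)
y = layer_output(x, alpha)
grad = (y - target) * (1 - y**2) * (x @ Phi)   # backprop through tanh and the projection
alpha -= 0.1 * grad
```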

  19. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  20. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulation the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity, and the driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and was negligibly influenced by compressibility. Growth of the Mach number showed (1) first a reduction and then an enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) an intermittency of the scalar that is first stronger and then weaker relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. Visualization of the scalar dissipation showed that in the solenoidally forced flow the field was filled with small-scale, highly convoluted structures, while in the compressively forced flow the field exhibited regions dominated by large-scale motions of rarefaction and compression.

  1. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  2. Data compression preserving statistical independence

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.; Rice, W. M.

    1973-01-01

    The purpose of this study was to determine the optimum points of evaluation of data compressed by means of polynomial smoothing. It is shown that a set Y of m statistically independent observations Y(t_1), Y(t_2), ..., Y(t_m) of a quantity X(t), which can be described by an (n-1)th degree polynomial in time, may be represented by a set Z of n statistically independent compressed observations Z(tau_1), Z(tau_2), ..., Z(tau_n), such that the compressed set Z has the same information content as the observed set Y. The times tau_1, tau_2, ..., tau_n are the zeros of an nth degree polynomial P_n, to whose definition and properties the bulk of this report is devoted. The polynomials P_n are defined as functions of the observation times t_1, t_2, ..., t_n, and it is interesting to note that if the observation times are continuously distributed, the polynomials P_n degenerate to Legendre polynomials. The proposed data compression scheme is a little more complex than those usually employed, but has the advantage of preserving all the information content of the original observations.

  3. Lossy Text Compression Techniques

    NASA Astrophysics Data System (ADS)

    Palaniappan, Venka; Latifi, Shahram

    Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression used in critical application areas. For non-critical applications, we could use lossy text compression to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithm are also made. Finally, some ideas for further improving the performance already obtained are proposed.
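
    As an illustration of the Dropped Vowels (DOV) idea, here is a hedged sketch that keeps the first and last letters of each word and drops interior vowels; the paper's exact transformation rules may differ:

```python
import re

def dropped_vowels(text):
    """DOV-style lossy transform: drop interior vowels while keeping word
    boundaries, leaving enough context for a reader to reinflate the text.
    The precise rules of the paper are assumed here."""
    def squeeze(word):
        if len(word) <= 2:
            return word
        return word[0] + re.sub(r"[aeiouAEIOU]", "", word[1:-1]) + word[-1]
    return " ".join(squeeze(w) for w in text.split())

print(dropped_vowels("Most text documents contain a large amount of redundancy"))
# -> "Mst txt dcmnts cntn a lrge amnt of rdndncy"
```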

  4. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  5. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings. PMID:23715317

  6. Compressive wideband microwave radar holography

    NASA Astrophysics Data System (ADS)

    Wilson, Scott A.; Narayanan, Ram M.

    2014-05-01

    Compressive sensing has emerged as a topic of great interest for radar applications requiring large amounts of data storage. Typically, full sets of data are collected at the Nyquist rate only to be compressed at some later point, where information-bearing data are retained and inconsequential data are discarded. However, under sparse conditions, it is possible to collect data at random sampling intervals less than the Nyquist rate and still gather enough meaningful data for accurate signal reconstruction. In this paper, we employ sparse sampling techniques in the recording of digital microwave holograms over a two-dimensional scanning aperture. Using a simple and fast non-linear interpolation scheme prior to image reconstruction, we show that the reconstituted image quality is well-retained with limited perceptual loss.

  7. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  8. Multishock Compression Properties of Warm Dense Argon.

    PubMed

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm(3) from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from the first to the fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second shocked state under different experimental conditions: ηi' increases with pressure in the lower-density regime and conversely decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  9. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
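
    A rough sketch of the scaled-integer quantization idea, assuming the quantization step is tied to a robust noise estimate; the function names are illustrative, and gzip is used here only to get a ballpark ratio, whereas the fpack/funpack tools described in the paper use Rice coding:

```python
import gzip
import numpy as np

def quantize(image, q):
    """Scale float pixels so the quantization step is roughly 1/q of the
    estimated noise sigma, then round to integers. This mirrors the idea of
    quantizing pixel values as scaled integers, not fpack's exact algorithm."""
    sigma = np.median(np.abs(np.diff(image, axis=1))) / (0.6745 * np.sqrt(2))  # robust noise estimate
    scale = q / max(sigma, 1e-12)
    return np.round(image * scale).astype(np.int32), scale

rng = np.random.default_rng(1)
img = rng.normal(1000.0, 5.0, size=(512, 512)).astype(np.float32)   # noisy float image
ints, scale = quantize(img, q=4)
ratio = img.nbytes / len(gzip.compress(ints.tobytes()))
print(f"compression ratio ~{ratio:.1f}")
```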

  10. Selecting a general-purpose data compression algorithm

    NASA Technical Reports Server (NTRS)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and what compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package that would be applicable to other software packages with similar data compression needs.

  11. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  12. Texture Studies and Compression Behaviour of Apple Flesh

    NASA Astrophysics Data System (ADS)

    James, Bryony; Fonseca, Celia

    Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.

  13. Selfsimilar spherical compression waves in gas dynamics

    NASA Astrophysics Data System (ADS)

    Meyer-Ter-Vehn, J.; Schalk, C.

    1982-05-01

    A synopsis of different selfsimilar spherical compression waves is given pointing out their fundamental importance for the gas dynamics of inertial confinement fusion. Strong blast waves, various forms of isentropic compression waves, imploding shock waves and the solution for non-isentropic collapsing hollow spheres are included. A classification is given in terms of six singular points which characterize the different solutions and the relations between them. The presentation closely follows Guderley's original work on imploding shock waves.

  14. EEG data compression techniques.

    PubMed

    Antoniol, G; Tonella, P

    1997-02-01

    In this paper, electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reduction in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEG's in a compressed format. PMID:9214790
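
    To see why derivative computation helps, the sketch below estimates the average Huffman code length of a synthetic slowly varying signal before and after first differencing; it is a proxy for the gain reported in the paper, not a reimplementation of its encoder:

```python
import heapq
from collections import Counter
import numpy as np

def huffman_bits_per_symbol(samples):
    """Average code length (bits/sample) of a Huffman code built on the sample
    histogram -- a quick proxy for the achievable lossless compression."""
    counts = Counter(samples.tolist())
    heap = [(c, i, [s]) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in counts}
    while len(heap) > 1:
        c1, _, s1 = heapq.heappop(heap)
        c2, i, s2 = heapq.heappop(heap)
        for s in s1 + s2:            # every merge adds one bit to the contained symbols
            lengths[s] += 1
        heapq.heappush(heap, (c1 + c2, i, s1 + s2))
    total = sum(counts.values())
    return sum(counts[s] * lengths[s] for s in counts) / total

# synthetic slowly varying integer signal standing in for an EEG channel
t = np.arange(20000)
eeg = np.round(200 * np.sin(2 * np.pi * t / 500)
               + np.random.default_rng(0).normal(0, 3, t.size)).astype(int)
raw_bits = huffman_bits_per_symbol(eeg)
diff_bits = huffman_bits_per_symbol(np.diff(eeg))      # derivative first, then Huffman
print(f"{raw_bits:.2f} bits/sample raw vs {diff_bits:.2f} after differencing")
```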

  15. Boson core compressibility

    NASA Astrophysics Data System (ADS)

    Khorramzadeh, Y.; Lin, Fei; Scarola, V. W.

    2012-04-01

    Strongly interacting atoms trapped in optical lattices can be used to explore phase diagrams of Hubbard models. Spatial inhomogeneity due to trapping typically obscures distinguishing observables. We propose that measures using boson double occupancy avoid trapping effects to reveal two key correlation functions. We define a boson core compressibility and core superfluid stiffness in terms of double occupancy. We use quantum Monte Carlo on the Bose-Hubbard model to empirically show that these quantities intrinsically eliminate edge effects to reveal correlations near the trap center. The boson core compressibility offers a generally applicable tool that can be used to experimentally map out phase transitions between compressible and incompressible states.

  16. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  17. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.

  18. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  19. Military Data Compression Standard

    NASA Astrophysics Data System (ADS)

    Winterbauer, C. E.

    1982-07-01

    A facsimile interoperability data compression standard is being adopted by the U.S. Department of Defense and other North Atlantic Treaty Organization (NATO) countries. This algorithm has been shown to perform quite well in a noisy communication channel.

  20. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  1. Focus on Compression Stockings

    MedlinePlus

    ... sion apparel is used to prevent or control edema The post-thrombotic syndrome (PTS) is a complication ( ... complication. abdomen. This swelling is referred to as edema. If you have edema, compression therapy may be ...

  2. Compressible Astrophysics Simulation Code

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  3. Similarity by compression.

    PubMed

    Melville, James L; Riley, Jenna F; Hirst, Jonathan D

    2007-01-01

    We present a simple and effective method for similarity searching in virtual high-throughput screening, requiring only a string-based representation of the molecules (e.g., SMILES) and standard compression software, available on all modern desktop computers. This method utilizes the normalized compression distance, an approximation of the normalized information distance, based on the concept of Kolmogorov complexity. On representative data sets, we demonstrate that compression-based similarity searching can outperform standard similarity searching protocols, exemplified by the Tanimoto coefficient combined with a binary fingerprint representation and data fusion. Software to carry out compression-based similarity is available from our Web site at http://comp.chem.nottingham.ac.uk/download/zippity. PMID:17238245
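
    A minimal sketch of the normalized compression distance, with zlib standing in for the compressor; the SMILES strings are illustrative only, and for strings this short the compressor's fixed overhead makes the numbers indicative rather than meaningful similarity scores:

```python
import zlib

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings, approximated with
    zlib as the compressor: NCD(a, b) = (C(ab) - min(C(a), C(b))) / max(C(a), C(b))."""
    c = lambda s: len(zlib.compress(s.encode()))
    ca, cb, cab = c(a), c(b), c(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# toy example: two related purine alkaloids vs. an unrelated alkane
caffeine    = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
theobromine = "CN1C=NC2=C1C(=O)NC(=O)N2C"
hexadecane  = "C" * 16
print(ncd(caffeine, theobromine), ncd(caffeine, hexadecane))
```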

  4. Simulation and Modeling of Homogeneous, Compressed Turbulence.

    NASA Astrophysics Data System (ADS)

    Wu, Chung-Teh

    Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression has been simulated by numerically solving the Navier-Stokes equations. The numerical simulations were carried out on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second -order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one -dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. It was found that the ratio of the turbulence time scale to the mean-flow time scale is an important parameter in these flows. When this ratio is large, the flow is immediately affected by the mean strain in a manner similar to that predicted by rapid distortion theory. When this ratio is small, the flow retains the character of decaying isotropic turbulence initially; only after the strain has been applied for a long period does the flow accumulate a significant reflection of the effect of mean strain. In these flows, the Kolmogorov length scale decreases rapidly with increasing total strain, due to the density increase that accompanies compression. Results from the simulated flow fields were used to test one-point-closure, two-equation turbulence models. The two-equation models perform well only when the compression rate is small compared to the eddy turn-over rate. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.

  5. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
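
    A rough sketch of the fill step, assuming a crude gradient-based edge mask and plain Jacobi relaxation in place of the multi-grid solver described in the patent; all names are illustrative:

```python
import numpy as np

def fill_by_laplace(edge_values, edge_mask, n_iter=2000):
    """Fill non-edge pixels by relaxing Laplace's equation while holding edge
    pixels fixed at their image values (Jacobi iteration stands in for the
    patent's multi-grid solver)."""
    filled = np.where(edge_mask, edge_values, edge_values[edge_mask].mean())
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, edge_values, avg)   # keep edge pixels pinned
    return filled

# toy image: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
grad = np.hypot(*np.gradient(img))
mask = grad > 10                       # crude edge detector, for the sketch only
smooth = fill_by_laplace(img, mask)
difference = img - smooth              # the difference array is what gets coded separately
```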

  6. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  7. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 band width-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  8. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  9. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
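
    To illustrate the integer-transform idea (though not the specific 8-point ICT derived in the article), here is a 4-point transform whose rows contain only small integers and are mutually orthogonal, so the forward pass uses integer arithmetic alone and the inverse reduces to the transpose plus a per-row scale:

```python
import numpy as np

# Rows are mutually orthogonal integer approximations of the DCT-II basis.
# This is an illustration of the integer-transform idea, not the 8-point ICT
# analyzed in the article.
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def forward(block):
    """2-D forward transform using only integer multiplies and adds."""
    return T @ block @ T.T

def inverse(coeffs):
    """Inverse transform: undo the row norms, then apply the transpose."""
    scale = np.sum(T * T, axis=1)      # row norms: [4, 10, 4, 10]
    return T.T @ (coeffs / np.outer(scale, scale)) @ T

block = np.arange(16).reshape(4, 4).astype(np.int64)
assert np.allclose(inverse(forward(block)), block)
```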

  10. Simulation and modeling of homogeneous, compressed turbulence

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.

    1985-01-01

    Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression was simulated by numerically solving the Navier-Stokes equations. The numerical simulations were performed on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second-order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one-dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. Results from the simulated flow fields were used to test one-point closure, two-equation models. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.

  11. Simulation and modeling of homogeneous, compressed turbulence

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.

    1985-05-01

    Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression was simulated by numerically solving the Navier-Stokes equations. The numerical simulations were performed on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second-order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one-dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. Results from the simulated flow fields were used to test one-point closure, two-equation models. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.

  12. Parallel image compression circuit for high-speed cameras

    NASA Astrophysics Data System (ADS)

    Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru

    2005-02-01

    In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 x 4-point 2-dimensional DCT using a distributed arithmetic (DA) method, zigzag scanning of 4 blocks of the 2-D DCT coefficients, and 1-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with that of the JPEG algorithm. It is found that the proposed compression circuits require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of images reconstructed with the proposed encoding method is better than that of JPEG in the low compression ratio region.

  13. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method that uses a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people), and those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins. The smartphone group used two smartphone products (G, i) running the Android and iOS operating systems (OS). Measurements were conducted on September 25-26, 2012, and the data were analyzed with the SPSS WIN 12.0 program. The traditional group achieved a more appropriate compression depth (53.77 mm) than the smartphone group (48.35 mm) (p< 0.01), and a higher proportion of proper chest compressions (73.96% vs. 60.51%; p< 0.05). The traditional group also reported higher awareness of chest compression accuracy (3.83 points vs. 2.32 points; p< 0.001). In an additional question given only to the smartphone group, the main reasons cited against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%). PMID:24704648

  14. Evaluation of the tactical utility of compressed imagery

    NASA Astrophysics Data System (ADS)

    Irvine, John M.; Eckstein, Barbara A.; Hummel, Robert A.; Peters, Richard J.; Ritzel, Rhonda L.

    2002-06-01

    The effects of compression on image utility are assessed based on manual exploitation performed by military imagery analysts (IAs). The original, uncompressed synthetic aperture radar imagery and compressed products are rated for the Radar National Imagery Interpretability Rating Scale (NIIRS), image features and sensor artifacts, and target detection and recognition. Images were compressed via standard JPEG compression, single-scale intelligent bandwidth compression (IBC), and wavelet/trellis-coded quantization (W/TCQ) at 50-to-1 and 100-to-1 ratios. We find that the utility of the compressed imagery differs only slightly from the uncompressed imagery, with the exception of the JPEG products. Otherwise, both the 50-to-1 and 100-to-1 compressed imagery appear similar in terms of image quality. Radar NIIRS indicates that even 100-to-1 compression using IBC or W/TCQ has minimal impact on imagery intelligence value. A slight loss in performance occurs for vehicle counting and identification tasks. These findings suggest that both single-scale IBC and W/TCQ compression techniques have matured to a point that they could provide value to the tactical user. Additional assessments may verify the practical limits of compression for synthetic aperture radar (SAR) data and address the transition to a field environment.

  15. Wavelet compression of medical imagery.

    PubMed

    Reiter, E

    1996-01-01

    Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
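
    A minimal illustration of the sparsity the abstract refers to, assuming the PyWavelets (pywt) package is available: most wavelet coefficients of a smooth image are negligibly small, which is what produces the long zero runs exploited by later coding stages.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Smooth synthetic "image": its wavelet coefficients should be sparse.
x = np.linspace(0, 1, 256)
image = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

# Two-level 2-D wavelet decomposition.
coeffs = pywt.wavedec2(image, wavelet='db4', level=2)
flat, _ = pywt.coeffs_to_array(coeffs)

# Fraction of coefficients that are negligibly small: these near-zero runs
# are what downstream entropy coding compresses very efficiently.
threshold = 1e-3 * np.abs(flat).max()
print("near-zero fraction:", np.mean(np.abs(flat) < threshold))
```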

  16. Transverse Compression of Tendons.

    PubMed

    Samuel Salisbury, S T; Paul Buckley, C; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  17. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  18. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (η_i = ρ_i/ρ_0) is greatly enhanced from 3.3 to 8.8, where ρ_0 is the initial density of argon and ρ_i (i = 1, 2, 3, 4) is the compressed density from the first to the fourth shock, respectively. For the relative compression ratio (η_i' = ρ_i/ρ_{i-1}), an interesting finding is that a turning point occurs at the second shocked state under the conditions of different experiments: η_i' increases with pressure in the lower-density regime and, conversely, decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505

  19. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
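
    The toolbox itself is MATLAB code; as a small language-neutral sketch of one of the equation families it covers, the standard isentropic-flow relations for a perfect gas can be written as follows (textbook formulas, not the toolbox's API).

```python
def isentropic_ratios(mach, gamma=1.4):
    """Stagnation-to-static ratios for isentropic flow of a perfect gas."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2      # T0 / T
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))          # p0 / p
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))          # rho0 / rho
    return t_ratio, p_ratio, rho_ratio

# Example: Mach 2 flow of air (gamma = 1.4).
print(isentropic_ratios(2.0))   # (1.8, ~7.82, ~4.35)
```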

  20. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  1. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence modeling to correctly handle natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For specific compressible unsteady turbulent flows, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.

  2. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kb, respectively. When the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, in the case of molybdenum, tensile tests of the recovered samples demonstrated beneficial ultimate tensile strengths.

  3. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator, in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe was removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the Teflon insulation properties are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω·cm)^-1, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe or to measure the conductivity without a probe are being sought.

  4. Underwing compression vortex attenuation device

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr. (Inventor)

    1993-01-01

    A vortex attenuation device is presented which dissipates a lift-induced vortex generated by a lifting aircraft wing. The device consists of a positive pressure gradient producing means in the form of a compression panel attached to the lower surface of the wing and facing perpendicular to the airflow across the wing. The panel is located between the midpoint of the local wing chord and the trailing edge in the chordwise direction and at a point which is approximately 55 percent of the wing span as measured from the fuselage center line in the spanwise direction. When deployed in flight, this panel produces a positive pressure gradient aligned with the final roll-up of the total vortex system which interrupts the axial flow in the vortex core and causes the vortex to collapse.

  5. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester of 1995, Southern Arkansas University-Magnolia (SAU-M) began a two-semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  6. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…
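
    A minimal Python sketch of the two ideas mentioned above, squeezing data and packing related files into one archive, using only the standard library (file names here are illustrative).

```python
import zlib
import zipfile

# Compress a block of text in memory and compare sizes.
text = ("This sentence repeats itself. " * 100).encode("utf-8")
packed = zlib.compress(text, 9)
print(len(text), "->", len(packed), "bytes")

# Pack several related files into a single compressed archive.
with zipfile.ZipFile("bundle.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "Files travel together in one archive.")
    zf.writestr("notes.txt", text)
```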

  7. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
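
    The abstract does not give the exact NFC mapping used in the hearing aids; the sketch below assumes one common formulation in which frequencies above the cutoff are compressed on a logarithmic scale by the compression ratio, purely as an illustration of how the two parameters interact.

```python
def nfc_map(freq_hz, cutoff_hz=2000.0, ratio=2.0):
    """Map an input frequency to its frequency-compressed output.

    Frequencies below the cutoff pass through unchanged; above it, the
    frequency is compressed on a logarithmic scale by `ratio` (one common
    formulation; the commercial algorithm may differ).
    """
    if freq_hz <= cutoff_hz:
        return freq_hz
    return cutoff_hz * (freq_hz / cutoff_hz) ** (1.0 / ratio)

for f in (1000, 2000, 4000, 8000):
    print(f, "->", round(nfc_map(f), 1))
```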

  8. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the decision to rent versus purchase to be looked at on a regional or project-by-project basis.

  9. Improved compression molding process

    NASA Technical Reports Server (NTRS)

    Heier, W. C.

    1967-01-01

    Modified compression molding process produces plastic molding compounds that are strong, homogeneous, free of residual stresses, and have improved ablative characteristics. The conventional method is modified by applying a vacuum to the mold during the molding cycle, using a volatile sink, and exercising precise control of the mold closure limits.

  10. Energy Transfer and Triadic Interactions in Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, F.; Zhou, Ye; Bertoglio, Jean-Pierre

    1997-01-01

    Using a two-point closure theory, the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) approximation, we have investigated the energy transfer process and triadic interactions of compressible turbulence. In order to analyze the compressible mode directly, the Helmholtz decomposition is used. The following issues were addressed: (1) What is the mechanism of energy exchange between the solenoidal and compressible modes, and (2) Is there an energy cascade in the compressible energy transfer process? It is concluded that the compressible energy is transferred locally from the solenoidal part to the compressible part. It is also found that there is an energy cascade of the compressible mode for high turbulent Mach number (M(sub t) greater than or equal to 0.5). Since we assume that the compressibility is weak, the magnitude of the compressible (radiative or cascade) transfer is much smaller than that of solenoidal cascade. These results are further confirmed by studying the triadic energy transfer function, the most fundamental building block of the energy transfer.
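
    The Helmholtz decomposition mentioned above can be carried out in Fourier space for a periodic field; a minimal NumPy sketch for a 2-D velocity field is shown below (the EDQNM closure itself is not implemented here).

```python
import numpy as np

def helmholtz_decompose(u, v):
    """Split a periodic 2-D velocity field into solenoidal and compressible parts.

    In Fourier space the compressible (dilatational) part is the projection of
    the velocity onto the wavevector; the solenoidal part is the remainder.
    """
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid division by zero for the mean mode

    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = (kx * uh + ky * vh) / k2      # (k . u_hat) / |k|^2
    uc, vc = np.fft.ifft2(kx * div).real, np.fft.ifft2(ky * div).real
    return u - uc, v - vc, uc, vc       # solenoidal, then compressible components

u = np.random.randn(64, 64)
v = np.random.randn(64, 64)
us, vs, uc, vc = helmholtz_decompose(u, v)
```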

  11. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  12. Mosaic image compression

    NASA Astrophysics Data System (ADS)

    Chaudhari, Kapil A.; Reeves, Stanley J.

    2005-02-01

    Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
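
    A toy sketch of the front end of such a scheme: the mosaic is separated into its color sample planes and a residual is formed against a crude approximation. The JPEG coding of the intermediate image is replaced here by simple averaging as a stand-in, so this illustrates the structure rather than the paper's exact algorithm.

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Separate an RGGB Bayer mosaic into its R, G1, G2, B sample planes."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b

mosaic = np.random.randint(0, 256, (8, 8)).astype(np.int64)
r, g1, g2, b = split_bayer_rggb(mosaic)

# Stand-in for the "coded" intermediate image: the green planes are simply
# averaged here (the paper instead codes a full-size color image with JPEG).
g_coded = (g1 + g2) // 2

# Residuals between the original green samples and the coded approximation;
# in the paper these residuals are entropy-coded to make the scheme lossless.
residual_g1 = g1 - g_coded
residual_g2 = g2 - g_coded
print(residual_g1)
```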

  13. Tipping Point

    MedlinePlus Videos and Cool Tools

    Tipping Point, by CPSC Blogger (September 22). A falling TV hits with about the same force as a child falling from the third story of a building.

  14. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
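
    As a generic illustration of compressive sensing inversion (not the coded-aperture video pipeline of the paper), the following toy example recovers a sparse vector from random Gaussian measurements using a few hundred iterations of ISTA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground-truth signal: 256 samples, 8 nonzeros.
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Compressive measurements y = A x with a random Gaussian sensing matrix.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Iterative shrinkage-thresholding (ISTA) for the l1-regularized inversion.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```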

  15. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to add information progressively, resulting in a tradeoff between compression and reconstruction quality. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of gradually adding samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions from data compressed at a ratio of 1:20.

  16. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  17. Data compression for speckle correlation interferometry temporal fringe pattern analysis

    SciTech Connect

    Tuck Wah Ng; Kar Tien Ang

    2005-05-01

    Temporal fringe pattern analysis is gaining prominence in speckle correlation interferometry, in particular for transient phenomena studies. This form of analysis, nevertheless, necessitates large data storage. Current compression schemes do not facilitate efficient data retrieval and may even result in important data loss. We describe a novel compression scheme that does not result in crucial data loss and allows for the efficient retrieval of data for temporal fringe analysis. In sample tests with digital speckle interferometry on fringe patterns of a plate and of a cantilever beam subjected to temporal phase and load evolution, respectively, we achieved a compression ratio of 1.6 without filtering out any data from discontinuous and low fringe modulation spatial points. By eliminating 38% of the data from discontinuous and low fringe modulation spatial points, we attained a significant compression ratio of 2.4.

  18. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.
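
    The abstract does not spell out the closed form; the widely cited universal (Vinet) equation of state associated with these authors is commonly written as P = 3*B0*(1-x)/x^2 * exp[1.5*(B0'-1)*(1-x)] with x = (V/V0)^(1/3). A small sketch of that form follows; treat the attribution of this exact expression to the article as an assumption.

```python
import math

def vinet_pressure(v, v0, b0, b0_prime):
    """Pressure from the universal (Vinet) equation of state.

    v        : compressed volume
    v0       : zero-pressure volume
    b0       : isothermal bulk modulus at zero pressure
    b0_prime : pressure derivative of the bulk modulus at zero pressure
    """
    x = (v / v0) ** (1.0 / 3.0)
    eta = 1.5 * (b0_prime - 1.0)
    return 3.0 * b0 * (1.0 - x) / x ** 2 * math.exp(eta * (1.0 - x))

# Example: 10 % volume compression of a solid with B0 = 160 GPa, B0' = 4.
print(vinet_pressure(0.9, 1.0, 160.0, 4.0), "GPa")
```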

  19. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment that involved stretching wire: what would happen if we compressed something else? We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1

  20. An isentropic compression heated Ludwieg tube transient wind tunnel

    NASA Technical Reports Server (NTRS)

    Magari, Patrick J.; Lagraff, John E.

    1988-01-01

    Syracuse University's Ludwieg tube with isentropic compression facility is a transient wind tunnel employing a piston drive that incorporates isentropic compression heating of the test gas located ahead of a piston. The facility is well-suited for experimental investigations concerning supersonic and subsonic vehicles over a wide range of pressures, Reynolds numbers, and temperatures; all three parameters can be almost independently controlled. Work at the facility currently includes wake-induced stagnation point heat transfer and supersonic boundary layer transition.

  1. A compressive failure model for anisotropic plates with a cutout under compressive and shear loads

    NASA Technical Reports Server (NTRS)

    Gurdal, Z.; Haftka, R. T.

    1986-01-01

    The paper introduces a failure model for laminated composite plates with a cutout under combined compressive and shear loads. The model is based on kinking failure of the load-carrying fibers around a cutout, and includes the effect of local shearing and compressive stresses. Comparison of predictions of the model with available experimental results for quasi-isotropic and orthotropic plates with a circular hole indicated good agreement. Predictions for orthotropic plates under combined loading are compared with the predictions of a point-stress model. The present model indicates significant reductions in axial load-carrying capacity due to shearing loads for plates with the principal axis of orthotropy oriented along the axial load direction. A gain in strength is achieved by rotating the axis of orthotropy to counteract the shearing stress, or by eliminating the compressive-shear deformation coupling.

  2. Investigation into the geometric consequences of processing substantially compressed images

    NASA Astrophysics Data System (ADS)

    Tempelmann, Udo; Nwosu, Zubbi; Zumbrunn, Roland M.

    1995-07-01

    One of the major driving forces behind digital photogrammetric systems is the continued drop in the cost of digital storage systems. However, terrestrial remote sensing systems continue to generate enormous volumes of data due to smaller pixels, larger coverage, and increased multispectral and multitemporal possibilities. Sophisticated compression algorithms have been developed, but the reduced visual quality of their output, which impedes object identification, and the resulting geometric deformation have been limiting factors in employing compression. Compression and decompression time is also an issue, but of less importance because of off-line possibilities. Two typical image blocks have been selected: one sub-block from a SPOT image, and an image of industrial targets taken with an off-the-shelf CCD. Three common compression algorithms have been chosen: JPEG, wavelet, and fractal. The images are run through the compression/decompression cycle, with parameters chosen to cover the whole range of available compression ratios. Points are identified on these images and their locations are compared against those in the originals. These results are presented to assist the choice of compression facilities, weighing metric quality against storage availability. Fractals offer the best visual quality, but JPEG, closely followed by wavelets, imposes fewer geometric defects. JPEG seems to offer the best all-around performance when geometric quality, visual quality, and compression/decompression speed are all considered.

  3. Piston reciprocating compressed air engine

    SciTech Connect

    Cestero, L.G.

    1987-03-24

    A compressed air engine is described comprising: (a). a reservoir of compressed air, (b). two power cylinders each containing a reciprocating piston connected to a crankshaft and flywheel, (c). a transfer cylinder which communicates with each power cylinder and the reservoir, and contains a reciprocating piston connected to the crankshaft, (d). valve means controlled by rotation of the crankshaft for supplying compressed air from the reservoir to each power cylinder and for exhausting compressed air from each power cylinder to the transfer cylinder, (e). valve means controlled by rotation of the crankshaft for supplying from the transfer cylinder to the reservoir compressed air supplied to the transfer cylinder on the exhaust strokes of the pistons of the power cylinders, and (f). an externally powered fan for assisting the exhaust of compressed air from each power cylinder to the transfer cylinder and from there to the compressed air reservoir.

  4. Low bit-rate efficient compression for seismic data.

    PubMed

    Averbuch, A Z; Meyer, R; Stromberg, J O; Coifman, R; Vassiliou, A

    2001-01-01

    adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. All the described methods cover a wide range of different data sets. Each data set has its own best-performing method chosen from this collection. The evaluations were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, which is another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression. PMID:18255520

  5. Compressible magnetohydrodynamic sawtooth crash

    NASA Astrophysics Data System (ADS)

    Sugiyama, Linda E.

    2014-02-01

    In a toroidal magnetically confined plasma at low resistivity, compressible magnetohydrodynamics (MHD) predicts that an m = 1/n = 1 sawtooth has a fast, explosive crash phase with abrupt onset, rate nearly independent of resistivity, and localized temperature redistribution similar to experimental observations. Large scale numerical simulations show that the 1/1 MHD internal kink grows exponentially at a resistive rate until a critical amplitude, when the plasma motion accelerates rapidly, culminating in fast loss of the temperature and magnetic structure inside q < 1, with somewhat slower density redistribution. Nonlinearly, for small effective growth rate the perpendicular momentum rate of change remains small compared to its individual terms ∇p and J × B until the fast crash, so that the compressible growth rate is determined by higher order terms in a large aspect ratio expansion, as in the linear eigenmode. Reduced MHD fails completely to describe the toroidal mode; no Sweet-Parker-like reconnection layer develops. Important differences result from toroidal mode coupling effects. A set of large aspect ratio compressible MHD equations shows that the large aspect ratio expansion also breaks down in typical tokamaks with r_{q=1}/R_o ≈ 1/10 and a/R_o ≈ 1/3. In the large aspect ratio limit, failure extends down to much smaller inverse aspect ratio, at growth rate scalings γ = O(ε^2). Higher order aspect ratio terms, including B̃_φ, become important. Nonlinearly, higher toroidal harmonics develop faster and to a greater degree than for large aspect ratio and help to accelerate the fast crash. The perpendicular momentum property applies to other transverse MHD instabilities, including m ≥ 2 magnetic islands and the plasma edge.

  6. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy, and robustness. PMID:26352631
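
    A minimal sketch of the compressed appearance features, assuming an Achlioptas-style very sparse random projection matrix; the multiscale feature extraction, classifier update, and search strategy of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_random_matrix(n_compressed, n_features, s=None):
    """Very sparse random projection matrix (Achlioptas-style).

    Entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1 - 1/s, 1/(2s).
    """
    if s is None:
        s = int(np.sqrt(n_features))
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    values = np.array([np.sqrt(s), 0.0, -np.sqrt(s)])
    return values[rng.choice(3, size=(n_compressed, n_features), p=probs)]

# Project a 10,000-dimensional feature vector down to 50 compressed features.
R = sparse_random_matrix(50, 10_000)
features = rng.random(10_000)
compressed = R @ features
print(compressed.shape)   # (50,) -- these would feed the naive Bayes classifier
```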

  7. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  8. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, comprising the following: a generally cylindrical piston body, including a head portion defining the forward end of the body and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area, each ring engaging the cylinder wall to reduce loss of lubricant from the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means that provides fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means, which includes a flow line, supplies oil from a predetermined level above the level of oil provided to the first pump means. This is so that, should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  9. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  10. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the faster the transmission and the greater the time savings. In communication, we always want to transmit data efficiently and without noise. This paper provides several techniques for lossless text-type data compression, along with comparative results for single and multiple compression, which help to identify the better compression output and to develop compression algorithms.
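
    A small standard-library illustration of the kind of comparison the paper describes: several lossless compressors applied to the same text, plus a check of whether a second compression pass helps. The algorithm choices here are illustrative, not the paper's exact set.

```python
import bz2
import lzma
import zlib

text = ("the quick brown fox jumps over the lazy dog " * 200).encode("utf-8")

# Single compression with three common lossless codecs.
single = {
    "zlib": zlib.compress(text, 9),
    "bz2": bz2.compress(text, 9),
    "lzma": lzma.compress(text),
}
for name, blob in single.items():
    print(f"{name}: {len(text)} -> {len(blob)} bytes")

# Multi-compression: running a second compressor over already-compressed data
# rarely shrinks it further, because the first pass removed the redundancy.
double = zlib.compress(bz2.compress(text, 9), 9)
print("bz2 then zlib:", len(double), "bytes")
```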

  11. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free. PMID:26274428

  12. Compression test apparatus

    NASA Technical Reports Server (NTRS)

    Shanks, G. C. (Inventor)

    1981-01-01

    An apparatus for compressive testing of a test specimen may comprise vertically spaced upper and lower platen members between which a test specimen may be placed. The platen members are supported by a fixed support assembly. A load indicator is interposed between the upper platen member and the support assembly for supporting the total weight of the upper platen member and any additional weight which may be placed on it. Operating means are provided for moving the lower platen member upwardly toward the upper platen member whereby an increasing portion of the total weight is transferred from the load indicator to the test specimen.

  13. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.

  14. Ultrasound beamforming using compressed data.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2012-05-01

    The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks and several tiles before JPEG and JPEG2000 data compression is applied, respectively. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression to produce an average error of lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8. PMID:22434817

  15. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
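
    A minimal sketch of the colormap-sorting idea, using luminance ordering as one simple choice of sort key (the paper studies the sorting more generally); after remapping, numerically close indices refer to similar colors, which is what predictive coding can then exploit on natural color-mapped images.

```python
import numpy as np

rng = np.random.default_rng(2)

# A color-mapped image: pixel values are indices into a 256-entry RGB palette.
palette = rng.integers(0, 256, size=(256, 3))
indices = rng.integers(0, 256, size=(64, 64))

# Sort the palette by luminance so that nearby indices mean similar colors.
luminance = palette @ np.array([0.299, 0.587, 0.114])
order = np.argsort(luminance)          # order[k] = old index of the k-th darkest color
sorted_palette = palette[order]

# Remap the pixel indices so they point into the sorted palette.
inverse = np.empty_like(order)
inverse[order] = np.arange(256)
remapped = inverse[indices]

# The displayed colors are unchanged; only the index structure differs.
assert np.array_equal(palette[indices], sorted_palette[remapped])
```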

  16. Mechanical Metamaterials with Negative Compressibility Transitions

    NASA Astrophysics Data System (ADS)

    Motter, Adilson

    2015-03-01

    When tensioned, ordinary materials expand along the direction of the applied force. In this presentation, I will explore network concepts to design metamaterials exhibiting negative compressibility transitions, during which the material undergoes contraction when tensioned (or expansion when pressured). Such transitions, which are forbidden in thermodynamic equilibrium, are possible during the decay of metastable, super-strained states. I will introduce a statistical physics theory for negative compressibility transitions, derive a first-principles model to predict these transitions, and present a validation of the model using molecular dynamics simulations. Aside from its immediate mechanical implications, our theory points to a wealth of analogous inverted responses, such as inverted susceptibility or heat-capacity transitions, allowed when considering realistic scales. This research was done in collaboration with Zachary Nicolaou, and was supported by the National Science Foundation and the Alfred P. Sloan Foundation.

  17. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level L is r·2^(-L), where r is the display visual resolution in pixels/degree. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  18. Compressive Sensing DNA Microarrays

    PubMed Central

    2009-01-01

    Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed. PMID:19158952

  19. Compressive Bilateral Filtering.

    PubMed

    Sugimoto, Kenjiro; Kamata, Sei-Ichiro

    2015-11-01

    This paper presents an efficient constant-time bilateral filter, called a compressive bilateral filter (CBLF), that produces a near-optimal performance tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment. Constant-time means that the computational complexity is independent of the filter window size. Although many existing constant-time bilateral filters have been proposed, step by step, to pursue a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. It is important to discuss this question because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from a viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet touched the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff by two key ideas: 1) an approximate Gaussian range kernel through Fourier analysis and 2) a period length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability. PMID:26068315

  20. Cancer suppression by compression.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2015-01-01

    Recent experiments indicate that uniformly compressing a cancer mass at its surface tends to transform many of its cells from proliferative to functional forms. Cancer cells suffer from the Warburg effect, resulting from depleted levels of cell membrane potentials. We show that the compression results in added free energy and that some of the added energy contributes distortional pressure to the cells. This excites the piezoelectric effect on the cell membranes, in particular raising the potentials on the membranes of cancer cells from their depleted levels to near-normal levels. In a sample calculation, a gain of 150 mV is so attained. This allows the Warburg effect to be reversed. The result is at least partially regained function and accompanying increased molecular order. The transformation remains even when the pressure is turned off, suggesting a change of phase; these possibilities are briefly discussed. It is found that if the pressure is, in particular, applied adiabatically the process obeys the second law of thermodynamics, further validating the theoretical model. PMID:25520262

  1. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  2. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  3. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

    The objective of the proposed research project was the analysis of single-point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows, with and without chemical reactions, were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  4. ECG data compression by modeling.

    PubMed Central

    Madhukar, B.; Murthy, I. S.

    1992-01-01

    This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
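
    The paper's parametric modeling of the DCT spectrum is more elaborate than this, but a minimal sketch of the shared starting point, transforming an ECG segment and keeping only the largest DCT coefficients, looks as follows (the signal, names, and retained fraction are illustrative assumptions; the quoted compression ratios are not reproduced by this simplification):

      import numpy as np
      from scipy.fft import dct, idct

      def dct_compress(ecg, keep_fraction=0.1):
          """Keep only the largest-magnitude DCT coefficients of an ECG segment
          (a simplified transform-domain step, not the paper's parametric model)."""
          coeffs = dct(ecg, norm='ortho')
          n_keep = max(1, int(keep_fraction * coeffs.size))
          idx = np.argsort(np.abs(coeffs))[-n_keep:]      # indices of retained coefficients
          sparse = np.zeros_like(coeffs)
          sparse[idx] = coeffs[idx]
          return sparse, idct(sparse, norm='ortho')       # stored coefficients, reconstruction

      # usage sketch on a crude synthetic beat-like segment
      t = np.linspace(0, 1, 500)
      ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.exp(-((t - 0.5) / 0.01) ** 2)
      _, rec = dct_compress(ecg, 0.1)
      print("RMS reconstruction error:", np.sqrt(np.mean((ecg - rec) ** 2)))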

  5. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 µm). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  6. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  7. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  8. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  9. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
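
    A minimal sketch of the core mechanism, chunk-wise least-squares Chebyshev fitting with a user-configured error bound (polycomp itself adds a discrete Chebyshev transform of the residuals and other refinements; the chunk size, degree, and names below are illustrative assumptions):

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def compress_chunks(y, chunk=128, deg=5, max_err=1.0):
          """Fit each chunk with a low-degree Chebyshev polynomial; keep the
          coefficients if the worst-case error stays below max_err, otherwise
          store the raw samples (a simplified, lossy polycomp-like scheme)."""
          out = []
          x = np.linspace(-1.0, 1.0, chunk)
          for start in range(0, len(y), chunk):
              block = y[start:start + chunk]
              if len(block) < chunk:
                  out.append(('raw', block))            # short tail stored verbatim
                  continue
              coeffs = C.chebfit(x, block, deg)
              err = np.max(np.abs(C.chebval(x, coeffs) - block))
              out.append(('cheb', coeffs) if err <= max_err else ('raw', block))
          return out

      def decompress_chunks(parts, chunk=128):
          x = np.linspace(-1.0, 1.0, chunk)
          return np.concatenate([C.chebval(x, p) if kind == 'cheb' else p
                                 for kind, p in parts])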

  10. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
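
    The patent's recursive subdivision is specified above; a closely related and widely used illustration is median-cut quantization, which also splits color space until each box is small enough to be represented by one LUT entry addressed by an 8-bit pointer. The sketch below is an illustrative stand-in, not the patented procedure:

      import numpy as np

      def median_cut(pixels, n_colors=256):
          """Build a small color LUT by recursively splitting the box with the
          widest color-channel spread at its median (illustrative median cut)."""
          boxes = [pixels]
          while len(boxes) < n_colors:
              boxes.sort(key=lambda b: b.ptp(axis=0).max(), reverse=True)
              box = boxes.pop(0)
              if len(box) < 2:                        # nothing left to split
                  boxes.append(box)
                  break
              ch = box.ptp(axis=0).argmax()           # channel with the widest spread
              box = box[box[:, ch].argsort()]
              mid = len(box) // 2
              boxes += [box[:mid], box[mid:]]
          return np.array([b.mean(axis=0) for b in boxes])   # LUT colors

      def to_indices(pixels, lut):
          """Assign each pixel the 8-bit pointer of its nearest LUT color."""
          d = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(-1)
          return d.argmin(axis=1).astype(np.uint8)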

  11. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  12. Gas compression apparatus

    NASA Technical Reports Server (NTRS)

    Terp, L. S. (Inventor)

    1977-01-01

    Apparatus for transferring gas from a first container to a second container of higher pressure was devised. A free-piston compressor having a driving piston and cylinder, and a smaller diameter driven piston and cylinder, comprises the apparatus. A rod member connecting the driving and driven pistons functions for mutual reciprocation in the respective cylinders. A conduit may be provided for supplying gas to the driven cylinder from the first container. Also provided is apparatus for introducing gas to the driving piston, to compress gas by the driven piston for transfer to the second higher pressure container. The system is useful in transferring spacecraft cabin oxygen into higher pressure containers for use in extravehicular activities.

  13. Compressed hyperspectral sensing

    NASA Astrophysics Data System (ADS)

    Tsagkatakis, Grigorios; Tsakalides, Panagiotis

    2015-03-01

    Acquisition of high dimensional Hyperspectral Imaging (HSI) data using limited dimensionality imaging sensors has led to designs with restricted capabilities that hinder the proliferation of HSI. To overcome this limitation, novel HSI architectures strive to minimize the strict requirements of HSI by introducing computation into the acquisition process. A framework that allows the integration of acquisition with computation is the recently proposed framework of Compressed Sensing (CS). In this work, we propose a novel HSI architecture that exploits the sampling and recovery capabilities of CS to achieve a dramatic reduction in HSI acquisition requirements. In the proposed architecture, signals from multiple spectral bands are multiplexed before being recorded by the imaging sensor. Reconstruction of the full hyperspectral cube is achieved by exploiting a dictionary of elementary spectral profiles in a unified minimization framework. Simulation results suggest that high quality recovery is possible from a single or a small number of multiplexed frames.
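
    A minimal sketch of the recovery step under the stated model, a few multiplexed band measurements plus a dictionary of elementary spectral profiles, solved per pixel with an l1-regularized iterative shrinkage (ISTA) loop; the dictionary, dimensions, and parameters below are illustrative assumptions rather than the authors' setup:

      import numpy as np

      def ista(y, A, lam=0.05, n_iter=300):
          """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative
          soft-thresholding (ISTA); x holds dictionary coefficients of one
          pixel's spectrum and A = Phi @ D combines multiplexing and dictionary."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = A.T @ (A @ x - y)
              z = x - g / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return x

      # toy example: 12 multiplexed frames of a 64-band spectrum, sparse in a random dictionary
      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 128))           # spectral-profile dictionary (assumed)
      Phi = rng.standard_normal((12, 64))          # band-multiplexing measurement matrix (assumed)
      x_true = np.zeros(128)
      x_true[rng.choice(128, 3, replace=False)] = 1.0
      y = Phi @ D @ x_true
      spectrum_hat = D @ ista(y, Phi @ D)          # recovered 64-band spectrum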

  14. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research on network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  15. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2007-02-27

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  16. Edge compression manifold apparatus

    DOEpatents

    Renzi, Ronald F.

    2004-12-21

    A manifold for connecting external capillaries to the inlet and/or outlet ports of a microfluidic device for high pressure applications is provided. The fluid connector for coupling at least one fluid conduit to a corresponding port of a substrate that includes: (i) a manifold comprising one or more channels extending therethrough wherein each channel is at least partially threaded, (ii) one or more threaded ferrules each defining a bore extending therethrough with each ferrule supporting a fluid conduit wherein each ferrule is threaded into a channel of the manifold, (iii) a substrate having one or more ports on its upper surface wherein the substrate is positioned below the manifold so that the one or more ports is aligned with the one or more channels of the manifold, and (iv) device to apply an axial compressive force to the substrate to couple the one or more ports of the substrate to a corresponding proximal end of a fluid conduit.

  17. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
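
    A minimal sketch of the thresholding experiment using PyWavelets: zero the smallest 95 percent of wavelet coefficients of a floating-point elevation grid and reconstruct. Here 'db3' (a 6-tap Daubechies filter) stands in for DAUB6, and the terrain is synthetic:

      import numpy as np
      import pywt

      def sparsify_dem(dem, wavelet='db3', level=4, drop=0.95):
          """Zero the smallest `drop` fraction of wavelet coefficients (by
          magnitude) and reconstruct the DEM, as in the thresholding experiment."""
          coeffs = pywt.wavedec2(dem, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), drop)           # cut-off magnitude
          arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
          sparse = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
          return pywt.waverec2(sparse, wavelet)

      # synthetic terrain; elevation residuals (and derived slope/aspect) can then be examined
      x, y = np.meshgrid(np.linspace(0, 4, 256), np.linspace(0, 4, 256))
      dem = 100 * np.sin(x) * np.cos(y) + 5 * np.random.default_rng(1).standard_normal(x.shape)
      rec = sparsify_dem(dem)[:256, :256]
      print("max elevation residual:", np.abs(rec - dem).max())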

  18. Compression and compression fatigue testing of composite laminates

    NASA Technical Reports Server (NTRS)

    Porter, T. R.

    1982-01-01

    The effects of moisture and temperature on the fatigue and fracture response of composite laminates under compression loads were investigated. The structural laminates studied were an intermediate stiffness graphite-epoxy composite (a typical angle-ply laminate and a typical fan blade laminate). Full and half penetration slits and impact delaminations were the defects examined. Results are presented which show the effects of moisture on the fracture and fatigue strength at room temperature, 394 K (250 F), and 422 K (300 F). Static test results show the effects of defect size and type on the compression-fracture strength under moisture and thermal environments. The cyclic test results compare the fatigue lives and residual compression strength under compression-only and under tension-compression fatigue loading.

  19. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charges come from the Einstein photoelectric effect. Applying a manufacturability principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the savings is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as dual Photon Detector (PD) analog circuitry for change detection that decides whether to skip or admit a frame at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level by biasing the charge-transport voltage toward neighboring buckets or not; if not, the charge goes to ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet Wrapper, at the sensor level. We compare (i) pre-processing by FFT, thresholding of the significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the degree of information (d.o.i.) K(t), determined during new-frame selection by the SAH circuitry, dictates the purely random linear sparse combination of measurement data à la [Φ]M,N with M(t) = K(t) log N(t).

  20. Compressive optical imaging systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuehao

    Compared to the classic Nyquist sampling theorem, Compressed Sensing or Compressive Sampling (CS) was proposed as a more efficient alternative for sampling sparse signals. In this dissertation, we discuss the implementation of the CS theory in building a variety of optical imaging systems. CS-based Imaging Systems (CSISs) exploit the sparsity of optical images in their transformed domains by imposing incoherent CS measurement patterns on them. The amplitudes and locations of sparse frequency components of optical images in their transformed domains can be reconstructed from the CS measurement results by solving an l1-regularized minimization problem. In this work, we review the theoretical background of the CS theory and present two hardware implementation schemes for CSISs, including a single pixel detector based scheme and an array detector based scheme. The first implementation scheme is suitable for acquiring Two-Dimensional (2D) spatial information of the imaging scene. We demonstrate the feasibility of this implementation scheme by developing a single pixel camera, a multispectral imaging system, and an optical sectioning microscope for fluorescence microscopy. The array detector based scheme is suitable for hyperspectral imaging applications, wherein both the spatial and spectral information of the imaging scene are of interest. We demonstrate the feasibility of this scheme by developing a Digital Micromirror Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement processes on the Three-Dimensional (3D) spatial/spectral information of the imaging scene. Tens of spectral images can be reconstructed from the DMD-SSI system simultaneously without any mechanical or temporal scanning processes.

  1. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves
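
    The common thread in these schemes is per-flow delta encoding: compressor and decompressor share context, and only fields that changed are sent. A toy sketch of that idea (not any specific RFC's wire format, and without the refresh and CRC machinery that limits loss propagation):

      def compress_header(header, context):
          """Send only the fields that differ from the shared per-flow context
          (toy delta encoding; real schemes also narrow deltas to a few bits)."""
          delta = {k: v for k, v in header.items() if context.get(k) != v}
          context.update(header)            # compressor-side context tracks the last header
          return delta

      def decompress_header(delta, context):
          """Rebuild the full header from the shared context plus the delta.
          If a delta packet is lost, the contexts desynchronize, which is why
          the real schemes add periodic refreshes and checksums."""
          context.update(delta)
          return dict(context)

      # usage: consecutive TCP/IP-like headers of one flow (field names illustrative)
      ctx_tx, ctx_rx = {}, {}
      h1 = {'src': '10.0.0.1', 'dst': '10.0.0.2', 'seq': 1000, 'ack': 500, 'win': 8760}
      h2 = {'src': '10.0.0.1', 'dst': '10.0.0.2', 'seq': 2460, 'ack': 500, 'win': 8760}
      for h in (h1, h2):
          wire = compress_header(h, ctx_tx)          # h2 shrinks to just {'seq': 2460}
          assert decompress_header(wire, ctx_rx) == h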

  2. Compressible turbulent mixing: Effects of compressibility and Schmidt number

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2015-11-01

    The effects of compressibility and Schmidt number on a passive scalar in compressible turbulence were studied. Regarding compressibility, the scalar spectrum followed the k^(-5/3) inertial-range scaling and suffered negligible influence from compressibility. The transfer of scalar flux was reduced by the transition from incompressible to compressible flow but was enhanced by the growth of the Mach number. The intermittency parameter increased with the Mach number and decreased with the growth of the compressive mode of the driven forcing. The dependence of the mixing timescale on compressibility showed that, for the driven forcing, the compressive mode is less efficient in enhancing scalar mixing. Regarding the Schmidt number (Sc), in the inertial-convective range the scalar spectrum obeyed the k^(-5/3) scaling. For Sc >> 1, a k^(-1) power law appeared in the viscous-convective range, while for Sc << 1, a k^(-17/3) power law was identified in the inertial-diffusive range. The transfer of scalar flux grew with Sc. In the Sc >> 1 flow the scalar field rolled up and mixed thoroughly, while the Sc << 1 flow had only large-scale, cloudlike structures. In both the Sc >> 1 and Sc << 1 flows, the spectral densities of scalar advection and dissipation followed the k^(-5/3) scaling, indicating that in compressible turbulence the processes of advection and dissipation might conform to the Kolmogorov picture. Finally, comparison with incompressible results showed that the scalar in compressible turbulence lacks a conspicuous bump structure in its spectrum and is more intermittent in the dissipative range.

  3. Compression strength of composite primary structural components

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.

    1992-01-01

    A status report of work performed during the period May 1, 1992 to October 31, 1992 is presented. Research was conducted in three areas: delamination initiation in postbuckled dropped-ply laminates; stiffener crippling initiated by delamination; and pressure pillowing of an orthogonally stiffened cylindrical shell. The geometrically nonlinear response and delamination initiation of compression-loaded dropped-ply laminates are analyzed. A computational model of the stiffener specimens that includes the capability to predict the interlaminar response at the flange free edge in postbuckling is developed. The distribution of the interacting loads between the stiffeners and the shell wall, particularly the load transfer at the stiffener crossing point, is determined.

  4. 14. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, strut, counters, and laterals. - Dry Creek Bridge, Spanning Dry Creek at Cook Road, Ione, Amador County, CA

  5. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  6. Compression failure of composite laminates

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.

    1983-01-01

    This presentation attempts to characterize the compressive behavior of Hercules AS-1/3501-6 graphite-epoxy composite. The effect of varying specimen geometry on test results is examined. The transition region is determined between buckling and compressive failure. Failure modes are defined and analytical models to describe these modes are presented.

  7. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  8. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
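
    A minimal one-dimensional sketch of the strategy described above: zero the small, noise-level wavelet coefficients, quantize the survivors, and hand the now highly redundant stream to a lossless coder (zlib here). The wavelet, threshold rule, and quantization step are illustrative assumptions, and the achievable factor is data dependent:

      import numpy as np
      import pywt
      import zlib

      def wavelet_denoise_compress(signal, wavelet='db4', level=5,
                                   keep_fraction=0.2, step=1e-3):
          """Zero the smallest wavelet coefficients, quantize the survivors,
          and entropy-code the (mostly zero) stream with a lossless compressor."""
          coeffs, slices = pywt.coeffs_to_array(pywt.wavedec(signal, wavelet, level=level))
          thresh = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
          coeffs = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
          q = np.round(coeffs / step).astype(np.int32)       # uniform quantization (step is illustrative)
          packed = zlib.compress(q.tobytes(), level=9)       # many zeros -> high lossless ratio
          ratio = signal.astype(np.float64).nbytes / len(packed)
          recon = pywt.waverec(pywt.array_to_coeffs(q.astype(float) * step, slices,
                                                    output_format='wavedec'), wavelet)
          return packed, ratio, recon[:len(signal)]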

  9. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  10. Compression Shocks of Detached Flow

    NASA Technical Reports Server (NTRS)

    Eggink

    1947-01-01

    It is known that compression shocks which lead from supersonic to subsonic velocity cause the flow to separate on impact on a rigid wall. Such shocks appear at bodies with circular symmetry or wing profiles on locally exceeding sonic velocity, and in Laval nozzles with too high a back pressure. The form of the compression shocks observed therein is investigated.

  11. Compressive Deconvolution in Medical Ultrasound Imaging.

    PubMed

    Chen, Zhouye; Basarab, Adrian; Kouame, Denis

    2016-03-01

    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. By exploiting a unified formulation of the direct acquisition model, combining random projections with 2D convolution by a spatially invariant point spread function, our approach jointly reduces the data volume and improves image quality. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data. PMID:26513780

  12. Hyperspectral fluorescence microscopy based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed; Candes, Emmanuel; Dahan, Maxime

    2012-03-01

    In fluorescence microscopy, one can distinguish two kinds of imaging approaches, wide field and raster scan microscopy, differing by their excitation and detection scheme. In both imaging modalities the acquisition is independent of the information content of the image. Rather, the number of acquisitions N is imposed by the Nyquist-Shannon theorem. However, in practice, many biological images are compressible (or, equivalently here, sparse), meaning that they depend on a number of degrees of freedom K that is smaller than their size N. Recently, the mathematical theory of compressed sensing (CS) has shown how the sensing modality could take advantage of the image sparsity to reconstruct images with no loss of information while greatly reducing the number M of acquisitions. Here we present a novel fluorescence microscope designed along the principles of CS. It uses a spatial light modulator (DMD) to create structured wide field excitation patterns and a sensitive point detector to measure the emitted fluorescence. On sparse fluorescent samples, we could achieve compression ratios N/M of up to 64, meaning that an image can be reconstructed with a number of measurements of only 1.5% of its pixel count. Furthermore, we extend our CS acquisition scheme to a hyperspectral imaging system.

  13. Multiview image compression based on LDV scheme

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent

    2011-03-01

    In recent years, we have seen several different approaches dealing with multiview compression. First, we can find the H264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth-maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth-map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency across views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Finally, we discuss the signal deformations introduced by our approach.

  14. Microbunching Instability due to Bunch Compression

    SciTech Connect

    Huang, Zhirong; Wu, Juhao; Shaftan, Timur; /Brookhaven

    2005-12-13

    Magnetic bunch compressors are designed to increase the peak current while maintaining the transverse and longitudinal emittances in order to drive a short-wavelength free electron laser (FEL). Recently, several linac-based FEL experiments have observed self-developing micro-structures in the longitudinal phase space of electron bunches undergoing strong compression [1-3]. Meanwhile, computer simulations of coherent synchrotron radiation (CSR) effects in bunch compressors illustrate that a CSR-driven microbunching instability may significantly amplify small longitudinal density and energy modulations and hence degrade the beam quality [4]. Various theoretical models have since been developed to describe this instability [5-8]. It is also pointed out that the microbunching instability may be driven strongly by the longitudinal space charge (LSC) field [9,10] and by the linac wakefield [11] in the accelerator, leading to a very large overall gain of a two-stage compression system such as found in the Linac Coherent Light Source (LCLS) [12]. This paper reviews theory and simulations of the microbunching instability due to bunch compression, the proposed method to suppress its effects for short-wavelength FELs, and experimental characterizations of beam modulations in linear accelerators. A related topic of interest is the microbunching instability in storage rings, which has been reported in the previous ICFA beam dynamics newsletter No. 35 (http://wwwbd.fnal.gov/icfabd/Newsletter35.pdf).

  15. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  16. Analytical model for ramp compression

    NASA Astrophysics Data System (ADS)

    Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun

    2016-08-01

    An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulation are performed successively, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.
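
    For orientation, a standard simple-wave relation often used as the analytical backbone of ramp-compression design (stated here as a general sketch, not necessarily the exact model of this paper) links the particle velocity, the isentropic sound speed, and the characteristics along which each pressure increment propagates:

      u(\rho) = u_0 + \int_{\rho_0}^{\rho} \frac{c(\rho')}{\rho'}\,\mathrm{d}\rho' ,
      \qquad \left.\frac{dx}{dt}\right|_{\text{characteristic}} = u + c(\rho) .

    The compression stays isentropic only while later, faster characteristics do not overtake earlier ones; this bound on the steepness of the pressure pulse is what fixes the maximum isentropic region width for a given loading source.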

  17. Increasing FTIR spectromicroscopy speed and resolution through compressive imaging

    SciTech Connect

    Gallet, Julien; Riley, Michael; Hao, Zhao; Martin, Michael C

    2007-10-15

    At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time, and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, the Beamline 1.4.4 Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.

  18. Image analysis and compression: renewed focus on texture

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.

    2010-01-01

    We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.

  19. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
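
    For reference, the kind of greedy baseline that CoSaMP-style recovery builds on is orthogonal matching pursuit; the connected-subtree (wavelet-tree) prior that the paper adds is not reproduced in this illustrative sketch:

      import numpy as np

      def omp(y, A, k):
          """Orthogonal matching pursuit: greedily pick k columns of A that best
          explain the compressive ECG measurements y (baseline, no tree prior)."""
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None) # refit on current support
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x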

  20. About the use of stoichiometric hydroxyapatite in compression - incidence of manufacturing process on compressibility.

    PubMed

    Pontier, C; Viana, M; Champion, E; Bernache-Assollant, D; Chulia, D

    2001-05-01

    Literature concerning calcium phosphates in pharmacy exhibits the chemical diversity of the compounds available. Some excipient manufacturers offer hydroxyapatite as a direct compression excipient, but the chemical analysis of this compound usually shows a variability of the composition: the so-called materials can be hydroxyapatite or other calcium phosphates, uncalcined (i.e. with a low crystallinity) or calcined and well-crystallized hydroxyapatite. This study highlights the influence of the crystallinity of one compound (i.e., hydroxyapatite) on its mechanical properties. Stoichiometric hydroxyapatite is synthesized, and compounds differing in their crystallinity, manufacturing process and particle size are manufactured. X-ray diffraction analysis is used to investigate the chemical nature of the compounds. The mechanical study (study of the compression, diametral compressive strength, Heckel plots) highlights the negative effect of calcination on the mechanical properties. Porosity and specific surface area measurements show the effect of calcination on compaction. Uncalcined materials show bulk and mechanical properties in accordance with their use as direct compression excipients. PMID:11343890

  1. Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Fichtner, Andreas; de la Puente, Josep; Hanzich, Mauricio

    2015-04-01

    We present compression techniques tailored to iterative nonlinear minimization methods that significantly reduce the memory requirements to store the forward wavefield for the computation of sensitivity kernels. Full-waveform inversion on 3d data sets requires massive computing and memory capabilities. Adjoint techniques offer a powerful tool to compute the first and second derivatives. However, due to the asynchronous nature of forward and adjoint simulations, a severe bottleneck is introduced by the necessity to access both wavefields simultaneously when computing sensitivity kernels. There exist two opposing strategies to deal with this challenge. On the one hand, conventional approaches save the whole forward wavefield to the disk, which yields a significant I/O overhead and might require several terabytes of storage capacity per seismic event. On the other hand, checkpointing techniques allow an almost arbitrary reduction in memory requirements to be traded for a (potentially large) number of additional forward simulations. We propose an alternative approach that strikes a balance between memory requirements and the need for additional computations. Here, we aim at compressing the forward wavefield in such a way that (1) the I/O overhead is reduced substantially without the need for additional simulations, (2) the costs for compressing/decompressing the wavefield are negligible, and (3) the approximate derivatives resulting from the compressed forward wavefield do not affect the rate of convergence of a Newton-type minimization method. To this end, we apply an adaptive re-quantization of the displacement field that uses dynamically adjusted floating-point accuracies - i.e., a locally varying number of bits - to store the data. Furthermore, the spectral element functions are adaptively downsampled to a lower polynomial degree. In addition, a sliding-window cubic spline re-interpolates the temporal snapshots to recover a smooth signal. Moreover, a preprocessing step
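
    A minimal sketch of the re-quantization ingredient, storing each block of a snapshot with a locally chosen number of bits instead of full floating point; the block size, candidate bit depths, and error rule below are illustrative assumptions:

      import numpy as np

      def quantize_block(block, bits):
          """Map a float block onto 2**bits uniform levels over its own range."""
          lo, hi = float(block.min()), float(block.max())
          scale = (hi - lo) / (2 ** bits - 1) or 1.0
          codes = np.round((block - lo) / scale).astype(np.uint32)
          return codes, lo, scale

      def dequantize_block(codes, lo, scale):
          return codes.astype(np.float64) * scale + lo

      def compress_wavefield(snapshot, block=1024, tol=1e-3):
          """Pick, per block, the smallest bit depth whose quantization error
          stays below `tol` times the block's dynamic range (adaptive re-quantization)."""
          out = []
          flat = snapshot.ravel()
          for start in range(0, flat.size, block):
              b = flat[start:start + block]
              for bits in (4, 6, 8, 12, 16):
                  codes, lo, scale = quantize_block(b, bits)
                  err = np.max(np.abs(dequantize_block(codes, lo, scale) - b))
                  if err <= tol * (np.ptp(b) or 1.0):
                      break
              out.append((bits, codes, lo, scale))
          return out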

  2. Planar velocity measurements in compressible mixing layers

    NASA Astrophysics Data System (ADS)

    Urban, William David

    1999-10-01

    The efficiency of high-Mach number airbreathing propulsion devices is critically dependent upon the mixing of gases in turbulent shear flows. However, compressibility is known to suppress the growth rates of these mixing layers, posing a problem of both practical and scientific interest. In the present study, particle image velocimetry (PIV) is used to obtain planar, two-component velocity fields for planar gaseous shear layers at convective Mach numbers Mc of 0.25, 0.63, and 0.76. The experiments are performed in a large-scale blowdown wind tunnel, with high-speed freestream Mach numbers up to 2.25 and shear-layer Reynolds numbers up to 10^6. The instantaneous data are analyzed to produce maps of derived quantities such as vorticity, and ensemble averaged to provide turbulence statistics. Specific issues relating to the application of PIV to supersonic flows are addressed. In addition to the fluid-velocity measurements, we present double-pulsed scalar visualizations, permitting inference of the convective velocity of the large-scale structures, and examine the interaction of a weak wave with the mixing layer. The principal change associated with compressibility is seen to be the development of multiple high-gradient regions in the instantaneous velocity field, disrupting the spanwise-coherent `roller' structure usually associated with incompressible layers. As a result, the vorticity peaks reside in multiple thin sheets, segregated in the transverse direction. This suggests a decrease in cross-stream communication and a disconnection of the entrainment processes at the two interfaces. In the compressible case, steep-gradient regions in the instantaneous velocity field often correspond closely with the local sonic line, suggesting a sensitivity to lab-frame disturbances; this could in turn explain the effectiveness of sub-boundary layer mixing enhancement strategies in this flow. Large-ensemble statistics bear out the observation from previous single-point

  3. Compression relief engine brake

    SciTech Connect

    Meneely, V.A.

    1987-10-06

    A compression relief brake is described for four cycle internal-combustion engines, comprising: a pressurized oil supply; means for selectively pressurizing a hydraulic circuit with oil from the oil supply; a master piston and cylinder communicating with a slave piston and cylinder via the hydraulic circuit; an engine exhaust valve mechanically coupled to the engine and timed to open during the exhaust cycle of the engine, the exhaust valve being coupled to the slave piston. The exhaust valve is spring-biased in a closed state to contact a valve seat; a sleeve frictionally and slidably disposed within a cavity defined by the slave piston which cavity communicates with the hydraulic circuit. When the hydraulic circuit is selectively pressurized and the engine is operating, the sleeve entraps an incompressible volume of oil within the cavity to generate a displacement of the slave piston within the slave cylinder, whereby a first gap is maintained between the exhaust valve and its associated seat; and means for reciprocally activating the master piston for increasing the pressure within the previously pressurized hydraulic circuit during at least a portion of the expansion cycle of the engine, whereby a second gap is reciprocally maintained between the exhaust valve and its associated seat.

  4. Variable compression ratio control

    SciTech Connect

    Johnson, K.A.

    1988-04-19

    In a four cycle engine that includes a crankshaft having a plural number of main shaft sections defining the crankshaft rotational axis and a plural number of crank arms defining orbital shaft sections, a plural number of combustion cylinders, a movable piston within each cylinder, each cylinder and its associated piston defining a combustion chamber, a connecting rod connecting each piston to an orbital shaft section of the crankshaft, and a plural number of stationary support walls spaced along the crankshaft axis for absorbing crankshaft forces: the improvement is described comprising means for adjustably supporting the crankshaft on the stationary walls such that the crankshaft rotational axis is adjustable along the piston-cylinder axis for the purpose of varying a resulting engine compression ratio; the adjustable support means comprising a circular cavity in each stationary wall. A circular disk swivably is seated in each cavity, each circular disk having a circular opening therethrough eccentric to the disk center. The crankshaft is arranged so that respective ones of its main shaft sections are located within respective ones of the circular openings; means for rotating each circular disk around its center so that the main shaft sections of the crankshaft are adjusted toward and away from the combustion chamber; a pinion gear on an output end of the crankshaft in axial alignment with and positioned beyond the respective ones of the main shaft sections, and a rotary output gear located about and engaged with teeth extending from the pinion gear.

  5. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit plane, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and the decoded image at the first and second iteration steps of the wavelet transformation. This adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to perform adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in Java.

  6. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  7. Best compression: Reciprocating or rotary?

    SciTech Connect

    Cahill, C.

    1997-07-01

    A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor (technically a helical rotor compressor) dates back to 1878. That was when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.

  8. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  9. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904

  10. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels; the pixels have to satisfy a set of imaging constraints so as to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than the state-of-the-art H.264 intra-frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  11. Partial transparency of compressed wood

    NASA Astrophysics Data System (ADS)

    Sugimoto, Hiroyuki; Sugimori, Masatoshi

    2016-05-01

    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path in the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed enough to close the lumina showed optical transparency. Because compressing the wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  12. A Quadratic Closure for Compressible Turbulence

    SciTech Connect

    Futterman, J A

    2008-09-16

    We have investigated a one-point closure model for compressible turbulence based on third- and higher order cumulant discard for systems undergoing rapid deformation, such as might occur downstream of a shock or other discontinuity. In so doing, we find the lowest order contributions of turbulence to the mean flow, which lead to criteria for Adaptive Mesh Refinement. Rapid distortion theory (RDT) as originally applied by Herring closes the turbulence hierarchy of moment equations by discarding third order and higher cumulants. This is similar to the fourth-order cumulant discard hypothesis of Millionshchikov, except that the Millionshchikov hypothesis was taken to apply to incompressible homogeneous isotropic turbulence generally, whereas RDT is applied only to fluids undergoing a distortion that is 'rapid' in the sense that the interaction of the mean flow with the turbulence overwhelms the interaction of the turbulence with itself. It is also similar to Gaussian closure, in which both second and fourth-order cumulants are retained. Motivated by RDT, we develop a quadratic one-point closure for rapidly distorting compressible turbulence, without regard to homogeneity or isotropy, and make contact with two equation turbulence models, especially the K-ε and K-L models, and with linear instability growth. In the end, we arrive at criteria for Adaptive Mesh Refinement in Finite Volume simulations.

  13. Internal roll compression system

    DOEpatents

    Anderson, Graydon E.

    1985-01-01

    This invention is a machine for squeezing water out of peat or other material of low tensile strength. The machine includes an inner roll eccentrically positioned inside a tubular outer roll so as to form a gradually increasing pinch at one point between them; as the rolls rotate, material placed between the rolls is wrung out as it passes through the pinch area.

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  15. Compression fractures of the back

    MedlinePlus

    Compression fractures of the back are broken vertebrae. Vertebrae are the bones of the spine. ... bone from elsewhere Tumors that start in the spine, such as multiple myeloma Having many fractures of ...

  16. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
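
    As background for the decoding-speed discussion above, a minimal bit-by-bit Huffman decoder simply walks a binary tree built from the code table; the table and bit string below are made-up examples, and the multibit and string-mapping optimizations the article addresses are not shown.

        # Minimal Huffman decoder: walk a binary tree built from a (symbol -> codeword) table.
        def build_tree(code_table):
            tree = {}
            for symbol, bits in code_table.items():
                node = tree
                for b in bits[:-1]:
                    node = node.setdefault(b, {})
                node[bits[-1]] = symbol              # leaves hold symbols
            return tree

        def decode(bitstring, tree):
            out, node = [], tree
            for b in bitstring:
                node = node[b]
                if not isinstance(node, dict):       # reached a leaf
                    out.append(node)
                    node = tree
            return out

        codes = {"a": "0", "b": "10", "c": "11"}     # hypothetical prefix-free code
        print(decode("0101100", build_tree(codes)))  # ['a', 'b', 'c', 'a', 'a']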

  17. [New aspects of compression therapy].

    PubMed

    Partsch, Bernhard; Partsch, Hugo

    2016-06-01

    In this review article the mechanisms of action of compression therapy are summarized and a survey of materials is presented, together with some practical advice on how and when these different devices should be applied. Some new experimental findings regarding the optimal dosage (= compression pressure) concerning an improvement of venous hemodynamics and a reduction of oedema are discussed. It is shown that stiff, non-yielding material applied with adequate pressure provides hemodynamically superior effects compared to elastic material, and that relatively low pressures reduce oedema. Compression over the calf is more important for increasing calf pump function than graduated compression. In patients with mixed arterial-venous ulcers and an ABPI over 0.6, inelastic bandages not exceeding a sub-bandage pressure of 40 mmHg may increase the arterial flow and improve venous pumping function. PMID:27259340

  18. Compressed gas fuel storage system

    SciTech Connect

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  19. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described, and numerical solutions of test problems are computed. A comparison based on convergence behavior, accuracy, and robustness is given.

  20. Shock compression of polyvinyl chloride

    NASA Astrophysics Data System (ADS)

    Neogi, Anupam; Mitra, Nilanjan

    2016-04-01

    This study presents shock compression simulation of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics. The manuscript also identifies the limits of applicability of classical molecular dynamics based shock compression simulation for PVC. The mechanism of bond dissociation under shock loading and its progression is demonstrated in this manuscript using the density functional theory based molecular dynamics simulations. The rate of dissociation of different bonds at different shock velocities is also presented in this manuscript.

  1. Negative compressibility observed in graphene containing resonant impurities

    SciTech Connect

    Chen, X. L.; Wang, L.; Li, W.; Wang, Y.; He, Y. H.; Wu, Z. F.; Han, Y.; Zhang, M. W.; Xiong, W.; Wang, N.

    2013-05-20

    We observed negative compressibility in monolayer graphene containing resonant impurities under different magnetic fields. Hydrogenous impurities were introduced into graphene by electron beam (e-beam) irradiation. Resonant states located in the energy region of ±0.04 eV around the charge neutrality point were probed in e-beam-irradiated graphene capacitors. Theoretical results based on tight-binding and Lifshitz models agreed well with experimental observations of graphene containing a low concentration of resonant impurities. The interaction between resonant states and Landau levels was detected by varying the applied magnetic field. The interaction mechanisms and enhancement of the negative compressibility in disordered graphene are discussed.

  2. Stress Relaxation for Granular Materials near Jamming under Cyclic Compression

    NASA Astrophysics Data System (ADS)

    Farhadi, Somayeh; Zhu, Alex Z.; Behringer, Robert P.

    2015-10-01

    We have explored isotropically jammed states of semi-2D granular materials through cyclic compression. In each compression cycle, systems of either identical ellipses or bidisperse disks transition between jammed and unjammed states. We determine the evolution of the average pressure P and structure through consecutive jammed states. We observe a transition point ϕm above which P persists over many cycles; below ϕm, P relaxes slowly. The relaxation time scale associated with P increases with packing fraction, while the relaxation time scale for collective particle motion remains constant. The collective motion of the ellipses is hindered compared to disks because of the rotational constraints on elliptical particles.

  3. Negative compressibility observed in graphene containing resonant impurities

    NASA Astrophysics Data System (ADS)

    Chen, X. L.; Wang, L.; Li, W.; Wang, Y.; He, Y. H.; Wu, Z. F.; Han, Y.; Zhang, M. W.; Xiong, W.; Wang, N.

    2013-05-01

    We observed negative compressibility in monolayer graphene containing resonant impurities under different magnetic fields. Hydrogenous impurities were introduced into graphene by electron beam (e-beam) irradiation. Resonant states located in the energy region of ±0.04 eV around the charge neutrality point were probed in e-beam-irradiated graphene capacitors. Theoretical results based on tight-binding and Lifshitz models agreed well with experimental observations of graphene containing a low concentration of resonant impurities. The interaction between resonant states and Landau levels was detected by varying the applied magnetic field. The interaction mechanisms and enhancement of the negative compressibility in disordered graphene are discussed.

  4. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  5. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  6. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  7. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  8. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  9. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  10. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    PubMed Central

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The testing problem has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744

  11. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
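
    The perceptually derived quantization-matrix formula itself is not given in this record; the sketch below only illustrates the mechanical step it feeds into, namely quantizing the 2-D DCT coefficients of an 8x8 block with a matrix of step sizes, as in JPEG-style coding. The matrix values and the test block are placeholders, not the NASA/IBM formula.

        import numpy as np

        def dct_matrix(n=8):
            """Orthonormal DCT-II basis matrix of size n x n."""
            k = np.arange(n)[:, None]
            x = np.arange(n)[None, :]
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
            c[0, :] = np.sqrt(1.0 / n)
            return c

        D = dct_matrix()
        block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float) - 128.0
        coeffs = D @ block @ D.T                      # forward 2-D DCT of one 8x8 block

        # Placeholder quantization matrix: coarser step sizes for higher frequencies.
        Q = 8.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))
        quantized = np.round(coeffs / Q)              # these integers are what gets entropy-coded
        reconstructed = D.T @ (quantized * Q) @ D     # dequantize and apply the inverse DCT
        print("max abs reconstruction error:", np.abs(reconstructed - block).max())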

  12. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    Medical images contain very large amounts of data, and compression is therefore essential for their storage and/or transmission. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In the approach an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as a lossy compressor, while for lossless compression Huffman coding is used. Image quality is evaluated by comparison with available standard compression techniques. PMID:17281110
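
    The paper pairs a neural-network vector quantizer with Huffman coding of the error image; neither component is reproduced here. The generic two-stage structure can be sketched with stand-in parts (coarse uniform quantization for the lossy stage and zlib for the lossless residual), which are assumptions for illustration only.

        import numpy as np, zlib

        def compress(image, step=16):
            lossy = (image // step) * step                 # stand-in lossy stage (would itself be coded compactly)
            residual = image - lossy                       # error image, small values, compresses well
            return lossy, zlib.compress(residual.tobytes())

        def decompress(lossy, residual_bytes, shape):
            residual = np.frombuffer(zlib.decompress(residual_bytes), dtype=np.uint8).reshape(shape)
            return lossy + residual                        # exact reconstruction

        img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(np.uint8)
        lossy, res = compress(img)
        print(np.array_equal(decompress(lossy, res, img.shape), img))   # True: strictly lossless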

  13. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    length and the code parameter. When this difference falls outside a fixed range, the code parameter is updated (increased or decreased). The Golomb code parameter is selected based on the average magnitude of recently encoded nonzero samples. The coding method requires no floating- point operations, and more readily adapts to local statistics than other methods. The method can also accommodate arbitrarily large input values and arbitrarily long runs of zeros. In practice, this means that changes in the dynamic range or size of the input data set would not require a change to the compressor. The algorithm has been tested in computational experiments on test images. A comparison with a previously developed algorithm that uses large code tables (generated via Huffman coding on training data) suggests that the data-compression effectiveness of the present algorithm is comparable to the best performance achievable by the previously developed algorithm.
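
    For context on the Golomb coding mentioned above, a Rice code (a Golomb code whose parameter is a power of two, m = 2^k) writes each nonnegative value as a unary quotient followed by a k-bit remainder. The adaptive parameter-update rule described in the article is not reproduced; the sketch below simply fixes k.

        def rice_encode(values, k):
            """Rice (Golomb, m = 2**k) code: unary quotient + k-bit remainder per value."""
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                code = "1" * q + "0"                 # unary quotient with terminating zero
                if k:
                    code += format(r, f"0{k}b")      # fixed-width remainder
                bits.append(code)
            return "".join(bits)

        def rice_decode(bitstring, k, count):
            out, i = [], 0
            for _ in range(count):
                q = 0
                while bitstring[i] == "1":
                    q, i = q + 1, i + 1
                i += 1                               # skip the terminating '0'
                r = int(bitstring[i:i + k], 2) if k else 0
                i += k
                out.append((q << k) | r)
            return out

        data = [3, 0, 7, 12, 1]
        enc = rice_encode(data, k=2)
        print(enc, rice_decode(enc, k=2, count=len(data)) == data)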

  14. Point Cloud Server (pcs) : Point Clouds In-Base Management and Processing

    NASA Astrophysics Data System (ADS)

    Cura, R.; Perret, J.; Paparoditis, N.

    2015-08-01

    In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data have become more available. They are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. This system is specifically designed to meet all the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and exporting, and integrated processing. Moreover, the system fully integrates metadata (like sensor position) and can conjointly use point clouds with images, vectors, and other point clouds. The system also offers in-base processing for easy prototyping and parallel processing, and it scales well. Lastly, the system is built on open source technologies and can therefore be easily extended and customised. We test the system with several billion points of point clouds from lidar (aerial and terrestrial) and stereo-vision. We demonstrate ~400 million pts/h loading speed, a user-transparent compression ratio greater than 2:1 to 4:1, filtering in the ~50 ms range, and output of about a million pts/s, along with classical processing such as object detection.

  15. An overview of semantic compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2010-08-01

    We live in such perceptually rich natural and manmade environments that detection and recognition of objects is mediated cerebrally by attentional filtering, in order to separate objects of interest from background clutter. In computer models of the human visual system, attentional filtering is often restricted to early processing, where areas of interest (AOIs) are delineated around anomalies of interest, then the pixels within each AOI's subtense are isolated for later processing. In contrast, the human visual system concurrently detects many targets at multiple levels (e.g., retinal center-surround filters, ganglion layer feature detectors, post-retinal spatial filtering, and cortical detection / filtering of features and objects, to name but a few processes). Intracranial attentional filtering appears to play multiple roles, including clutter filtration at all levels of processing - thus, we process individual retinal cell responses, early filtering response, and so forth, on up to the filtering of objects at high levels of semantic complexity. Computationally, image compression techniques have progressed from emphasizing pixels, to considering regions of pixels as foci of computational interest. In more recent research, object-based compression has been investigated with varying rate-distortion performance and computational efficiency. Codecs have been developed for a wide variety of applications, although the majority of compression and decompression transforms continue to concentrate on region- and pixel-based processing, in part because of computational convenience. It is interesting to note that a growing body of research has emphasized the detection and representation of small features in relationship to their surrounding environment, which has occasionally been called semantic compression. In this paper, we overview different types of semantic compression approaches, with particular interest in high-level compression algorithms. Various algorithms and

  16. Compression and Progressive Retrieval of Multi-Dimensional Sensor Data

    NASA Astrophysics Data System (ADS)

    Lorkowski, P.; Brinkhoff, T.

    2016-06-01

    Since the emergence of sensor data streams, increasing amounts of observations have to be transmitted, stored and retrieved. Performing these tasks at the granularity of single points would mean an inappropriate waste of resources. Thus, we propose a concept that partitions observations into data segments by spatial, temporal or other criteria (or a combination of them). We exploit the resulting proximity (according to the partitioning dimension(s)) within each data segment for compression and efficient data retrieval. While in principle allowing lossless compression, the approach can also be used for progressive transmission with increasing accuracy wherever incremental data transfer is reasonable. In a first feasibility study, we apply the proposed method to a dataset of ARGO drifting buoys covering large spatio-temporal regions of the world's oceans and compare the achieved compression ratio to other formats.
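
    A minimal sketch of the segment-based idea, under assumed choices (time-based partitioning, segment means as the coarse level, and exact residuals as the refinement level); the actual encoding formats of the paper are not reproduced.

        import numpy as np

        def encode_progressive(samples, segment_len=16):
            """Partition a 1-D observation stream into segments; level 0 carries the
            segment means, level 1 carries the residuals needed for exact reconstruction."""
            usable = len(samples) // segment_len * segment_len
            segs = samples[:usable].reshape(-1, segment_len)
            means = segs.mean(axis=1)
            residuals = segs - means[:, None]
            return means, residuals

        def decode(means, residuals=None, segment_len=16):
            coarse = np.repeat(means, segment_len)       # usable immediately, low accuracy
            return coarse if residuals is None else coarse + residuals.ravel()

        stream = np.cumsum(np.random.default_rng(3).normal(size=256))   # synthetic sensor readings
        means, res = encode_progressive(stream)
        coarse_rmse = np.sqrt(np.mean((decode(means) - stream) ** 2))
        print("coarse RMSE:", coarse_rmse, "exact after refinement:", np.allclose(decode(means, res), stream))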

  17. Compression Wave Velocity of Cylindrical Rock Specimens: Engineering Modulus Interpretation

    NASA Astrophysics Data System (ADS)

    Cha, Minsu; Cho, Gye-Chun

    2007-07-01

    In this study, we experimentally assess which elastic modulus (Young's modulus or the constraint modulus) is appropriate for application to the compression wave velocity of rock cores measured via an ultrasonic pulse technique and a point-source travel-time method. Experimental tests are performed at pulse frequencies between 50 kHz and 1 MHz, the ratio of diameter (D) to wavelength (λ) is between 0.6 and 25.6, and the specimen length is between 10 and 70 cm. It is found that compression wave velocities obtained from the two methods are constrained wave velocities, and thus the constraint modulus should be applied in the wave equation. Also, the effect of the frequency of the ultrasonic pulse, D/λ, and specimen length on compression wave velocity is negligible within the ranges explored in this study.

  18. The upper-branch stability of compressible boundary layer flows

    NASA Technical Reports Server (NTRS)

    Gajjar, J. S. B.; Cole, J. W.

    1989-01-01

    The upper-branch linear and nonlinear stability of compressible boundary layer flows is studied using the approach of Smith and Bodonyi (1982) for a similar incompressible problem. Both pressure gradient boundary layers and Blasius flow are considered, with and without heat transfer, and the neutral eigenrelations incorporating compressibility effects are obtained explicitly. The compressible nonlinear viscous critical layer equations are derived and solved numerically, and the results indicate some solutions with positive phase shift across the critical layer. Various limiting cases are investigated, including the case of much larger disturbance amplitudes, and this indicates the structure for the strongly nonlinear critical layer of the Benney-Bergeon (1969) type. It is also shown how a match with the inviscid neutral inflexional modes arising from the generalized inflexion point criterion is achieved.

  19. Compression of spectral meteorological imagery

    NASA Technical Reports Server (NTRS)

    Miettinen, Kristo

    1993-01-01

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.

  20. Flux Compression in HTS Films

    NASA Astrophysics Data System (ADS)

    Mikheenko, P.; Colclough, M. S.; Chakalov, R.; Kawano, K.; Muirhead, C. M.

    We report on an experimental investigation of the effect of flux compression in superconducting YBa2Cu3Ox (YBCO) films and YBCO/CMR (colossal magnetoresistive) multilayers. Flux compression produces a positive magnetic moment (m) upon cooling in a field from above to below the critical temperature. We found the effect of compression in all measured films and multilayers. In accordance with theoretical calculations, m is proportional to the applied magnetic field. The amplitude of the effect depends on the cooling rate, which suggests inhomogeneous cooling as its origin. The positive moment is always very small, a fraction of a percent of the ideal diamagnetic response. A CMR layer in contact with the HTS decreases the amplitude of the effect. The flux compression depends only weakly on sample size but is sensitive to its form and topology. The positive magnetic moment does not appear in bulk samples at low cooling rates. Our results show that the main features of the flux compression are very different from those of the paramagnetic Meissner effect observed in bulk high-temperature superconductors and Nb disks.

  1. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  2. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  3. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  4. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  5. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped...

  7. Flux Compression Magnetic Nozzle

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Schafer, Charles (Technical Monitor)

    2001-01-01

    In pulsed fusion propulsion schemes in which the fusion energy creates a radially expanding plasma, a magnetic nozzle is required to redirect the radially diverging flow of the expanding fusion plasma into a rearward axial flow, thereby producing a forward axial impulse to the vehicle. In a highly electrically conducting plasma, the presence of a magnetic field B in the plasma creates a pressure B^2/(2μ) in the plasma, the magnetic pressure. A gradient in the magnetic pressure can be used to decelerate the plasma traveling in the direction of increasing magnetic field, or to accelerate a plasma from rest in the direction of decreasing magnetic pressure. In principle, ignoring dissipative processes, it is possible to design magnetic configurations to produce an 'elastic' deflection of a plasma beam. In particular, it is conceivable that, by an appropriate arrangement of a set of coils, a good approximation to a parabolic 'magnetic mirror' may be formed, such that a beam of charged particles emanating from the focal point of the parabolic mirror would be reflected by the mirror to travel axially away from the mirror. The degree to which this may be accomplished depends on the degree of control one has over the flux surface of the magnetic field, which changes as a result of its interaction with a moving plasma.

  8. Lossless compression of projection data from photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2016-03-01

    With many attractive attributes, photon counting detectors with many energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck problem. The higher resolution of these detectors and the need for faster acquisition additionally contribute to this issue. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, by utilizing the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms including a head phantom, compression of 2.3:1-2.4:1 was achieved. The proposed predictor using zero, three, and four gradient contexts was compared to JPEG-LS and the ideal predictor (noiseless projection data). Among our proposed predictors, three-gradient context is preferred with a compression ratio from Golomb coding 7% higher than JPEG-LS and only 3% lower than the ideal predictor. In encoder efficiency, the Golomb code with the proposed three-gradient contexts has higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization using quantization boundaries that limit the ratio of quantization error variance to quantum noise variance. Applying our proposed predictor with three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 but inserted a 2.1% standard deviation of error compared to that of quantum noise in reconstructed images. From the initial
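
    The exact neighbour weights, context thresholds, and energy-bin coupling of the proposed predictor cannot be recovered from this abstract; the sketch below only shows the generic structure of such a scheme: predict each sample from causal neighbours, quantize a local gradient into a context, and collect per-context residual streams for entropy coding. All constants here are assumptions.

        import numpy as np

        def predict_residuals(frame, n_contexts=4):
            """Causal prediction on a 2-D projection frame with a simple gradient context."""
            residuals = [[] for _ in range(n_contexts)]
            pred = np.zeros_like(frame, dtype=float)
            for r in range(1, frame.shape[0]):
                for c in range(1, frame.shape[1]):
                    w, n, nw = frame[r, c - 1], frame[r - 1, c], frame[r - 1, c - 1]
                    pred[r, c] = 0.5 * w + 0.5 * n           # assumed neighbour weights
                    grad = abs(int(w) - int(nw)) + abs(int(n) - int(nw))
                    ctx = min(grad // 8, n_contexts - 1)     # assumed gradient quantization bins
                    residuals[ctx].append(frame[r, c] - pred[r, c])
            return residuals                                  # one residual stream per context

        frame = np.random.default_rng(4).integers(0, 1024, (32, 32))
        streams = predict_residuals(frame)
        print([len(s) for s in streams])                      # residual counts per context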

  9. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
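
    The patented algorithm's block sizes and error control are not described in this record; numpy's Chebyshev utilities are enough to sketch the basic idea of approximating a block of serial data with a truncated Chebyshev expansion and keeping only the coefficients. The block length and degree below are arbitrary.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress_block(samples, degree=12):
            """Fit a Chebyshev series to one block of serial data; the coefficients
            are the compressed representation (degree+1 numbers instead of len(samples))."""
            x = np.linspace(-1.0, 1.0, len(samples))
            return C.chebfit(x, samples, degree)

        def decompress_block(coeffs, n):
            return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

        t = np.linspace(0.0, 1.0, 256)
        block = np.sin(2 * np.pi * 3 * t) + 0.1 * t           # smooth synthetic telemetry
        coeffs = compress_block(block)
        print("compression ratio ~", len(block) / len(coeffs),
              "max abs error:", np.max(np.abs(decompress_block(coeffs, len(block)) - block)))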

  10. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  11. Measurement of compressed breast thickness by optical stereoscopic photogrammetry

    SciTech Connect

    Tyson, Albert H.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2009-02-15

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  12. Stress relaxation in vanadium under shock and shockless dynamic compression

    SciTech Connect

    Kanel, G. I.; Razorenov, S. V.; Garkushin, G. V.; Savinykh, A. S.; Zaretsky, E. B.

    2015-07-28

    Evolutions of elastic-plastic waves have been recorded in three series of plate impact experiments with annealed vanadium samples under conditions of shockless and combined ramp and shock dynamic compression. The shaping of incident wave profiles was realized using intermediate base plates made of different silicate glasses through which the compression waves were introduced into the samples. Measurements of the free surface velocity histories revealed an apparent growth of the Hugoniot elastic limit with decreasing average rate of compression. The growth was explained by “freezing” of the elastic precursor decay in the area of interaction of the incident and reflected waves. The obtained data show that the current value of the Hugoniot elastic limit and the plastic strain rate are associated with the rate of the elastic precursor decay rather than with the local rate of compression. The study has revealed the contributions of dislocation multiplication in elastic waves. It has been shown that, independently of the compression history, the material arrives at the minimum point between the elastic and plastic waves with the same density of mobile dislocations.

  13. Compressive residual strength of graphite/epoxy laminates after impact

    NASA Technical Reports Server (NTRS)

    Guy, Teresa A.; Lagace, Paul A.

    1992-01-01

    The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a (+ or - 45/0)(sub 2S) configuration. Three different impactor masses were used at various velocities and the resultant damage measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open hole strength for a hole of the same diameter as the impactor. Increases in velocity beyond the point of minimum strength cause a difference in the damage produced and cause a resultant increase in the compressive residual strength which asymptotes to the open hole strength value. Furthermore, the results show that this minimum compressive residual strength value is independent of the impactor mass used and is only dependent upon the damage present in the impacted specimen which is the same for the three impactor mass cases. A full 3-D representation of the damage is obtained through the various techniques. Only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state-of-the-art in predictive analysis capabilities shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.

  14. Simulating Ramp Compression of Diamond

    NASA Astrophysics Data System (ADS)

    Godwal, B. K.; Gonzàlez-Cataldo, F. J.; Jeanloz, R.

    2014-12-01

    We model ramp compression, shock-free dynamic loading, intended to generate a well-defined equation of state that achieves higher densities and lower temperatures than the corresponding shock Hugoniot. Ramp loading ideally approaches isentropic compression for a fluid sample, so is useful for simulating the states deep inside convecting planets. Our model explicitly evaluates the deviation of ramp from "quasi-isentropic" compression. Motivated by recent ramp-compression experiments to 5 TPa (50 Mbar), we calculate the room-temperature isotherm of diamond using first-principles density functional theory and molecular dynamics, from which we derive a principal isentrope and Hugoniot by way of the Mie-Grüneisen formulation and the Hugoniot conservation relations. We simulate ramp compression by imposing a uniaxial strain that then relaxes to an isotropic state, evaluating the change in internal energy and stress components as the sample relaxes toward isotropic strain at constant volume; temperature is well defined for the resulting hydrostatic state. Finally, we evaluate multiple shock- and ramp-loading steps to compare with single-step loading to a given final compression. Temperatures calculated for single-step ramp compression are less than Hugoniot temperatures only above 500 GPa, the two being close to each other at lower pressures. We obtain temperatures of 5095 K and 6815 K for single-step ramp loading to 600 and 800 GPa, for example, which compares well with values of ~5100 K and ~6300 K estimated from previous experiments [PRL,102, 075503, 2009]. At 800 GPa, diamond is calculated to have a temperature of 500 K along the isentrope; 900 K under multi-shock compression (asymptotic result after 8-10 steps); and 3400 K under 3-step ramp loading (200-400-800 GPa). Asymptotic multi-step shock and ramp loading are indistinguishable from the isentrope, within present uncertainties. Our simulations quantify the manner in which current experiments can simulate the
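
    For reference, the Mie-Grüneisen relation and the Rankine-Hugoniot energy condition invoked above take their standard textbook forms (the specific reference curves and Grüneisen model used by the authors are not given in this record):

        P(V,E) = P_{\mathrm{ref}}(V) + \frac{\gamma(V)}{V}\,\bigl[E - E_{\mathrm{ref}}(V)\bigr],
        \qquad
        E_H - E_0 = \tfrac{1}{2}\,(P_H + P_0)\,(V_0 - V_H)

    where P_ref(V) and E_ref(V) are the pressure and internal energy along a reference curve (here the computed isotherm or isentrope), γ(V) is the Grüneisen parameter, and the subscripts H and 0 denote the shocked and initial states.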

  15. GPU-accelerated compressive holography.

    PubMed

    Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2016-04-18

    In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation. PMID:27137282
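
    FISTA adds a momentum step to the basic iterative shrinkage-thresholding algorithm (ISTA); the plain ISTA iteration for the l1-regularized problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 is short enough to sketch below. The GPU data-parallel execution and the measurement-matrix structure exploited in the paper are not shown, and the problem sizes are made up.

        import numpy as np

        def ista(A, b, lam, n_iter=500):
            """Iterative shrinkage-thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - step * (A.T @ (A @ x - b))       # gradient step on the smooth term
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
            return x

        rng = np.random.default_rng(5)
        A = rng.standard_normal((60, 120)) / np.sqrt(60)
        x_true = np.zeros(120)
        x_true[rng.choice(120, 6, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true, lam=0.05)
        print("true support:     ", sorted(np.flatnonzero(x_true)))
        print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 0.1)))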

  16. Analyzing Ramp Compression Wave Experiments

    NASA Astrophysics Data System (ADS)

    Hayes, D. B.

    2007-12-01

    Isentropic compression of a solid to 100's of GPa by a ramped, planar compression wave allows measurement of material properties at high strain and at modest temperature. Introduction of a measurement plane disturbs the flow, requiring special analysis techniques. If the measurement interface is windowed, the unsteady nature of the wave in the window requires special treatment. When the flow is hyperbolic the equations of motion can be integrated backward in space in the sample to a region undisturbed by the interface interactions, fully accounting for the untoward interactions. For more complex materials like hysteretic elastic/plastic solids or phase changing material, hybrid analysis techniques are required.

  17. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering useable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  18. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
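
    Double delta coding, as mentioned above, stores second differences of the picture elements, which are typically small for correlated imagery and therefore cheap to entropy-code; a minimal 1-D encode/decode pair is sketched below (the background-skipping and extension-code details of the paper are omitted).

        import numpy as np

        def double_delta_encode(line):
            d1 = np.diff(line, prepend=0)                 # first differences (keeps line[0])
            return np.diff(d1, prepend=0)                 # second differences, typically small

        def double_delta_decode(dd):
            return np.cumsum(np.cumsum(dd))               # invert both differencing passes

        row = np.array([10, 12, 15, 19, 24, 30, 37], dtype=np.int64)
        enc = double_delta_encode(row)
        print(enc, np.array_equal(double_delta_decode(enc), row))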

  19. Compressing the Inert Doublet Model

    DOE PAGESBeta

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. In conclusion, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  20. Compressing the Inert Doublet Model

    SciTech Connect

    Blinov, Nikita; Morrissey, David E.; de la Puente, Alejandro

    2015-10-29

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. We found that this stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. Furthermore, we derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  1. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and thus is suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  2. Finite scale equations for compressible fluid flow

    SciTech Connect

    Margolin, Len G

    2008-01-01

    Finite-scale equations (FSE) describe the evolution of finite volumes of fluid over time. We discuss the FSE for a one-dimensional compressible fluid, whose every point is governed by the Navier-Stokes equations. The FSE contain new momentum and internal energy transport terms. These are similar to terms added in numerical simulation for high-speed flows (e.g. artificial viscosity) and for turbulent flows (e.g. subgrid scale models). These similarities suggest that the FSE may provide new insight as a basis for computational fluid dynamics. Our analysis of the FS continuity equation leads to a physical interpretation of the new transport terms, and indicates the need to carefully distinguish between volume-averaged and mass-averaged velocities in numerical simulation. We make preliminary connections to the other recent work reformulating Navier-Stokes equations.

  3. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  4. Trajectory NG: portable, compressed, general molecular dynamics trajectories.

    PubMed

    Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David

    2011-10-01

    We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied. PMID:21267752
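
    The article's precision control, block-sorting, and Huffman stages are not reproduced here; the core observation (temporal differences of fixed-point coordinates are much smaller than the coordinates themselves and compress better) can be sketched as follows, with the quantization step and the use of zlib as stand-in assumptions.

        import numpy as np, zlib

        def compress_frames(frames, precision=1e-3):
            """frames: (n_frames, n_atoms, 3) float coordinates.
            Quantize to fixed point, difference in time, then entropy-code with zlib."""
            q = np.round(frames / precision).astype(np.int32)
            deltas = np.diff(q, axis=0, prepend=np.zeros_like(q[:1]))   # first frame stored as-is
            return zlib.compress(deltas.tobytes()), q.shape

        def decompress_frames(blob, shape, precision=1e-3):
            deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
            return np.cumsum(deltas, axis=0) * precision

        traj = np.cumsum(np.random.default_rng(6).normal(scale=0.01, size=(100, 50, 3)), axis=0)
        blob, shape = compress_frames(traj)
        print("compressed bytes:", len(blob),
              "max error:", np.max(np.abs(decompress_frames(blob, shape) - traj)))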

  5. Melting of compressed iron by monitoring atomic dynamics

    NASA Astrophysics Data System (ADS)

    Jackson, Jennifer M.; Sturhahn, Wolfgang; Lerche, Michael; Zhao, Jiyong; Toellner, Thomas S.; Alp, E. Ercan; Sinogeikin, Stanislav V.; Bass, Jay D.; Murphy, Caitlin A.; Wicks, June K.

    2013-01-01

    We present a novel method for detecting the solid-liquid phase boundary of compressed iron at high temperatures using synchrotron Mössbauer spectroscopy (SMS). Our approach is unique because the dynamics of the iron atoms are monitored. This process is described by the Lamb-Mössbauer factor, which is related to the mean-square displacement of the iron atoms. Focused synchrotron radiation with 1 meV bandwidth passes through a laser-heated 57Fe sample inside a diamond-anvil cell, and the characteristic SMS time signature vanishes when melting occurs. At our highest compression measurement and considering thermal pressure, we find the melting point of iron to be TM=3025±115 K at P=82±5 GPa. When compared with previously reported melting points for iron using static compression methods with different criteria for melting, our melting trend defines a steeper positive slope as a function of pressure. The obtained melting temperatures represent a significant step toward a reliable melting curve of iron at Earth's core conditions. For other terrestrial planets possessing cores with liquid portions rich in metallic iron, such as Mercury and Mars, the higher melting temperatures for compressed iron may imply warmer internal temperatures.

  6. Astronomical context coder for image compression

    NASA Astrophysics Data System (ADS)

    Pata, Petr; Schindler, Jaromir

    2015-10-01

    Recent lossless still image compression formats are powerful tools for compression of all kinds of common images (pictures, text, schemes, etc.). Generally, the performance of a compression algorithm depends on its ability to anticipate the image function of the processed image. In other words, for a compression algorithm to be successful, it has to take full advantage of the coded image's properties. Astronomical data form a special class of images; among general image properties they also have some specific characteristics that are unique. If a new coder is able to correctly use the knowledge of these special properties, this should lead to superior performance on this specific class of images, at least in terms of compression ratio. In this work, a novel lossless astronomical image data compression method is presented. The achievable compression ratio of the new coder is compared to the theoretical lossless compression limit and also to recent compression standards from astronomy and general multimedia.

  7. Compression fractures of the back

    MedlinePlus

    ... Meirhaeghe J, et al. Efficacy and safety of balloon kyphoplasty compared with non-surgical care for vertebral compression fracture (FREE): a randomised controlled trial. Lancet . 2009;373(9668):1016-24. PMID: 19246088 www.ncbi.nlm.nih.gov/pubmed/19246088 .

  8. A programmable image compression system

    NASA Technical Reports Server (NTRS)

    Farrelle, Paul M.

    1989-01-01

    A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.

  9. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...

  10. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  11. Hyperspectral imaging using compressed sensing

    NASA Astrophysics Data System (ADS)

    Ramirez I., Gabriel Eduardo; Manian, Vidya B.

    2012-06-01

    Compressed sensing (CS) has attracted a lot of attention in recent years as a promising signal processing technique that exploits a signal's sparsity to reduce its size. It allows for simple compression that does not require a lot of additional computational power, and would allow physical implementation at the sensor using spatial light multiplexers such as the Texas Instruments (TI) digital micro-mirror device (DMD). The DMD can be used as a random measurement matrix: reflecting the image off the DMD is equivalent to taking an inner product between the image's individual pixels and the measurement matrix. CS, however, is asymmetrical, meaning that the signal's recovery or reconstruction from the measurements does require a higher level of computation. This makes the prospect of working with the compressed version of the signal in implementations such as detection or classification much more efficient. If an initial analysis shows nothing of interest, the signal need not be reconstructed. Many hyper-spectral image applications are precisely focused on these areas, and would greatly benefit from a compression technique like CS that could help minimize the light sensor down to a single pixel, lowering costs associated with the cameras while reducing the large amounts of data generated by all the bands. The present paper shows an implementation of CS using a single-pixel hyper-spectral sensor and compares the reconstructed images to those obtained through the use of a regular sensor.
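
    As a rough sketch of the measurement model behind a single-pixel camera, the snippet below uses a random Gaussian matrix in place of the DMD's binary mirror patterns and a basic iterative soft-thresholding (ISTA) solver for the sparse reconstruction; the sizes, sparsity, and regularization weight are assumed example values, and this is not the reconstruction method used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 256, 96, 8                  # signal length, measurements, sparsity (example sizes)

        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        # Each row plays the role of one measurement pattern; y collects the single-pixel readings.
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)
        y = Phi @ x_true

        def ista(Phi, y, lam=0.01, n_iter=3000):
            """Iterative soft-thresholding for min 0.5*||Phi x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                g = x - (Phi.T @ (Phi @ x - y)) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x

        x_hat = ista(Phi, y)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error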

  12. Prelude to compressed baryonic matter

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    Why study compressed baryonic matter, or more generally strongly interacting matter at high densities and temperatures? Most obviously, because it's an important piece of Nature. The whole universe, in the early moments of the big bang, was filled with the stuff. Today, highly compressed baryonic matter occurs in neutron stars and during crucial moments in the development of supernovae. Also, working to understand compressed baryonic matter gives us new perspectives on ordinary baryonic matter, i.e. the matter in atomic nuclei. But perhaps the best answer is a variation on the one George Mallory gave, when asked why he sought to scale Mount Everest: Because, as a prominent feature in the landscape of physics, it's there. Compressed baryonic matter is a material we can produce in novel, challenging experiments that probe new extremes of temperature and density. On the theoretical side, it is a mathematically well-defined domain with a wealth of novel, challenging problems, as well as wide-ranging connections. Its challenges have already inspired a lot of very clever work, and revealed some wonderful surprises, as documented in this volume.

  13. Culture: Copying, Compression, and Conventionality

    ERIC Educational Resources Information Center

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…

  14. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in absence of gravitation, also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  15. Perceptually lossy compression of documents

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-06-01

    The main cost of owning a facsimile machine consists of the telephone charges for the communications; thus short transmission times are a key feature for facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compression method for raster files in these applications is the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while maintaining the introduced artifacts at the threshold of perceptual detectability. Unfortunately the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that, while producing perceptually lossy data, does not impair the reading performance of users. Combined with a text sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.

  16. Volatile Emissions from Compressed Tissue

    PubMed Central

    Dini, Francesca; Capuano, Rosamaria; Strand, Tillan; Ek, Anna-Christina; Lindgren, Margareta; Paolesse, Roberto; Di Natale, Corrado; Lundström, Ingemar

    2013-01-01

    Since almost every fifth patient treated in hospital care develops pressure ulcers, early identification of risk is important. A non-invasive method for the elucidation of endogenous biomarkers related to pressure ulcers could be an excellent tool for this purpose. We therefore found it of interest to determine if there is a difference in the emissions of volatiles from compressed and uncompressed tissue. The ultimate goal is to find a non-invasive method to obtain an early warning for the risk of developing pressure ulcers for bed-ridden persons. Chemical analysis of the emissions, collected in compresses, was made with gas chromatography-mass spectrometry and with a chemical sensor array, the so-called electronic nose. It was found that the emissions from healthy and hospitalized persons differed significantly irrespective of the site. Within each group there was a clear difference between the compressed and uncompressed site. Peaks that could be definitively deemed markers of the compression were, however, not identified. Nonetheless, different compounds connected to the application of local mechanical pressure were found. The results obtained with GC-MS reveal the complexity of VOC composition, thus an array of non-selective chemical sensors seems to be a suitable choice for the analysis of skin emission from compressed tissues; it may represent a practical instrument for bedside diagnostics. Results show that the adopted electronic noses are likely sensitive to the total amount of the emission rather than to its composition. The development of a gas sensor-based device then requires the design of sensor receptors adequate to detect the VOC bouquet typical of pressure. This preliminary experiment evidences the necessity of studies where each given person is followed for a long time in a ward in order to detect the onset of specific VOC pattern changes signalling the occurrence of ulcers. PMID:23874929

  17. Two algorithms for compressing noise like signals

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Akopian, David

    2005-05-01

    Compression is a technique that is used to encode data so that the data needs less storage/memory space. Compression of random data is vital in cases where we need to preserve data that has low redundancy and whose power spectrum is close to that of noise. In the case of noisy signals that are used in various data-hiding schemes, the data has low redundancy and a low energy spectrum. Therefore, upon compressing with lossy compression algorithms, the low energy spectrum might get lost. Since the LSB plane data has low redundancy, lossless compression algorithms like run-length, Huffman, and arithmetic coding are ineffective in providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for compressing noisy signals. In this paper, we introduce two new compression techniques that compress random, noise-like data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media using pseudo-noise signals. For simulation, we have made comparisons between our methods and existing compression techniques like run-length coding, which show that run-length coding cannot compress random data whereas the proposed algorithms can. Furthermore, the proposed algorithms can be extended to all kinds of random data used in various applications.

  18. Software documentation for compression-machine cavity control

    SciTech Connect

    Floersch, R.H.

    1981-04-01

    A new system design using closed loop control on the hydraulic system of compression transfer presses used to make filled elastomer parts will result in improved accuracy and repeatability of speed and pressure control during critical forming stages before part cure. The new design uses a microprocessor to supply set points and timing functions to the control system. Presented are the hardware and software architecture and objectives for the microprocessor portion of the control system.

  19. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    SciTech Connect

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are: limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which are causing reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause for this large disparity is due to installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby, causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  20. The Critical Point Facility (CPF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Critical Point Facility (CPF) is an ESA multiuser facility designed for microgravity research onboard Spacelab. It has been conceived and built to offer investigators opportunities to conduct research on critical point phenomena in microgravity. This facility provides the high precision and stability temperature standards required in this field of research. It has been primarily designed for the purpose of optical investigations of transparent fluids. During a Spacelab mission, the CPF automatically processes several thermostats sequentially, each thermostat corresponding to an experiment. The CPF is now integrated in Spacelab at Kennedy Space Center, in preparation for the International Microgravity Lab. mission. The CPF was designed to submit transparent fluids to an adequate, user defined thermal scenario, and to monitor their behavior by using thermal and optical means. Because they are strongly affected by gravity, a good understanding of critical phenomena in fluids can only be gained in low gravity conditions. Fluids at the critical point become compressed under their own weight. The role played by gravity in the formation of interfaces between distinct phases is not clearly understood.

  1. Measurement and control for mechanical compressive stress

    NASA Astrophysics Data System (ADS)

    Li, Qing; Ye, Guang; Pan, Lan; Wu, Xiushan

    2001-12-01

    At present, an indirect method is applied to measuring and controlling mechanical compressive stress: the rotating torque of a screw is measured and controlled with a torque transducer while the screw is turned. Because the friction coefficient between each screw cap and washer, and in each screw thread, is different, the compressive stress of each screw may differ when the machine is assembled. Therefore, accurate measurement and control of mechanical compressive stress is realized by measuring the compressive stress directly. The paper presents a comparative study of compressive stress and rotating torque. The structure and working principle of a special washer-type transducer are discussed in detail. A dedicated instrument works with the washer-type transducer to measure and control mechanical compressive stress. A control strategy based on the rate of change of compressive stress is used to realize accurate control of mechanical compressive stress.
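
    The gap between indirect torque control and direct force measurement can be illustrated with the standard short-form torque-tension relation T = K*F*d (nut factor K, nominal diameter d). This is a generic textbook relation with made-up numbers, not the paper's model; it simply shows how strongly the preload inferred from torque depends on the friction-dominated nut factor.

        def preload_from_torque(torque_nm, nut_factor, diameter_m):
            """Estimate bolt preload F from applied torque using T = K * F * d."""
            return torque_nm / (nut_factor * diameter_m)

        torque = 40.0      # N*m applied by the wrench (example value)
        d = 0.010          # 10 mm nominal screw diameter (example value)
        for K in (0.10, 0.20, 0.30):   # plausible spread of nut factors due to friction
            print(f"K={K:.2f}: preload ~ {preload_from_torque(torque, K, d) / 1e3:.1f} kN")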

  2. Infraspinatus muscle atrophy from suprascapular nerve compression.

    PubMed

    Cordova, Christopher B; Owens, Brett D

    2014-02-01

    Muscle weakness without pain may signal a nerve compression injury. Because these injuries should be identified and treated early to prevent permanent muscle weakness and atrophy, providers should consider suprascapular nerve compression in patients with shoulder muscle weakness. PMID:24463748

  3. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
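
    A toy version of the joint clustering/compression idea, with scikit-learn's KMeans and zlib as stand-ins for the CCA's clustering and source-encoding stages: pixels are clustered in spectral space, the cluster centres act as the extracted features, and the per-pixel label map ("feature map") is entropy-coded. The synthetic 4-band data and the cluster count are made-up example values.

        import numpy as np
        import zlib
        from sklearn.cluster import KMeans

        def cluster_compress(pixels, n_clusters=16):
            """Cluster multispectral pixels and entropy-code the label map."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
            labels = km.labels_.astype(np.uint8)          # the per-pixel feature map
            return km.cluster_centers_, zlib.compress(labels.tobytes(), level=9)

        rng = np.random.default_rng(0)
        # Synthetic 4-band image: three spectral classes plus noise (a stand-in for LANDSAT data).
        classes = 10.0 * rng.normal(size=(3, 4))
        img = classes[rng.integers(0, 3, size=4096)] + rng.normal(scale=0.5, size=(4096, 4))

        centers, blob = cluster_compress(img)
        print(len(blob), "B for the coded feature map vs", img.astype(np.float32).nbytes, "B raw")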

  4. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.
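
    A minimal sketch of the denoise-then-compress pipeline, with a single-tree DWT (PyWavelets), soft thresholding, and zlib standing in for the dual-tree DWT and Huffman coder used in the paper; the wavelet, threshold, and quantization step are assumed example values.

        import numpy as np
        import zlib
        import pywt

        def denoise_then_compress(image, wavelet="db2", level=2, thresh=0.1, step=0.01):
            """Soft-threshold the wavelet detail coefficients, then quantize and deflate."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
                for detail in coeffs[1:]
            ]
            arr, _ = pywt.coeffs_to_array(denoised)
            q = np.round(arr / step).astype(np.int16)
            return zlib.compress(q.tobytes(), level=9)

        rng = np.random.default_rng(0)
        clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
        noisy = clean + rng.normal(scale=0.05, size=clean.shape)
        blob = denoise_then_compress(noisy)
        print(len(blob), "bytes compressed vs", noisy.astype(np.float32).nbytes, "bytes raw")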

  5. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios way beyond state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware. PMID:24524158
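
    The core idea of referential compression can be sketched in a few lines: encode the target as copy operations against the reference plus literal insertions for the differences. The greedy matcher below (difflib) and the toy sequences are only illustrative; FRESCO itself uses k-mer indexing and is orders of magnitude faster.

        import difflib

        def referential_compress(reference, target):
            """Encode target as copy/insert operations against a reference sequence."""
            ops, pos = [], 0
            matcher = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
            for ref_start, tgt_start, length in matcher.get_matching_blocks():
                if tgt_start > pos:                       # literal bases absent from the reference
                    ops.append(("insert", target[pos:tgt_start]))
                if length:
                    ops.append(("copy", ref_start, length))
                pos = tgt_start + length
            return ops

        def referential_decompress(reference, ops):
            return "".join(reference[op[1]:op[1] + op[2]] if op[0] == "copy" else op[1]
                           for op in ops)

        reference = "ACGT" * 500
        target = reference[:800] + "TTGACA" + reference[800:]   # one small insertion variant
        ops = referential_compress(reference, target)
        assert referential_decompress(reference, ops) == target
        print(len(ops), "operations instead of", len(target), "bases")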

  6. Subpicosecond compression experiments at Los Alamos National Laboratory

    SciTech Connect

    Carlsten, B.E.; Russell, S.J.; Kinross-Wright, J.M.

    1995-09-01

    The authors report on recent experiments using a magnetic chicane compressor at 8 MeV. Electron bunches at both low (0.1 nC) and high (1 nC) charges were compressed from 20 ps to less than 1 ps (FWHM). A transverse deflecting rf cavity was used to measure the bunch length at low charge; the bunch length at high charge was inferred from an induced energy spread of the beam. The longitudinal centrifugal-space charge force is calculated using a point-to-point numerical simulation and is shown not to influence the energy-spread measurement.

  7. 29 CFR 1926.803 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 8 2010-07-01 2010-07-01 false Compressed air. 1926.803 Section 1926.803 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Underground Construction, Caissons, Cofferdams and Compressed Air § 1926.803 Compressed...

  8. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  9. General-Purpose Compression for Efficient Retrieval.

    ERIC Educational Resources Information Center

    Cannane, Adam; Williams, Hugh E.

    2001-01-01

    Discusses compression of databases that reduces space requirements and retrieval times; considers compression of documents in text databases based on semistatic modeling with words; and proposes a scheme for general purpose compression that can be applied to all types of data stored in large collections. (Author/LRW)

  10. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  11. Growing concern following compression mammography.

    PubMed

    van Netten, Johannes Pieter; Hoption Cann, Stephen; Thornton, Ian; Finegan, Rory

    2016-01-01

    A patient without clinical symptoms had a mammogram in October 2008. The procedure caused intense persistent pain, swelling and development of a haematoma following mediolateral left breast compression. Three months later, a 9×11 cm mass developed within the same region. Core biopsies showed a necrotizing high-grade ductal carcinoma, with a high mitotic index. Owing to its extensive size, the patient began chemotherapy followed by trastuzumab and later radiotherapy to obtain clear margins for a subsequent mastectomy. The mastectomy in October 2009 revealed an inflammatory carcinoma, with 2 of 3 nodes infiltrated by the tumour. The stage IIIC tumour, oestrogen and progesterone receptor negative, was highly HER2 positive. A recurrence led to further chemotherapy in February 2011. In July 2011, another recurrence was removed from the mastectomy scar. She died of progressive disease in 2012. In this article, we discuss the potential influence of compression on the natural history of the tumour. PMID:27581236

  12. Stability of compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Nayfeh, Ali H.

    1989-01-01

    The stability of compressible 2-D and 3-D boundary layers is reviewed. The stability of 2-D compressible flows differs from that of incompressible flows in two important features: There is more than one mode of instability contributing to the growth of disturbances in supersonic laminar boundary layers and the most unstable first mode wave is 3-D. Whereas viscosity has a destabilizing effect on incompressible flows, it is stabilizing for high supersonic Mach numbers. Whereas cooling stabilizes first mode waves, it destabilizes second mode waves. However, second mode waves can be stabilized by suction and favorable pressure gradients. The influence of the nonparallelism on the spatial growth rate of disturbances is evaluated. The growth rate depends on the flow variable as well as the distance from the body. Floquet theory is used to investigate the subharmonic secondary instability.

  13. Compressed sensing based video multicast

    NASA Astrophysics Data System (ADS)

    Schenkel, Markus B.; Luo, Chong; Frossard, Pascal; Wu, Feng

    2010-07-01

    We propose a new scheme for wireless video multicast based on compressed sensing. It has the property of graceful degradation and, unlike systems adhering to traditional separate coding, it does not suffer from a cliff effect. Compressed sensing is applied to generate measurements of equal importance from a video such that a receiver with a better channel will naturally have more information at hand to reconstruct the content without penalizing others. We experimentally compare different random matrices at the encoder side in terms of their performance for video transmission. We further investigate how properties of natural images can be exploited to improve the reconstruction performance by transmitting a small amount of side information. We also propose a way of exploiting inter-frame correlation by extending only the decoder. Finally, we compare our results in simulation with a different scheme targeting the same problem and find competitive results for some channel configurations.

  14. Using autoencoders for mammogram compression.

    PubMed

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually of big sizes, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that the autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method. PMID:20703586
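
    A minimal patch-based autoencoder in the spirit of the approach described above (a PyTorch sketch, not the authors' network): small patches rather than whole images are fed to a fully connected autoencoder trained with MSE, one of the two quality measures mentioned. The patch size, layer widths, and random stand-in training patches are assumed example values.

        import torch
        from torch import nn

        patch = 16                                    # assumed patch size (pixels)
        model = nn.Sequential(                        # plain autoencoder, no RBM pre-training
            nn.Linear(patch * patch, 64), nn.ReLU(),
            nn.Linear(64, 16),                        # 16-value code per 256-pixel patch
            nn.Linear(16, 64), nn.ReLU(),
            nn.Linear(64, patch * patch), nn.Sigmoid(),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Stand-in training data: random patches in [0, 1]; real use would extract
        # overlapping patches from the mammograms instead.
        patches = torch.rand(2048, patch * patch)

        for epoch in range(20):
            opt.zero_grad()
            loss = loss_fn(model(patches), patches)
            loss.backward()
            opt.step()
        print(f"final reconstruction MSE: {loss.item():.4f}")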

  15. Lithological Uncertainty Expressed by Normalized Compression Distance

    NASA Astrophysics Data System (ADS)

    Jatnieks, J.; Saks, T.; Delina, A.; Popovs, K.

    2012-04-01

    prediction by partial matching (PPM), used for computing the NCD metric, is highly dependent on context. We assign unique symbols for aggregate lithology types and serialize the borehole logs into text strings, where the string length represents a normalized borehole depth. This encoding ensures that lithology types as well as the depth and sequence of strata are comparable in a form most native to the universal data compression software that calculates the pairwise NCD dissimilarity matrix. The NCD results can be used for generalization of the Quaternary structure using spatial clustering followed by a Voronoi tessellation using boreholes as generator points. After dissolving cluster membership identifiers of the borehole Voronoi polygons in a GIS environment, regions representing similar lithological structure can be visualized. The exact number of regions and their homogeneity depends on parameters of the clustering solution. This study is supported by the European Social Fund project No. 2009/0212/1DP/1.1.1.2.0/09/APIA/VIAA/060 Keywords: geological uncertainty, lithological uncertainty, generalization, information distance, normalized compression distance, data compression
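
    The normalized compression distance referred to in the title is the standard quantity NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. The sketch below uses bz2 in place of the PPM compressor mentioned above, and the serialized borehole logs are invented examples, so the numbers are only illustrative.

        import bz2

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance with bz2 standing in for a PPM coder."""
            cx, cy, cxy = (len(bz2.compress(s)) for s in (x, y, x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Hypothetical serialized borehole logs: one symbol per depth interval.
        log_a = b"SSSSCCCCGGGGTTTT" * 8                           # sand/clay/gravel/till
        log_b = b"SSSSCCCCGGGGTTTT" * 7 + b"SSSSSSSSCCCCGGGG"     # similar structure
        log_c = bytes(range(32, 160))                             # unrelated content

        print(ncd(log_a, log_b), ncd(log_a, log_c))   # the second distance should be larger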

  16. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops AGENCY... Laboratory, in conjunction with the Hydrogen Storage team of the EERE Fuel Cell Technologies Program, will be hosting two days of workshops on compressed and cryo-compressed hydrogen storage in the Washington,...

  17. Turbulence modeling for compressible flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1977-01-01

    Material prepared for a course on Applications and Fundamentals of Turbulence given at the University of Tennessee Space Institute, January 10 and 11, 1977, is presented. A complete concept of turbulence modeling is described, and examples of progress for its use in computational aerodynamics are given. Modeling concepts, experiments, and computations using the concepts are reviewed in a manner that provides an up-to-date statement on the status of this problem for compressible flows.

  18. A Simplified Adiabatic Compression Apparatus

    NASA Astrophysics Data System (ADS)

    Moloney, Michael J.; McGarvey, Albert P.

    2007-10-01

    Mottmann described an excellent way to measure the ratio of specific heats for air (γ = Cp/Cv) by suddenly compressing a plastic 2-liter bottle. His arrangement can be simplified so that no valves are involved and only a single connection needs to be made. This is done by adapting the plastic cap of a 2-liter plastic bottle so it connects directly to a Vernier Software Gas Pressure Sensor and the LabPro interface.
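
    The data reduction behind such an experiment can be sketched under the ideal-gas, reversible-adiabatic assumption P*V^gamma = const; the pressure and volume readings below are made-up example values, not measurements from this apparatus.

        import math

        def gamma_from_adiabat(p1, v1, p2, v2):
            """Solve P1*V1**g = P2*V2**g for g (ideal-gas, reversible-adiabatic model)."""
            return math.log(p2 / p1) / math.log(v1 / v2)

        # Hypothetical readings: absolute pressure in kPa, bottle volume in litres.
        p1, v1 = 101.3, 2.000     # before the sudden squeeze
        p2, v2 = 111.0, 1.875     # just after, before heat leaks back out
        print(f"gamma ~ {gamma_from_adiabat(p1, v1, p2, v2):.2f}")   # about 1.4 expected for air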

  19. Direct simulation of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Erlebacher, Gordon; Hussaini, M. Y.

    1989-01-01

    Several direct simulations of 3-D homogeneous, compressible turbulence are presented with emphasis on the differences with incompressible turbulent simulations. A fully spectral collocation algorithm, periodic in all directions, coupled with a 3rd-order Runge-Kutta time discretization scheme, is sufficient to produce well-resolved flows at Taylor Reynolds numbers below 40 on grids of 128x128x128. A Helmholtz decomposition of velocity is useful to differentiate between the purely compressible effects and those effects solely due to vorticity production. In the context of homogeneous flows, this decomposition is unique. Time-dependent energy and dissipation spectra of the compressible and solenoidal velocity components indicate the presence of localized small scale structures. These structures are strongly a function of the initial conditions. Researchers concentrate on a regime characterized by very small fluctuating Mach numbers Ma (on the order of 0.03) and density and temperature fluctuations much greater than Ma^2. This leads to a state in which more than 70 percent of the kinetic energy is contained in the so-called compressible component of the velocity. Furthermore, these conditions lead to the formation of curved weak shocks (or shocklets) which travel at approximately the sound speed across the physical domain. Various terms in the vorticity and divergence of velocity production equations are plotted versus time to gain some understanding of how small scales are actually formed. Possible links with Burgers turbulence are examined. To better visualize the dynamics of the flow, new graphic visualization techniques have been developed. The 3-D structure of the shocks is visualized with the help of volume rendering algorithms developed in-house. A combination of stereographic projection and animation greatly increases the number of visual cues necessary to properly interpret the complex flow.
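
    The Helmholtz split used above can be sketched compactly in Fourier space for a periodic field (2D here for brevity, whereas the simulations are 3D): the compressible part is the projection of the velocity onto the wavevector, and the solenoidal part is the remainder. The test field is a made-up example.

        import numpy as np

        def helmholtz_split(u, v):
            """Split a periodic 2D velocity field into compressible and solenoidal parts."""
            n = u.shape[0]
            k = np.fft.fftfreq(n) * n
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                               # avoid 0/0 for the mean mode
            uh, vh = np.fft.fft2(u), np.fft.fft2(v)
            div = kx * uh + ky * vh                      # k . u_hat (the divergence is i times this)
            uc = np.real(np.fft.ifft2(kx * div / k2))    # compressible (curl-free) part
            vc = np.real(np.fft.ifft2(ky * div / k2))
            return (uc, vc), (u - uc, v - vc)            # solenoidal part is the remainder

        n = 64
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        u = -np.sin(X) + np.cos(Y)     # curl-free component (-sin x, 0) plus solenoidal (cos y, 0)
        v = np.zeros_like(u)
        (uc, vc), (us, vs) = helmholtz_split(u, v)
        print(np.abs(uc + np.sin(X)).max() < 1e-8)       # recovers the curl-free component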

  20. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2003-01-01

    Various artificial compressibility methods for calculating three-dimensional, steady and unsteady, laminar and turbulent, incompressible Navier-Stokes equations are compared in this work. Each method is described in detail along with appropriate physical and numerical boundary conditions. Analysis of well-posedness and numerical solutions to test problems for each method are provided. A comparison based on convergence behavior, accuracy, stability and robustness is used to establish the relative positive and negative characteristics of each method.

  1. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
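
    A rough sketch of activity-based bit allocation along one image line; the sum-of-absolute-differences estimator, segment count, and per-line budget are assumed example values, since the Tech Brief does not spell out the exact estimator or coder.

        import numpy as np

        def allocate_bits(line, n_segments=8, budget=512, min_bits=8):
            """Split a line into segments and share a per-line bit budget by activity."""
            segments = np.array_split(np.asarray(line, dtype=float), n_segments)
            activity = np.array([np.abs(np.diff(s)).sum() + 1e-9 for s in segments])
            bits = np.maximum(min_bits, np.floor(activity / activity.sum() * budget)).astype(int)
            return activity, bits

        rng = np.random.default_rng(0)
        line = np.concatenate([np.full(128, 80.0),            # flat region: low activity
                               80 + 40 * rng.random(128)])    # busy region: high activity
        activity, bits = allocate_bits(line)
        print(bits)    # busy segments receive most of the per-line budget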

  2. SNLL materials testing compression facility

    SciTech Connect

    Kawahara, W.A.; Brandon, S.L.; Korellis, J.S.

    1986-04-01

    This report explains software enhancements and fixture modifications which expand the capabilities of a servo-hydraulic test system to include static computer-controlled ''constant true strain rate'' compression testing on cylindrical specimens. True strains in excess of -1.0 are accessible. Special software features include schemes to correct for system compliance and the ability to perform strain-rate changes; all software for test control and data acquisition/reduction is documented.
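
    Constant true strain rate in compression implies an exponentially decaying specimen height, h(t) = h0*exp(rate*t) with rate < 0; the sketch below generates such a command schedule with made-up values and omits the compliance correction mentioned above.

        import numpy as np

        def crosshead_schedule(h0_mm, true_strain_rate, duration_s, dt=0.01):
            """Specimen-height command for constant true strain rate (rate < 0 in compression)."""
            t = np.arange(0.0, duration_s, dt)
            return t, h0_mm * np.exp(true_strain_rate * t)

        t, h = crosshead_schedule(h0_mm=12.7, true_strain_rate=-0.01, duration_s=100.0)
        print(np.log(h[-1] / 12.7))   # true strain reaches about -1.0 after 100 s at -0.01 1/s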

  3. Antiproton compression and radial measurements

    SciTech Connect

    Andresen, G. B.; Bowe, P. D.; Hangst, J. S.; Bertsche, W.; Butler, E.; Charlton, M.; Humphries, A. J.; Jenkins, M. J.; Joergensen, L. V.; Madsen, N.; Werf, D. P. van der; Bray, C. C.; Chapman, S.; Fajans, J.; Povilus, A.; Wurtele, J. S.; Cesar, C. L.; Lambo, R.; Silveira, D. M.; Fujiwara, M. C.

    2008-08-08

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  4. Compressed air energy storage system

    SciTech Connect

    Ahrens, F.W.; Kartsounes, G.T.

    1981-07-28

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  5. Transposed compression piston and cylinder

    SciTech Connect

    Ross, M.A.

    1992-04-14

    This patent describes an improved V-type two piston Stirling engine wherein the improvement is a transposed compression piston slidably engaged in a mating cylinder. It comprises: a cylindrical body which is pivotally connected to a connecting rod at a pivot axis which is relatively nearer the outer end of the cylindrical body and has a seal relatively nearer the inner end of the cylindrical body.

  6. Compressed air energy storage system

    DOEpatents

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  7. Compressed air energy storage system

    DOEpatents

    Ahrens, F.W.; Kartsounes, G.T.

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  8. Viscosity of nitrogen near the critical point

    NASA Technical Reports Server (NTRS)

    Basu, R. S.; Sengers, J. V.

    1978-01-01

    The formulation of a quantitative description of the critical enhancement in the shear viscosity of fluids near the gas-liquid critical point is considered. The critical point is a point of marginal thermodynamic stability. In the vicinity of the critical point, large-scale density fluctuations are present in the fluid. The critical enhancement of the transport properties is related to the correlation length. The correlation length is related to the compressibility, thus providing consistency between the equations for the transport properties and the equation of state in the critical region. The critical region parameters for nitrogen are presented in a table. It is found that the critical viscosity enhancement observed by Zozulya and Blagoi (1974) for nitrogen is consistent with current theoretical predictions.

  9. Compressibility Effects in Aeronautical Engineering

    NASA Technical Reports Server (NTRS)

    Stack, John

    1941-01-01

    Compressible-flow research, while a relatively new field in aeronautics, is very old, dating back almost to the development of the first firearm. Over the last hundred years, researches have been conducted in the ballistics field, but these results have been of practically no use in aeronautical engineering because the phenomena that have been studied have been the more or less steady supersonic condition of flow. Some work that has been done in connection with steam turbines, particularly nozzle studies, has been of value. In general, however, understanding of compressible-flow phenomena has been very incomplete and permitted no real basis for the solution of aeronautical engineering problems in which the flow is likely to be unsteady because regions of both subsonic and supersonic speeds may occur. In the early phases of the development of the airplane, speeds were so low that the effects of compressibility could be justifiably ignored. During the last war and immediately after, however, propellers exhibited losses in efficiency as the tip speeds approached the speed of sound, and the first experiments of an aeronautical nature were therefore conducted with propellers. Results of these experiments indicated serious losses of efficiency, but aeronautical engineers were not seriously concerned at the time because it was generally possible to design propellers with quite low tip speeds. With the development of new engines having increased power and rotational speeds, however, the problems became of increasing importance.

  10. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  11. LASER COMPRESSION OF NANOCRYSTALLINE METALS

    SciTech Connect

    Meyers, M. A.; Jarmakani, H. N.; Bringa, E. M.; Earhart, P.; Remington, B. A.; Vo, N. Q.; Wang, Y. M.

    2009-12-28

    Shock compression in nanocrystalline nickel is simulated over a range of pressures (10-80 GPa) and compared with experimental results. Laser compression carried out at Omega and Janus yields new information on the deformation mechanisms of nanocrystalline Ni. Although conventional deformation does not produce hardening, the extreme regime imparted by laser compression generates an increase in hardness, attributed to the residual dislocations observed in the structure by TEM. An analytical model is applied to predict the critical pressure for the onset of twinning in nanocrystalline nickel. The slip-twinning transition pressure is shifted from 20 GPa, for polycrystalline Ni, to 80 GPa, for Ni with a grain size of 10 nm. Contributions to the net strain from the different mechanisms of plastic deformation (partials, perfect dislocations, twinning, and grain boundary shear) were quantified in the nanocrystalline samples through MD calculations. The effect of release, a phenomenon often neglected in MD simulations, on dislocation behavior was established. A large fraction of the dislocations generated at the front are annihilated.

  12. Adiabatic Compression of Oxygen: Real Fluid Temperatures

    NASA Technical Reports Server (NTRS)

    Barragan, Michelle; Wilson, D. Bruce; Stoltzfus, Joel M.

    2000-01-01

    The adiabatic compression of oxygen has been identified as an ignition source for systems operating in enriched oxygen atmospheres. Current practice is to evaluate the temperature rise on compression by treating oxygen as an ideal gas with constant heat capacity. This paper establishes the appropriate thermodynamic analysis for the common occurrence of adiabatic compression of oxygen and in the process defines a satisfactory equation of state (EOS) for oxygen. It uses that EOS to model adiabatic compression as isentropic compression and calculates final temperatures for this system using current approaches for comparison.
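
    For reference, the ideal-gas, constant-heat-capacity estimate described above as current practice is T2 = T1*(P2/P1)^((gamma-1)/gamma); the sketch below evaluates it for an example compression from 0.1 MPa to 10 MPa (made-up values), which a real-fluid equation of state would revise.

        def ideal_gas_final_temperature(t1_k, p1, p2, gamma=1.40):
            """Ideal-gas, constant-cp isentropic compression temperature."""
            return t1_k * (p2 / p1) ** ((gamma - 1.0) / gamma)

        t2 = ideal_gas_final_temperature(t1_k=300.0, p1=0.1e6, p2=10.0e6)
        print(f"ideal-gas estimate: {t2:.0f} K")   # roughly 1100 K for this pressure ratio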

  13. Future Prospects of Low Compression Ignition Engines

    NASA Astrophysics Data System (ADS)

    Azim, M. A.

    2014-01-01

    This study presents a review and analysis of the effects of compression ratio and inlet air preheating on engine performance in order to assess the future prospects of low compression ignition engines. Regulation of the inlet air preheating allows some control over the combustion process in compression ignition engines. Literature shows that low compression ratio and inlet air preheating are more beneficial to internal combustion engines than detrimental. Even the disadvantages due to low compression ratio are outweighed by the advantages due to inlet air preheating and vice versa.

  14. COMPRESSION WAVES AND PHASE PLOTS: SIMULATIONS

    SciTech Connect

    Orlikowski, D; Minich, R

    2011-08-01

    Compression wave analysis started nearly 50 years ago with Fowles. Coperthwaite and Williams gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general. One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, a steady wave (shock), and when the compression wave is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  15. Efficacy of compression of different capacitance beds in the amelioration of orthostatic hypotension

    NASA Technical Reports Server (NTRS)

    Denq, J. C.; Opfer-Gehrking, T. L.; Giuliani, M.; Felten, J.; Convertino, V. A.; Low, P. A.

    1997-01-01

    Orthostatic hypotension (OH) is the most disabling and serious manifestation of adrenergic failure, occurring in the autonomic neuropathies, pure autonomic failure (PAF) and multiple system atrophy (MSA). No specific treatment is currently available for most etiologies of OH. A reduction in venous capacity, secondary to some physical counter maneuvers (e.g., squatting or leg crossing), or the use of compressive garments, can ameliorate OH. However, there is little information on the differential efficacy, or the mechanisms of improvement, engendered by compression of specific capacitance beds. We therefore evaluated the efficacy of compression of specific compartments (calves, thighs, low abdomen, calves and thighs, and all compartments combined), using a modified antigravity suit, on the end-points of orthostatic blood pressure, and symptoms of orthostatic intolerance. Fourteen patients (PAF, n = 9; MSA, n = 3; diabetic autonomic neuropathy, n = 2; five males and nine females) with clinical OH were studied. The mean age was 62 years (range 31-78). The mean +/- SEM orthostatic systolic blood pressure when all compartments were compressed was 115.9 +/- 7.4 mmHg, significantly improved (p < 0.001) over the head-up tilt value without compression of 89.6 +/- 7.0 mmHg. The abdomen was the only single compartment whose compression significantly reduced OH (p < 0.005). There was a significant increase of peripheral resistance index (PRI) with compression of abdomen (p < 0.001) or all compartments (p < 0.001); end-diastolic index and cardiac index did not change. We conclude that denervation increases vascular capacity, and that venous compression improves OH by reducing this capacity and increasing PRI. Compression of all compartments is the most efficacious, followed by abdominal compression, whereas leg compression alone was less effective, presumably reflecting the large capacity of the abdomen relative to the legs.

  16. Spatial versus spectral compression ratio in compressive sensing of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    August, Yitzhak; Vachman, Chaim; Stern, Adrian

    2013-05-01

    Compressive hyperspectral imaging is based on the fact that hyperspectral data is highly redundant. However, there is no symmetry between the compressibility of the spatial and spectral domains, and that should be taken into account for optimal compressive hyperspectral imaging system design. Here we present a study of the influence of the ratio between the compression in the spatial and spectral domains on the performance of a 3D separable compressive hyperspectral imaging method we recently developed.

  17. Practicality of magnetic compression for plasma density control

    NASA Astrophysics Data System (ADS)

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-01

    Plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  18. Practicality of magnetic compression for plasma density control

    DOE PAGES Beta

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-16

    Here, plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  19. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC) which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low

  20. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  1. Image Compression in Signal-Dependent Noise

    NASA Astrophysics Data System (ADS)

    Shahnaz, Rubeena; Walkup, John F.; Krile, Thomas F.

    1999-09-01

    The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression, using JPEG (Joint Photographic Experts Group) standard image compression. For the improvement of compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches are employed for noise suppression. In one approach an estimator designed specifically for the SDN model is used. In an alternate approach, the noise is first transformed into signal-independent noise (SIN) and then an estimator designed for SIN is employed. The performances of these two schemes are compared. The compression results achieved for noiseless, noisy, and restored images are also presented.
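
    One common way to realize the second approach mentioned above, transforming signal-dependent noise into approximately signal-independent noise, is a variance-stabilizing transform such as the Anscombe transform for Poisson-like noise. The sketch below is a generic illustration of that idea, not the estimator used in the paper.

        import numpy as np

        # Anscombe transform: maps Poisson-like signal-dependent noise to
        # approximately unit-variance, signal-independent noise, after which a
        # SIN-designed estimator (or a standard codec) can be applied.

        def anscombe(x):
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            return (y / 2.0) ** 2 - 3.0 / 8.0   # simple algebraic inverse (biased for low counts)

        rng = np.random.default_rng(0)
        clean = rng.uniform(5, 50, size=(64, 64))      # synthetic "image"
        noisy = rng.poisson(clean).astype(float)        # signal-dependent noise

        stabilized = anscombe(noisy)                    # noise now roughly N(0, 1), signal-independent
        # ... denoise / compress `stabilized` with a SIN-based method here ...
        restored = inverse_anscombe(stabilized)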

  2. Influence of Tension-Compression Asymmetry on the Mechanical Behavior of AZ31B Magnesium Alloy Sheets in Bending

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Beeh, Elmar; Friedrich, Horst E.

    2016-03-01

    Magnesium alloys are promising materials for lightweight design in the automotive industry due to their high strength-to-mass ratio. This study investigates the influence of tension-compression asymmetry on the radius of curvature and energy absorption capacity of AZ31B-O magnesium alloy sheets in bending. The mechanical properties were characterized using tension, compression, and three-point bending tests. The material exhibits significant tension-compression asymmetry in terms of strength and strain hardening rate due to extension twinning in compression. The compressive yield strength is much lower than the tensile yield strength, while the strain hardening rate is much higher in compression. Furthermore, the tension-compression asymmetry in terms of r value (Lankford value) was also observed. The r value in tension is much higher than that in compression. The bending results indicate that the AZ31B-O sheet can outperform steel and aluminum sheets in terms of specific energy absorption in bending mainly due to its low density. In addition, the AZ31B-O sheet was deformed with a larger radius of curvature than the steel and aluminum sheets, which brings a benefit to energy absorption capacity. Finally, finite element simulation for three-point bending was performed using LS-DYNA and the results confirmed that the larger radius of curvature of a magnesium specimen is mainly attributed to the high strain hardening rate in compression.

  3. Floating Point Control Library

    2007-08-02

    Floating Point Control is a library that allows manipulation of floating-point unit exception masking functions to control exceptions in both the Streaming "Single Instruction, Multiple Data" Extension 2 (SSE2) unit and the floating point unit simultaneously. FPC also provides macros to set floating point rounding and precision control.
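
    The FPC library itself targets the x87 and SSE2 control registers from compiled code; as a loose, high-level analogue, the Python/NumPy snippet below shows the same notion of exception masking, i.e. choosing whether overflow, invalid-operation, divide-by-zero and underflow conditions are ignored, warned about, or raised. This is not the FPC API, only an illustration of the concept.

        import numpy as np

        # Exception masking analogue: numpy lets each floating-point condition be
        # ignored, turned into a warning, or raised as an exception.
        old = np.seterr(over='raise', invalid='warn', divide='ignore', under='ignore')
        try:
            np.array([1e308]) * 10.0          # overflow now raises FloatingPointError
        except FloatingPointError as exc:
            print("trapped:", exc)
        finally:
            np.seterr(**old)                  # restore the previous masks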

  4. Near-field analysis of a compressive supersonic ramp

    NASA Astrophysics Data System (ADS)

    Emanuel, George

    1982-07-01

    Steady, two-dimensional, inviscid, supersonic flow is analyzed for a compressive turn where the wall is contoured to provide a centered compression fan. The focal point of the compression is the origin of the usual (primary) oblique shock wave, a slipstream, and a secondary pressure disturbance. This disturbance can be an expansion, a weak solution shock, or a strong solution shock. In the vicinity of the focal point (the near field) there are seven possibilities, one of which is no real solution. For small wall turn angles, there is a unique near-field solution where the primary shock is the weak solution. In this case the secondary disturbance, whose strength is quite small, is either an expansion or a weak solution oblique shock wave. For larger turn angles, two near-field solutions are possible, and for still larger angles, none. At relatively large wall turn angles, where the usual oblique shock equations still provide an attached solution, the near-field equations do not have a solution when the Mach number is sufficiently large.
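
    The weak and strong oblique-shock branches referred to above follow from the standard theta-beta-Mach relation for a calorically perfect gas. The sketch below solves that textbook relation numerically for both branches; it illustrates the two-solution behavior but does not reproduce the paper's near-field analysis of the secondary disturbance.

        import numpy as np
        from scipy.optimize import brentq

        # Theta-beta-Mach relation: tan(theta) = 2 cot(beta) (M^2 sin^2(beta) - 1)
        #                                        / (M^2 (gamma + cos 2beta) + 2)
        def theta_from_beta(beta, M, gamma=1.4):
            return np.arctan(2.0 / np.tan(beta) * (M**2 * np.sin(beta)**2 - 1.0)
                             / (M**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

        def shock_angles(M, theta, gamma=1.4):
            """Return (weak, strong) shock angles beta [rad] for turn angle theta [rad]."""
            beta_min = np.arcsin(1.0 / M) + 1e-6          # Mach angle
            beta_max = np.pi / 2.0 - 1e-6
            betas = np.linspace(beta_min, beta_max, 2000)
            thetas = theta_from_beta(betas, M, gamma)
            i_max = np.argmax(thetas)                     # maximum-deflection point splits the branches
            if theta > thetas[i_max]:
                raise ValueError("detached shock: no attached solution")
            f = lambda b: theta_from_beta(b, M, gamma) - theta
            weak = brentq(f, beta_min, betas[i_max])
            strong = brentq(f, betas[i_max], beta_max)
            return weak, strong

        w, s = shock_angles(M=3.0, theta=np.radians(15.0))
        print("weak: %.1f deg, strong: %.1f deg" % (np.degrees(w), np.degrees(s)))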

  5. Energy Spectra in Weakly Compressible and Isothermal Turbulence

    NASA Astrophysics Data System (ADS)

    He, Guowei; Dong, Yufeng

    2014-11-01

    The universal scaling of energy spectra of velocity fluctuations is fundamentally important to understanding turbulent flows. For incompressible turbulence, the universal -5/3 scaling of energy spectra was originally proposed by Kolmogorov, based on dimensional analysis. This empirical result was further derived from the Navier-Stokes equations using two-point closure approaches. However, for compressible turbulence, dimensional analysis is difficult to conduct due to the nonlinear coupling of velocity, density and pressure. In this paper, we use a two-point closure approach, EDQNM, to derive the universal scaling of energy spectra for compressible and isothermal turbulence. In the EDQNM equations, the eddy-damping rates are determined by the recently developed swept-wave model for space-time correlations (Phys. Rev. E 88, 021001(R) (2013)). The leading term in the eddy-damping rates leads to the -7/3 scaling for dilatational energy spectra, while the sub-leading one leads to the -3 scaling. The former implies that dilatational components are dominated by acoustic-wave time scales; the latter implies that dilatational components are dominated by local straining time scales. Our DNS result appears to favor the -7/3 scaling. This study clarifies the possible scaling of compressible energy spectra in terms of space-time correlations.
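
    In practice, a spectral exponent such as -5/3 or -7/3 is extracted by computing an energy spectrum and fitting the log-log slope over an assumed inertial range. The sketch below does this for a synthetic one-dimensional signal with a built-in -5/3 spectrum; it only illustrates the fitting procedure, not the EDQNM calculation.

        import numpy as np

        # Build a synthetic signal with E(k) ~ k^(-5/3), then recover the exponent.
        rng = np.random.default_rng(1)
        N = 4096
        k = np.fft.rfftfreq(N, d=1.0 / N)[1:]                 # wavenumbers 1..N/2
        amp = k ** (-5.0 / 6.0)                               # |u_k| ~ k^(-5/6)  =>  E(k) ~ k^(-5/3)
        phases = rng.uniform(0, 2 * np.pi, size=k.size)
        u_hat = np.concatenate(([0.0], amp * np.exp(1j * phases)))
        u = np.fft.irfft(u_hat, n=N)

        E = np.abs(np.fft.rfft(u))[1:] ** 2                   # energy spectrum (up to a constant)
        inertial = (k > 5) & (k < 200)                        # assumed inertial range
        slope, _ = np.polyfit(np.log(k[inertial]), np.log(E[inertial]), 1)
        print(f"fitted spectral exponent: {slope:.2f}")       # close to -5/3 here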

  6. Direct comparisons of compressible magnetohydrodynamics and reduced magnetohydrodynamics turbulence

    NASA Astrophysics Data System (ADS)

    Dmitruk, Pablo; Matthaeus, William H.; Oughton, Sean

    2005-11-01

    Direct numerical simulations of low Mach number compressible three-dimensional magnetohydrodynamic (CMHD3D) turbulence in the presence of a strong mean magnetic field are compared with simulations of reduced magnetohydrodynamics (RMHD). Periodic boundary conditions in the three spatial coordinates are considered. Different sets of initial conditions are chosen to explore the applicability of RMHD and to study how close the solution remains to the full compressible MHD solution as both freely evolve in time. In a first set, the initial state is prepared to satisfy the conditions assumed in the derivation of RMHD, namely, a strong mean magnetic field and plane-polarized fluctuations, varying weakly along the mean magnetic field. In those circumstances, simulations show that RMHD and CMHD3D evolve almost indistinguishably from one another. When some of the conditions are relaxed the agreement worsens but RMHD remains fairly close to CMHD3D, especially when the mean magnetic field is large enough. Moreover, the well-known spectral anisotropy effect promotes the dynamical attainment of the conditions for RMHD applicability. Global quantities (mean energies, mean-square current, and vorticity) and energy spectra from the two solutions are compared and point-to-point separation estimations are computed. The specific results shown here give support to the use of RMHD as a valid approximation of compressible MHD with a mean magnetic field under certain but quite practical conditions.

  7. Compressive rendering: a rendering application of compressed sensing.

    PubMed

    Sen, Pradeep; Darabi, Soheil

    2011-04-01

    Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics, it has only been used so far to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. In addition, we also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity. PMID:21311092
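
    A minimal one-dimensional analogue of the reconstruction step is sketched below: only a subset of "pixels" is sampled, and a greedy solver estimates the transform coefficients of the full signal. For compactness the sketch uses a DCT basis and scikit-learn's orthogonal matching pursuit rather than the paper's wavelet basis and specific greedy algorithm.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        n, m, k = 256, 96, 8                                    # signal length, samples rendered, sparsity
        rng = np.random.default_rng(0)

        Psi = idct(np.eye(n), norm='ortho', axis=0)             # inverse-DCT basis (columns)
        alpha = np.zeros(n)
        alpha[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
        x = Psi @ alpha                                         # "final image", sparse in the DCT domain

        samples = rng.choice(n, size=m, replace=False)          # pixels actually rendered
        A = Psi[samples, :]                                     # effective sensing matrix
        y = x[samples]

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
        x_rec = Psi @ omp.coef_                                 # reconstruct the full signal
        print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))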

  8. Myofascial trigger point pain.

    PubMed

    Jaeger, Bernadette

    2013-01-01

    Myofascial trigger point pain is an extremely prevalent cause of persistent pain disorders in all parts of the body, not just the head, neck, and face. Features include deep aching pain in any structure, referred from focally tender points in taut bands of skeletal muscle (the trigger points). Diagnosis depends on accurate palpation with 2-4 kg/cm2 of pressure for 10 to 20 seconds over the suspected trigger point to allow the referred pain pattern to develop. In the head and neck region, cervical muscle trigger points (key trigger points) often incite and perpetuate trigger points (satellite trigger points) and referred pain from masticatory muscles. Management requires identification and control of as many perpetuating factors as possible (posture, body mechanics, psychological stress or depression, poor sleep or nutrition). Trigger point therapies such as spray and stretch or trigger point injections are best used as adjunctive therapy. PMID:24864393

  9. Computed Tomography Image Compressibility and Limitations of Compression Ratio-Based Guidelines.

    PubMed

    Pambrun, Jean-François; Noumeir, Rita

    2015-12-01

    Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to an increased high frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure. Therefore, guidelines based on compression ratio should ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality. We have studied the effect of value-of-interest transformation and found significant masking of artifacts when window is widened. PMID:25804842
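
    The point that compression ratio is a poor fidelity proxy can be illustrated directly: compressing two images at the same JPEG quality setting yields different ratios and different PSNR depending on their noise and frequency content. The sketch below uses Pillow and synthetic grayscale images; it is illustrative only and does not use CT data.

        import io
        import numpy as np
        from PIL import Image

        def jpeg_ratio_and_psnr(img_u8: np.ndarray, quality: int = 75):
            buf = io.BytesIO()
            Image.fromarray(img_u8).save(buf, format="JPEG", quality=quality)
            decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)
            mse = np.mean((decoded - img_u8.astype(np.float64)) ** 2)
            psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
            ratio = img_u8.nbytes / len(buf.getvalue())
            return ratio, psnr

        rng = np.random.default_rng(0)
        smooth = np.tile(np.linspace(0, 255, 512, dtype=np.uint8), (512, 1))          # low-frequency content
        noisy = np.clip(smooth + rng.normal(0, 25, smooth.shape), 0, 255).astype(np.uint8)

        for name, img in [("smooth", smooth), ("noisy", noisy)]:
            r, p = jpeg_ratio_and_psnr(img)
            print(f"{name}: compression ratio {r:.1f}:1, PSNR {p:.1f} dB")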

  10. Recognising metastatic spinal cord compression.

    PubMed

    Bowers, Ben

    2015-04-01

    Metastatic spinal cord compression (MSCC) is a potentially life changing oncological emergency. Neurological function and quality of life can be preserved if patients receive an early diagnosis and rapid access to acute interventions to prevent or reduce nerve damage. Symptoms include developing spinal pain, numbness or weakness in arms or legs, or unexplained changes in bladder and bowel function. Community nurses are well placed to pick up on the 'red flag' symptoms of MSCC and ensure patients access prompt, timely investigations to minimise damage. PMID:25839873

  11. Vapor Compression Distillation Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hutchens, Cindy F.

    2002-01-01

    One of the major requirements associated with operating the International Space Station is the transportation -- space shuttle and Russian Progress spacecraft launches - necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.

  12. Krylov methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
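
    The matrix-free Newton-Krylov idea mentioned above can be sketched in a few lines: GMRES never sees the Jacobian explicitly, only Jacobian-vector products approximated by finite differences of the nonlinear residual. The toy residual below stands in for a discretized flow solver and is purely illustrative.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(u):
            return u**3 + 2.0 * u - 1.0           # toy nonlinear residual, component-wise

        def jac_vec(u, v, eps=1e-7):
            return (F(u + eps * v) - F(u)) / eps  # finite-difference approximation of J(u) @ v

        n = 50
        u = np.zeros(n)                            # current Newton iterate
        for _ in range(10):                        # Newton iterations
            J = LinearOperator((n, n), matvec=lambda v: jac_vec(u, v))
            du, info = gmres(J, -F(u))             # solve J du = -F(u) matrix-free
            u = u + du
            if np.linalg.norm(F(u)) < 1e-10:
                break
        print("max residual:", np.abs(F(u)).max())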

  13. Limiting SUSY compressed spectra scenarios

    NASA Astrophysics Data System (ADS)

    Nelson, Andy; Tanedo, Philip; Whiteson, Daniel

    2016-06-01

    Typical searches for supersymmetry cannot test models in which the two lightest particles have a small ("compressed") mass splitting, due to the small momentum of the particles produced in the decay of the second-to-lightest particle. However, data sets with large missing transverse momentum (ETmiss) can generically search for invisible particle production and therefore provide constraints on such models. We apply data from the ATLAS monojet (jet+ETmiss ) and vector-boson-fusion (forward jets and ETmiss ) searches to such models. In all cases, experimental limits are at least five times weaker than theoretical predictions.

  14. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are appealing, but they occupy more storage space on our computers and consume more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied to networks, and image compression standards have been established. In this dissertation, some analysis of the DCT is presented. Firstly, the principle of the DCT is shown; because this technology is so widely used, it is essential to realizing image compression. Secondly, a deeper understanding of the DCT is developed by using Matlab to work through the process of image compression based on the DCT, together with an analysis of Huffman coding. Thirdly, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. DCT is certainly not the only algorithm for image compression; there will surely be more algorithms that yield compressed images of high quality, and image compression technology will be widely used in networks and communications in the future.
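
    A compact Python analogue of the Matlab workflow described above is sketched below: split the image into 8x8 blocks, apply a 2D DCT, discard the smallest coefficients and invert. Entropy coding of the surviving coefficients (e.g. Huffman) is omitted, and the block size and keep fraction are arbitrary choices for illustration.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_compress(img: np.ndarray, keep: float = 0.10, block: int = 8) -> np.ndarray:
            h, w = (s - s % block for s in img.shape)       # crop to whole blocks
            img = img[:h, :w].astype(np.float64)
            out = np.empty_like(img)
            for i in range(0, h, block):
                for j in range(0, w, block):
                    c = dctn(img[i:i+block, j:j+block], norm='ortho')
                    thresh = np.quantile(np.abs(c), 1.0 - keep)   # keep the largest `keep` fraction
                    c[np.abs(c) < thresh] = 0.0
                    out[i:i+block, j:j+block] = idctn(c, norm='ortho')
            return out

        img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
        rec = dct_compress(img, keep=0.25)
        print("RMSE:", np.sqrt(np.mean((rec - img) ** 2)))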

  15. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic as they are strictly nested within each other. We thus introduced a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then proposed an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures. PMID:26551155
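
    The baseline operation that the paper extends, merging exactly isomorphic subtrees of an unordered tree into a DAG, can be done with canonical subtree signatures, as in the sketch below. The further height compression of quasi-isomorphic, nested patterns described above is beyond this sketch.

        # Merge isomorphic subtrees of an unordered tree into DAG node classes.
        # Children are unordered, so child signatures are sorted before hashing.

        def compress_to_dag(children):
            """children: dict node -> list of child nodes; returns (class per node, class count)."""
            node_class = {}
            canon = {}                                   # canonical signature -> class id

            def visit(node):
                sig = tuple(sorted(visit(c) for c in children.get(node, [])))
                cls = canon.setdefault(sig, len(canon))  # nodes with identical subtree shape share a class
                node_class[node] = cls
                return cls

            roots = set(children) - {c for cs in children.values() for c in cs}
            for r in roots:
                visit(r)
            return node_class, len(canon)

        # Example: a root with two identical subtrees -> they map to one DAG node class.
        tree = {"r": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
        classes, n_unique = compress_to_dag(tree)
        print(classes, "unique subtree classes:", n_unique)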

  16. Influence of bicycle seat pressure on compression of the perineum: a MRI analysis.

    PubMed

    Bressel, Eadric; Reeve, Tracey; Parker, Dan; Cronin, John

    2007-01-01

    It is a common belief that bicycle seat pressure compresses neurovascular tissues in the perineum and may lead to perineal and penile pathologies in male cyclists. The purpose of this study was to examine the effect bicycle seat pressure has on compression of the perineal cavernous spaces, which house the penile neurovascular tissues. A second purpose was to identify where peak cavernous compression occurs in relation to a bicycle seat. Five males were assessed for compression of the corpus spongiosum and corpora cavernosa with and without bicycle seat pressure using MRI. Seat pressure was applied using a custom loading device designed to replicate seat pressure recorded during stationary bicycling. The distance between a horizontal midline of the seat and the point of peak cavernous space compression was made on sagittal plane images. Diameter measurements of the cavernous spaces at the point of peak compression were made on coronal plane images. Results revealed that peak cavernous space compression occurred below the pubic symphysis, 40.7(+/-11.4) mm anterior to the midline of the seat. Corpus spongiosum values in the unloaded condition were 148% greater than the loaded condition (p=0.008). Similarly, the left and right corpora cavernosa values for the unloaded condition were 252% and 232% greater, respectively, than the loaded condition (p=0.02-0.03). Cavernous spaces that house penile arteries and nerves were compressed maximally below the pubic symphysis. Because this location of peak compression was not different between subjects, it may be a universal impingement zone that limits blood flow and neural activity to and from the penis. This information can be used to optimize seat design and thus reduce perineal injuries. PMID:16423357

  17. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere. PMID:24606251

  18. Multiview video and depth compression for free-view navigation

    NASA Astrophysics Data System (ADS)

    Higuchi, Yuta; Tehrani, Mehrdad Panahpour; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    2012-03-01

    In this paper, we discuss a multiview video and depth coding system for multiview video applications such as 3DTV and Free Viewpoint Television (FTV) [1]. We target an appropriate multiview and depth compression method, and we then investigate the effect on free view synthesis quality of changing the transmission rates between the multiview and depth sequences. In the simulations, we employ MVC in parallel to compress the multiview video and depth sequences at different bitrates, and compare the virtual view sequences generated from the decoded data with the original video sequences taken from the same viewpoint. Our experimental results show that the bitrate of the multi-depth stream has less effect on the view synthesis quality than that of the multi-view stream.

  19. Hybrid thermal link-wise artificial compressibility method

    NASA Astrophysics Data System (ADS)

    Obrecht, Christian; Kuznik, Frédéric

    2015-10-01

    Thermal flow prediction is a subject of interest from scientific and engineering points of view. Our motivation is to develop an accurate, easy to implement and highly scalable method for convective flow simulation. To this end, we present an extension to the link-wise artificial compressibility method (LW-ACM) for thermal simulation of weakly compressible flows. The novel hybrid formulation uses second-order finite difference operators of the energy equation based on the same stencils as the LW-ACM. For validation purposes, the differentially heated cubic cavity was simulated. The simulations remained stable for Rayleigh numbers up to Ra = 10^8. The Nusselt numbers at isothermal walls and dynamic quantities are in good agreement with reference values from the literature. Our results show that the hybrid thermal LW-ACM is an effective and easy-to-use solution for solving convective flows.

  20. Compression creep of filamentary composites

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Tuttle, M. E.

    1988-01-01

    Axial and transverse strain fields induced in composite laminates subjected to compressive creep loading were compared for several types of laminate layups. Unidirectional graphite/epoxy as well as multi-directional graphite/epoxy and graphite/PEEK layups were studied. Specimens with and without holes were tested. The specimens were subjected to compressive creep loading for a 10-hour period. In-plane displacements were measured using moire interferometry. A computer based data reduction scheme was developed which reduces the whole-field displacement fields obtained using moire to whole-field strain contour maps. Only slight viscoelastic response was observed in matrix-dominated laminates, except for one test in which catastrophic specimen failure occurred after a 16-hour period. In this case the specimen response was a complex combination of both viscoelastic and fracture mechanisms. No viscoelastic effects were observed for fiber-dominated laminates over the 10-hour creep time used. The experimental results for specimens with holes were compared with results obtained using a finite-element analysis. The comparison between experiment and theory was generally good. Overall strain distributions were very well predicted. The finite element analysis typically predicted slightly higher strain values at the edge of the hole, and slightly lower strain values at positions removed from the hole, than were observed experimentally. It is hypothesized that these discrepancies are due to nonlinear material behavior at the hole edge, which were not accounted for during the finite-element analysis.

  1. Fusion in Magnetically Compressed Targets

    NASA Astrophysics Data System (ADS)

    Mokhov, V. N.

    2004-11-01

    A comparative analysis is presented of the positive and negative features of systems using magnetic compression of the thermonuclear fusion target (MAGO/MTF) aimed at solving the controlled thermonuclear fusion (CTF) problem. The niche for the MAGO/MTF system, among the other CTF systems, in the parameter space of the energy delivered to the target and its input time to the target, is shown. This approach was investigated at RFNC-VNIIEF for more than 15 years using the unique technique of applying explosive magnetic generators (EMG) as the energy source to preheat the fusion plasma and accelerate a liner to compress the preheated fusion plasma to the parameters required for ignition. EMG-based systems already produce fusion neutrons, and their relatively low cost and record energy yield enable full-scale experiments to study the possibility of reaching the ignition threshold without constructing expensive stationary installations. A short review of the milestone results on the road to solving the CTF problem in the MAGO/MTF system is given.

  2. Longwave infrared compressive hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia R.; Kirby, Michael; Cosofret, Bogdan R.

    2015-06-01

    Physical Sciences Inc. (PSI) is developing a longwave infrared (LWIR) compressive sensing hyperspectral imager (CS HSI) based on a single pixel architecture for standoff vapor phase plume detection. The sensor employs novel use of a high throughput stationary interferometer and a digital micromirror device (DMD) converted for LWIR operation in place of the traditional cooled LWIR focal plane array. The CS HSI represents a substantial cost reduction over the state of the art in LWIR HSI instruments. Radiometric improvements for using the DMD in the LWIR spectral range have been identified and implemented. In addition, CS measurement and sparsity bases specifically tailored to the CS HSI instrument and chemical plume imaging have been developed and validated using LWIR hyperspectral image streams of chemical plumes. These bases enable comparable statistics to detection based on uncompressed data. In this paper, we present a system model predicting the overall performance of the CS HSI system. Results from a breadboard build and test validating the system model are reported. In addition, the measurement and sparsity basis work demonstrating the plume detection on compressed hyperspectral images is presented.

  3. Fast spectrophotometry with compressive sensing

    NASA Astrophysics Data System (ADS)

    Starling, David; Storer, Ian

    2015-03-01

    Spectrophotometers and spectrometers have numerous applications in the physical sciences and engineering, resulting in a plethora of designs and requirements. A good spectrophotometer balances the need for high photometric precision, high spectral resolution, high durability and low cost. One way to address these design objectives is to take advantage of modern scanning and detection techniques. A common imaging method that has improved signal acquisition speed and sensitivity in limited signal scenarios is the single pixel camera. Such cameras utilize the sparsity of a signal to sample below the Nyquist rate via a process known as compressive sensing. Here, we show that a single pixel camera using compressive sensing algorithms and a digital micromirror device can replace the common scanning mechanisms found in virtually all spectrophotometers, providing a very low cost solution and improving data acquisition time. We evaluate this single pixel spectrophotometer by studying a variety of samples tested against commercial products. We conclude with an analysis of flame spectra and possible improvements for future designs.

  4. Low-bit-rate efficient compression for seismic data

    NASA Astrophysics Data System (ADS)

    Averbuch, Amir Z.; Meyer, Francois G.; Stroemberg, Jan-Olov; Coifman, Ronald R.; Vassiliou, Anthony A.

    2001-12-01

    The main drive behind the use of data compression for seismic data is the very large size of the data acquired. Some of the most recently acquired marine seismic data sets exceed 10 Tbytes, and seismic surveys are currently planned with volumes of around 120 Tbytes. Seismic data are quite different from the typical images used in image processing and multimedia applications: the dynamic range can exceed 100 dB in theory, the data are often highly oscillatory, the x and y directions carry different physical meanings, and a significant amount of coherent noise is often present. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform quantization/Huffman coding family of compression schemes, with a comparable level of residual noise; the goal is to achieve above 40 dB in the decompressed seismic data sets. Comparisons with other methods (old and new) are given in the full paper. The main conclusion is that the multidimensional adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. Special emphasis was given to achieving faster processing speed, which is another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.

  5. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography can not distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image

  6. Industrial Compressed Air System Energy Efficiency Guidebook.

    SciTech Connect

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  7. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.
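
    The underlying question, which color space to compress in, comes down to decorrelating the channels so that most of the energy lands in a luma-like channel. The sketch below converts RGB to NTSC YIQ with the standard approximate matrix; the CIELAB/CIELUV conversions favored by the study are the nonlinear analogues of this step and are not shown.

        import numpy as np

        # Standard (approximate) NTSC RGB -> YIQ matrix.
        RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                               [0.596, -0.274, -0.322],
                               [0.211, -0.523,  0.312]])

        def rgb_to_yiq(img):                       # img: H x W x 3, floats in [0, 1]
            return img @ RGB_TO_YIQ.T

        rng = np.random.default_rng(0)
        img = rng.random((32, 32, 3))
        yiq = rgb_to_yiq(img)
        # The Y channel usually carries most of the signal energy, so the I/Q
        # (chroma) channels can be quantized or subsampled more aggressively.
        print(yiq.reshape(-1, 3).var(axis=0))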

  8. Atomic effect algebras with compression bases

    SciTech Connect

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-15

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  9. Atomic effect algebras with compression bases

    NASA Astrophysics Data System (ADS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  10. Micromechanics of composite laminate compression failure

    NASA Technical Reports Server (NTRS)

    Guynn, E. Gail; Bradley, Walter L.

    1986-01-01

    The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.

  11. The topology and vorticity dynamics of a three-dimensional plane compressible wake

    NASA Technical Reports Server (NTRS)

    Chen, Jacqueline H.; Cantwell, Brian J.; Mansour, Nagi N.

    1989-01-01

    The three-dimensional aspects of transition in a low Mach number plane compressible wake are studied numerically. Comparisons are made between the topology of the velocity field and the vorticity dynamics of the flow based on results from direct numerical simulations of the full compressible Navier-Stokes equations. The velocity field is integrated to obtain instantaneous streamlines at different stages in the evolution. A generalized three-dimensional critical point theory is applied to classify the critical points of the velocity field.

  12. Technique for chest compressions in adult CPR

    PubMed Central

    2011-01-01

    Chest compressions have saved the lives of countless patients in cardiac arrest as they generate a small but critical amount of blood flow to the heart and brain. This is achieved by direct cardiac massage as well as a thoracic pump mechanism. In order to optimize blood flow excellent chest compression technique is critical. Thus, the quality of the delivered chest compressions is a pivotal determinant of successful resuscitation. If a patient is found unresponsive without a definite pulse or normal breathing then the responder should assume that this patient is in cardiac arrest, activate the emergency response system and immediately start chest compressions. Contra-indications to starting chest compressions include a valid Do Not Attempt Resuscitation Order. Optimal technique for adult chest compressions includes positioning the patient supine, and pushing hard and fast over the center of the chest with the outstretched arms perpendicular to the patient's chest. The rate should be at least 100 compressions per minute and any interruptions should be minimized to achieve a minimum of 60 actually delivered compressions per minute. Aggressive rotation of compressors prevents decline of chest compression quality due to fatigue. Chest compressions are terminated following return of spontaneous circulation. Unconscious patients with normal breathing are placed in the recovery position. If there is no return of spontaneous circulation, then the decision to terminate chest compressions is based on the clinical judgment that the patient's cardiac arrest is unresponsive to treatment. Finally, it is important that family and patients' loved ones who witness chest compressions be treated with consideration and sensitivity. PMID:22152601

  13. Compressed data for the movie industry

    NASA Astrophysics Data System (ADS)

    Tice, Bradley S.

    2013-12-01

    The paper will present a compression algorithm that will allow for both random and non-random sequential binary strings of data to be compressed for storage and transmission of media information. The compression system has direct applications to the storage and transmission of digital media such as movies, television, audio signals and other visual and auditory signals needed for engineering practicalities in such industries.

  14. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  15. Eccentric crank variable compression ratio mechanism

    DOEpatents

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  16. Cauda equina compression presenting as spontaneous priapism.

    PubMed Central

    Ravindran, M

    1979-01-01

    Disturbance of autonomic function is an unusual feature of compression of the cauda equina. A 61 year old man who had complete occlusion of the lumbar spinal canal with compression of the cauda equina from a large centrally prolapsed disc, had spontaneous priapism, precipitated by walking and relieved by resting. This symptom was comparable to claudication by compression of cauda equina. It subsided completely after surgical removal of a prolapsed L4-5 disc. Images PMID:438839

  17. Compression-ignition fuel properties of Fischer-Tropsch syncrude

    SciTech Connect

    Suppes, G.J.; Terry, J.G.; Burkhart, M.L.; Cupps, M.P.

    1998-05-01

    Fischer-Tropsch conversion of natural gas to liquid hydrocarbon fuel typically includes Fischer-Tropsch synthesis followed by refining (hydrocracking and distillation) of the syncrude into mostly diesel or kerosene with some naphtha (a feedstock for gasoline production). Refining is assumed necessary, possibly overlooking the exceptional fuel qualities of syncrude for more direct utilization as a compression-ignition (CI) fuel. This paper evaluates cetane number, viscosity, cloud-point, and pour-point properties of syncrude and blends of syncrude with blend stocks such as ethanol and diethyl ether. The results show that blends comprised primarily of syncrude are potentially good CI fuels, with pour-point temperature depression being the largest development obstacle. The resulting blends may provide a much-needed and affordable alternative CI fuel. Particularly good market opportunities exist with Energy Policy Act (EPACT) applications.

  18. Compressible Lagrangian hydrodynamics without Lagrangian cells

    NASA Astrophysics Data System (ADS)

    Clark, Robert A.

    The partial differential Eqs. [2.1, 2.2, and 2.3], along with the equation of state 2.4, which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇ · u to update p, u and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique, and improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum and energy. In fact, at any given time, the mass is not defined. We must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.

  19. Effect of compressibility on the annihilation process

    NASA Astrophysics Data System (ADS)

    Hnatich, M.; Honkonen, J.; Lučivjanský, T.

    2013-07-01

    Using the renormalization group in perturbation theory, we study the influence of a random velocity field on the kinetics of the single-species annihilation reaction at and below its critical dimension d_c = 2. The advecting velocity field is modeled by a Gaussian variable self-similar in space with a finite-radius time correlation (the Antonov-Kraichnan model). We take the effect of the compressibility of the velocity field into account and analyze the model near its critical dimension using a three-parameter expansion in ε, Δ, and η, where ε is the deviation from the Kolmogorov scaling, Δ is the deviation from the (critical) space dimension two, and η is the deviation from the parabolic dispersion law. Depending on the values of these exponents and the compressibility parameter α, the studied model can exhibit various asymptotic (long-time) regimes corresponding to infrared fixed points of the renormalization group. We summarize the possible regimes and calculate the decay rates for the mean particle number in the leading order of perturbation theory.

  20. Coherent radar imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Volz, Ryan; Mathews, John D.

    2015-12-01

    High-resolution radar images in the horizontal spatial domain generally require a large number of different baselines that usually come with considerable cost. In this paper, aspects of compressed sensing (CS) are introduced to coherent radar imaging. We propose a single CS-based formalism that enables the full three-dimensional (3-D)—range, Doppler frequency, and horizontal spatial (represented by the direction cosines) domain—imaging. This new method can not only reduce the system costs and decrease the needed number of baselines by enabling spatial sparse sampling but also achieve high resolution in the range, Doppler frequency, and horizontal space dimensions. Using an assumption of point targets, a 3-D radar signal model for imaging has been derived. By comparing numerical simulations with the fast Fourier transform and maximum entropy methods at different signal-to-noise ratios, we demonstrate that the CS method can provide better performance in resolution and detectability given comparatively few available measurements relative to the number required by Nyquist-Shannon sampling criterion. These techniques are being applied to radar meteor observations.

  1. Human Identification Using Compressed ECG Signals.

    PubMed

    Camara, Carmen; Peris-Lopez, Pedro; Tapiador, Juan E

    2015-11-01

    As a result of the increased demand for improved lifestyles and the growing number of senior citizens over the age of 65, new home care services are in demand. Simultaneously, the medical sector is increasingly becoming the new target of cybercriminals due to the potential value of users' medical information. The use of biometrics seems an effective tool as a deterrent for many such attacks. In this paper, we propose the use of electrocardiograms (ECGs) for the identification of individuals. For instance, for a telecare service, a user could be authenticated using the information extracted from her ECG signal. The majority of ECG-based biometric systems extract information (fiducial features) from the characteristic points of an ECG wave. In this article, we propose the use of non-fiducial features via the Hadamard Transform (HT). We show how the use of highly compressed signals (only 24 coefficients of the HT) is enough to unequivocally identify individuals with high performance (classification accuracy of 0.97 and identification system errors on the order of 10^-2). PMID:26364201
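
    The compression step described above can be sketched as projecting a fixed-length ECG window onto the Walsh-Hadamard basis and retaining a small number of coefficients (24 in the paper). Which coefficients are kept, and the classifier that follows, are not specified here, so the sketch below simply keeps the largest-magnitude ones and uses a synthetic stand-in signal.

        import numpy as np
        from scipy.linalg import hadamard

        n, keep = 256, 24                                   # window length (power of 2), coefficients kept
        rng = np.random.default_rng(0)
        ecg = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.05 * rng.normal(size=n)   # stand-in ECG segment

        H = hadamard(n) / np.sqrt(n)                        # orthonormal Hadamard matrix
        coeffs = H @ ecg
        idx = np.argsort(np.abs(coeffs))[-keep:]            # indices of the retained coefficients
        compressed = np.zeros(n)
        compressed[idx] = coeffs[idx]                       # the 24-number feature vector (zero-padded here)

        recon = H.T @ compressed                            # optional reconstruction from the kept terms
        print("energy retained: %.1f%%" % (100 * np.sum(coeffs[idx]**2) / np.sum(coeffs**2)))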

  2. 13. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, knee-braced strut, counters, and laterals. - Red Bank Creek Bridge, Spanning Red Bank Creek at Rawson Road, Red Bluff, Tehama County, CA

  3. Laser observations of the moon: Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser ranging observations for 1973 are presented in the form of compressed normal points and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunakhod 2 have also been included.

  4. Laser observations of the moon - Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser-ranging observations for 1973 are presented in the form of compressed normal points, and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunakhod 2 have also been included.

  5. Energy efficiency improvements in Chinese compressed airsystems

    SciTech Connect

    McKane, Aimee; Li, Li; Li, Yuqi; Taranto, T.

    2007-06-01

    Industrial compressed air systems use more than 9 percent of all electricity used in China. Experience in China and elsewhere has shown that these systems can be much more energy efficient when viewed as a whole system rather than as isolated components. This paper presents a summary and analysis of several compressed air system assessments. Through these assessments, typical compressed air management practices in China are analyzed. Recommendations are made concerning immediate actions that China's enterprises can take to improve compressed air system efficiency using best available technology and management strategies.

  6. Compressible turbulent flows: Modeling and similarity considerations

    NASA Technical Reports Server (NTRS)

    Zeman, Otto

    1991-01-01

    With the recent revitalization of high speed flow research, compressibility presents a new set of challenging problems to turbulence researchers. Questions arise as to what extent compressibility affects turbulence dynamics, structures, the Reynolds stress-mean velocity (constitutive) relation, and the accompanying processes of heat transfer and mixing. In astrophysical applications, compressible turbulence is believed to play an important role in intergalactic gas cloud dynamics and in accretion disk convection. Understanding and modeling of the compressibility effects in free shear flows, boundary layers, and boundary layer/shock interactions is discussed.

  7. Spectral image compression for data communications

    NASA Astrophysics Data System (ADS)

    Hauta-Kasari, Markku; Lehtonen, Juha; Parkkinen, Jussi P. S.; Jaeaeskelaeinen, Timo

    2000-12-01

    We report a technique for spectral image compression to be used in the field of data communications. The spectral domain of the images is represented by a low-dimensional component image set, which is used to obtain an efficient compression of the high-dimensional spectral data. The component images are compressed using a technique similar to the chrominance-channel subsampling employed in JPEG- and MPEG-type compression. The spectral compression is based on Principal Component Analysis (PCA) combined with the color image transmission coding technique of 'chromatic channel subsampling' of the component images. The component images are subsampled using 4:2:2, 4:2:0, and 4:1:1-based compressions. In addition, we extended the test to larger block sizes and a larger number of component images than in the original JPEG and MPEG standards. In total, 50 natural spectral images were used as test material in our experiments. Several error measures of the compression are reported. The same compressions are performed using Independent Component Analysis and the results are compared with PCA. These methods give a good compression ratio while still keeping good visual color quality. Quantitative comparisons between the original and reconstructed spectral images are presented.
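
    The PCA step described above amounts to projecting each pixel's spectrum onto a few principal components, producing a small set of component images plus a spectral basis. The sketch below shows that step on a synthetic cube; the subsequent 4:2:2/4:2:0/4:1:1 subsampling of the component images is a separate stage not shown.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, B, k = 32, 32, 61, 6                      # image size, number of bands, components kept
        cube = rng.random((H, W, B))                    # stand-in spectral image

        X = cube.reshape(-1, B)
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:k]                                  # k spectral basis vectors (k x B)
        components = (X - mean) @ basis.T               # N x k component images

        # Reconstruction from the compressed representation:
        X_rec = components @ basis + mean
        cube_rec = X_rec.reshape(H, W, B)
        err = np.linalg.norm(cube_rec - cube) / np.linalg.norm(cube)
        print(f"bands: {B} -> components: {k}, relative error {err:.3f}")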

  8. Compression of digital chest x-rays

    NASA Astrophysics Data System (ADS)

    Cohn, Michael; Trefler, Martin; Young, Tzay S.

    1990-07-01

    The application of digital technologies to chest radiography holds the promise of routine application of image processing techniques to effect image enhancement. However, due to their inherent spatial resolution, digital chest images impose severe constraints on data storage devices. Compression of these images will relax such constraints and facilitate image transmission on a digital network. We have evaluated image processing algorithms aimed at compression of digital chest images while improving the diagnostic quality of the image. The image quality has been measured with respect to the task of tumor detection. Compression ratios as high as 2:1 have been achieved. This compression can then be supplemented by irreversible methods.

  9. Pulsed spheromak reactor with adiabatic compression

    SciTech Connect

    Fowler, T K

    1999-03-29

    Extrapolating from the Pulsed Spheromak reactor and the LINUS concept, we consider ignition achieved by injecting a conducting liquid into the flux conserver to compress a low temperature spheromak created by gun injection and ohmic heating. The required energy to achieve ignition and high gain by compression is comparable to that required for ohmic ignition and the timescale is similar so that the mechanical power to ignite by compression is comparable to the electrical power to ignite ohmically. Potential advantages and problems are discussed. Like the High Beta scenario achieved by rapid fueling of an ohmically ignited plasma, compression must occur on timescales faster than Taylor relaxation.

  10. Evaluation and Management of Vertebral Compression Fractures

    PubMed Central

    Alexandru, Daniela; So, William

    2012-01-01

    Compression fractures affect many individuals worldwide. An estimated 1.5 million vertebral compression fractures occur every year in the US. They are common in elderly populations, and 25% of postmenopausal women are affected by a compression fracture during their lifetime. Although these fractures rarely require hospital admission, they have the potential to cause significant disability and morbidity, often causing incapacitating back pain for many months. This review provides information on the pathogenesis and pathophysiology of compression fractures, as well as clinical manifestations and treatment options. Among the available treatment options, kyphoplasty and percutaneous vertebroplasty are two minimally invasive techniques to alleviate pain and correct the sagittal imbalance of the spine. PMID:23251117

  11. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
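
    As a rough, hedged illustration of the quantization step this patent builds on (not of the patented perceptual adaptation itself), the sketch below applies a block DCT and divides each coefficient by an entry of a quantization matrix; the matrix shown is the familiar JPEG example luminance table, used here purely as a placeholder for an image-adapted matrix.

        import numpy as np
        from scipy.fft import dctn, idctn

        # Example 8x8 luminance quantization table (JPEG Annex K), standing in
        # for the image-adapted matrix the invention would compute.
        Q = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

        def quantize_block(block):
            # Forward DCT of one 8x8 block, then per-coefficient quantization.
            coeffs = dctn(block.astype(float) - 128.0, norm='ortho')
            return np.round(coeffs / Q).astype(int)

        def dequantize_block(indices):
            # Rescale the quantization indices and invert the DCT.
            return idctn(indices * Q, norm='ortho') + 128.0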

  12. [Irreversible image compression in radiology. Current status].

    PubMed

    Pinto dos Santos, D; Jungmann, F; Friese, C; Düber, C; Mildenberger, P

    2013-03-01

    Due to increasing amounts of data in radiology, methods for image compression appear both economically and technically interesting. Irreversible image compression allows markedly higher reduction of data volume in comparison with reversible compression algorithms but is, however, accompanied by a certain amount of mathematical and visual loss of information. Various national and international radiological societies have published recommendations for the use of irreversible image compression. The degree of acceptable compression varies across modalities and regions of interest. The DICOM standard supports JPEG, which achieves compression through tiling, DCT/DWT and quantization. Although mathematical loss due to rounding errors and reduction of high-frequency information occurs, this results in relatively low visual degradation. It is still unclear where to implement irreversible compression in the radiological workflow, as only few studies have analyzed the impact of irreversible compression on specialized image postprocessing. As long as this is within the limits recommended by the German Radiological Society, irreversible image compression could be implemented directly at the imaging modality, as it would comply with § 28 of the German X-ray ordinance (RöV). PMID:23456043

  13. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  14. Single-pixel complementary compressive sampling spectrometer

    NASA Astrophysics Data System (ADS)

    Lan, Ruo-Ming; Liu, Xue-Feng; Yao, Xu-Ri; Yu, Wen-Kai; Zhai, Guang-Jie

    2016-05-01

    A new type of compressive spectroscopy technique employing a complementary sampling strategy is reported. In a single sequence of spectral compressive sampling, positive and negative measurements are performed, in which sensing matrices with a complementary relationship are used. The restricted isometry property condition necessary for accurate recovery in compressive sampling theory is satisfied mathematically. Compared with the conventional single-pixel spectroscopy technique, the complementary compressive sampling strategy can achieve spectral recovery of considerably higher quality within a shorter sampling time. We also investigate the influence of the sampling ratio and integration time on the recovery quality.

  15. Method of making a non-lead hollow point bullet

    DOEpatents

    Vaughn, Norman L.; Lowden, Richard A.

    2003-10-07

    The method of making a non-lead hollow point bullet has the steps of a) compressing an unsintered powdered metal composite core into a jacket, b) punching a hollow cavity tip portion into the core, c) seating an insert, the insert having a hollow point tip and a tail protrusion, on top of the core such that the tail protrusion couples with the hollow cavity tip portion, and d) swaging the open tip of the jacket.

  16. Compression of thick laminated composite beams with initial impact-like damage

    NASA Technical Reports Server (NTRS)

    Breivik, N. L.; Guerdal, Z.; Griffin, O. H., Jr.

    1992-01-01

    While the study of compression after impact of laminated composites has been under consideration for many years, the complexity of the damage initiated by low velocity impact has not lent itself to simple predictive models for compression strength. The damage modes due to non-penetrating, low velocity impact by large diameter objects can be simulated using quasi-static three-point bending. The resulting damage modes are less coupled and more easily characterized than actual impact damage modes. This study includes the compression testing of specimens with well documented initial damage states obtained from three-point bend testing. Compression strengths and failure modes were obtained for quasi-isotropic stacking sequences from 0.24 to 1.1 inches thick with both grouped and interspersed ply stacking. Initial damage prior to compression testing was divided into four classifications based on the type, extent, and location of the damage. These classifications are multiple through-thickness delaminations, isolated delamination, damage near the surface, and matrix cracks. Specimens from each classification were compared to specimens tested without initial damage in order to determine the effects of the initial damage on the final compression strength and failure modes. A finite element analysis was used to aid in the understanding and explanation of the experimental results.

  17. Tomography and Simulation of Microstructure Evolution of a Closed-Cell Polymer Foam in Compression

    SciTech Connect

    Daphalapurkar, N.P.; Hanan, J.C.; Phelps, N.B.; Bale, H.; Lu, H.

    2010-10-25

    Closed-cell foams in compression exhibit complex deformation characteristics that remain incompletely understood. In this paper the microstructural evolution of closed-cell polymethacrylimide foam was simulated in compression undergoing elastic, compaction, and densification stages. The three-dimensional microstructure of the foam is determined using Micro-Computed Tomography (micro-CT), and is converted to material points for simulations using the material point method (MPM). The properties of the cell-walls are determined from nanoindentation on the wall of the foam. MPM simulations captured the three stages of deformation in foam compression. Features of the microstructures from simulations are compared qualitatively with the in-situ observations of the foam under compression using micro-CT. The stress-strain curve simulated from MPM compares reasonably with the experimental results. Based on the results from micro-CT and MPM simulations, it was found that elastic buckling of cell-walls occurs even in the elastic regime of compression. Within the elastic region, less than 35% of the cell-wall material carries the majority of the compressive load. In the experiment, a shear band was observed as a result of the collapse of cells in a weak zone. From this collapsed weak zone a compaction (collapse) wave was seen traveling, which eventually led to the collapse of the entire foam cell-structure. Overall, this methodology will allow prediction of material properties for microstructures, driving the optimization of processing and performance in foam materials.

  18. Point-to-Point Multicast Communications Protocol

    NASA Technical Reports Server (NTRS)

    Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.

    1987-01-01

    This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid a deadlock during multicasts. A simulated implementation of the protocol is also described.

  19. VST-based lossy compression of hyperspectral data for new generation sensors

    NASA Astrophysics Data System (ADS)

    Zemliachenko, Alexander N.; Kozhemiakin, Ruslan A.; Uss, Mykhail L.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2013-10-01

    This paper addresses lossy compression of hyperspectral images acquired by new-generation sensors for which the signal-dependent component of the noise prevails over the signal-independent component. First, for sub-band (component-wise) compression, it is shown that there can exist an optimal operation point (OOP) for which the MSE between the compressed and noise-free image is minimal, i.e., the maximal noise-filtering effect is observed. This OOP can be observed for two approaches to lossy compression, where the first presumes direct application of a coder to the original data and the second applies a direct and inverse variance stabilizing transform (VST). Second, it is demonstrated that the second approach is preferable since it usually provides slightly smaller MSE and slightly larger compression ratio (CR) at the OOP. One more advantage of the second approach is that the coder parameter that controls CR can be set fixed for all sub-band images. Moreover, CR can be considerably (approximately twice) increased if sub-band images after VST are grouped and lossy compression is applied to the first sub-band image in a group and to "difference" images obtained for this group. The proposed approach is tested on Hyperion hyperspectral images and is shown to provide a CR of about 15 for data compression in the neighborhood of the OOP.
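
    One widely used variance-stabilizing transform for signal-dependent (Poisson-like) plus additive Gaussian noise is the generalized Anscombe transform; the hedged sketch below shows this particular VST, though the record above does not state which transform the authors used. After the forward transform, every sub-band image can be passed to a conventional lossy coder with one fixed setting, and the inverse transform is applied after decoding.

        import numpy as np

        def generalized_anscombe(x, gain=1.0, sigma=0.0, offset=0.0):
            # After this transform the noise variance is approximately constant,
            # so a single coder parameter can serve all sub-band images.
            arg = gain * x + 0.375 * gain**2 + sigma**2 - gain * offset
            return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

        def inverse_anscombe(y, gain=1.0, sigma=0.0, offset=0.0):
            # Simple algebraic inverse (an exact unbiased inverse differs slightly).
            x = (gain * y / 2.0)**2 - 0.375 * gain**2 - sigma**2 + gain * offset
            return x / gain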

  20. PROPOSED PREDICTIVE EQUATION FOR DIAGONAL COMPRESSIVE CAPACITY OF REINFORCED CONCRETE BEAMS

    NASA Astrophysics Data System (ADS)

    Tantipidok, Patarapol; Kobayashi, Chikaharu; Matsumoto, Koji; Watanabe, Ken; Niwa, Junichiro

    The current standard specifications of JSCE for the diagonal compressive capacity of RC beams only consider the effect of the compressive strength of concrete and are not applicable to high strength concrete. This research aims to investigate the effect of various parameters on the diagonal compressive capacity and propose a predictive equation. Twenty-five I-beams were tested by three-point bending. The verification of the effects of concrete strength, stirrup ratio and spacing, shear span to effective depth ratio, flange width to web width ratio and effective depth was performed. The diagonal compressive capacity had a linear relationship to stirrup spacing regardless of its diameter. The effect of spacing became more significant with higher concrete strength. Thus, the effects of concrete strength and stirrup spacing were interrelated. On the other hand, the other parameters had only slight effects on the diagonal compressive capacity. Finally, a simple empirical equation for predicting the diagonal compressive capacity of RC beams was proposed. The proposed equation is adequately simple and can provide a more accurate estimation of the diagonal compressive capacity than the existing equations.

  1. Lossless data compression of grid-based digital elevation models: A png image format evaluation

    NASA Astrophysics Data System (ADS)

    Scarmana, G.

    2014-05-01

    At present, computers, lasers, radars, planes and satellite technologies make possible very fast and accurate topographic data acquisition for the production of maps. However, the problem of managing and manipulating this data efficiently remains. One particular type of map is the elevation map. When stored on a computer, it is often referred to as a Digital Elevation Model (DEM). A DEM is usually a square matrix of elevations. It is like an image, except that it contains a single channel of information (that is, elevation) and can be compressed in a lossy or lossless manner by way of existing image compression protocols. Compression has the effect of reducing memory requirements and transmission times over digital links, while maintaining the integrity of data as required. In this context, this paper investigates the effects of the PNG (Portable Network Graphics) lossless image compression protocol on floating-point elevation values for 16-bit DEMs of dissimilar terrain characteristics. The PNG is a robust, universally supported, extensible, lossless, general-purpose and patent-free image format. Tests demonstrate that the compression ratios and decompression run times achieved with the PNG lossless compression protocol can be comparable to, or better than, proprietary lossless JPEG variants, other image formats and available lossless compression algorithms.
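
    A minimal sketch of the kind of experiment described, assuming NumPy and a recent Pillow that maps 16-bit unsigned arrays to 16-bit grayscale PNG: the floating-point elevations are scaled to the 16-bit integer range (the quantization choice is an illustrative assumption, not the paper's exact procedure) and stored losslessly as a single-channel PNG.

        import numpy as np
        from PIL import Image

        def dem_to_png(dem, path):
            # Map elevations to the 16-bit integer range and save as grayscale PNG.
            lo, hi = float(dem.min()), float(dem.max())
            scaled = np.round((dem - lo) / (hi - lo) * 65535.0).astype(np.uint16)
            Image.fromarray(scaled).save(path)          # PNG deflate is lossless
            return lo, hi                               # needed to undo the scaling

        def png_to_dem(path, lo, hi):
            scaled = np.asarray(Image.open(path), dtype=np.float64)
            return scaled / 65535.0 * (hi - lo) + lo

    The on-disk size of the resulting PNG can then be compared against the raw array size or against other formats to reproduce a compression-ratio measurement of this kind.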

  2. Genetic disorders producing compressive radiculopathy.

    PubMed

    Corey, Joseph M

    2006-11-01

    Back pain is a frequent complaint seen in neurological practice. In evaluating back pain, neurologists are asked to evaluate patients for radiculopathy, determine whether they may benefit from surgery, and help guide management. Although disc herniation is the most common etiology of compressive radiculopathy, there are many other causes, including genetic disorders. This article is a discussion of genetic disorders that cause or contribute to radiculopathies. These genetic disorders include neurofibromatosis, Paget's disease of bone, and ankylosing spondylitis. Numerous genetic disorders can also lead to deformities of the spine, including spinal muscular atrophy, Friedreich's ataxia, Charcot-Marie-Tooth disease, familial dysautonomia, idiopathic torsional dystonia, Marfan's syndrome, and Ehlers-Danlos syndrome. However, the extent of radiculopathy caused by spine deformities is essentially absent from the literature. Finally, recent investigation into the heritability of disc degeneration and lumbar disc herniation suggests a significant genetic component in the etiology of lumbar disc disease. PMID:17048153

  3. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  4. Compressive imaging in scattering media.

    PubMed

    Durán, V; Soldevila, F; Irles, E; Clemente, P; Tajahuerce, E; Andrés, P; Lancis, J

    2015-06-01

    One challenge that has long held the attention of scientists is that of clearly seeing objects hidden by turbid media, such as smoke, fog or biological tissue, which has major implications in fields such as remote sensing or early diagnosis of diseases. Here, we combine structured incoherent illumination and bucket detection for imaging an absorbing object completely embedded in a scattering medium. A sequence of low-intensity microstructured light patterns is launched onto the object, whose image is accurately reconstructed through the light fluctuations measured by a single-pixel detector. Our technique is noninvasive, does not require coherent sources, raster scanning, or time-gated detection, and benefits from the compressive sensing strategy. As a proof of concept, we experimentally retrieve the image of a transilluminated target both sandwiched between two holographic diffusers and embedded in a 6 mm-thick sample of chicken breast. PMID:26072804

  5. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, R.W.; Hrubesh, L.W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m^3 (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

  6. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1995-01-01

    Liquid hydrazine (N2H4) is a propellant used by the Air Force and NASA for aerospace propulsion and power systems. Because the propellant modules that contain the hydrazine can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted in an attempt to investigate the detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. We also present the methodology of our approach, which includes chemical kinetic experiments, chemical equilibrium calculations, and characterization of the equation of state of liquid hydrazine.

  7. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1996-05-01

    Liquid hydrazine (N2H4) is a propellant used for aerospace propulsion and power systems. Because the propellant modules can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted to investigate the shock detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. © 1996 American Institute of Physics.

  8. Compressibility of Mercury's dayside magnetosphere

    NASA Astrophysics Data System (ADS)

    Zhong, J.; Wan, W. X.; Wei, Y.; Slavin, J. A.; Raines, J. M.; Rong, Z. J.; Chai, L. H.; Han, X. H.

    2015-12-01

    Mercury experiences significant variations of solar wind forcing along its large eccentric orbit. With 12 Mercury years of data from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission, we demonstrate that Mercury's distance from the Sun has an effect on the size of the dayside magnetosphere that is much larger than the temporal variations. The mean solar wind standoff distance was found to be about 0.27 Mercury radii (RM) closer to Mercury at perihelion than at aphelion. At perihelion the subsolar magnetopause can be compressed below 1.2 RM about 2.5% of the time. The relationship between the average magnetopause standoff distance and heliocentric distance suggests that on average the effects of the erosion process appear to counterbalance those of induction in Mercury's interior at perihelion. However, at aphelion, where solar wind pressure is lower and the Alfvénic Mach number is higher, the effects of induction appear dominant.

  9. Compressed Sensing Based Interior Tomography

    PubMed Central

    Yu, Hengyong; Wang, Ge

    2010-01-01

    While the conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region-of-interest (ROI). However, such a known sub-region is not always readily available, and it can even be impossible to find in some cases. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in CT applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image, and evaluate its performance in numerical simulation. PMID:19369711
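
    The sketch below is only meant to make the total variation (TV) minimization ingredient concrete; it performs TV-regularized denoising by plain gradient descent on a smoothed TV term rather than the constrained interior-tomography reconstruction developed in the paper, and all parameter values are illustrative.

        import numpy as np

        def tv_minimize(y, lam=0.1, eps=0.1, step=0.1, iters=300):
            # Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x).
            # An interior-tomography solver would minimize the same TV term
            # subject to consistency with the measured local projections.
            x = y.astype(float).copy()
            for _ in range(iters):
                gx = np.diff(x, axis=0, append=x[-1:, :])      # forward differences
                gy = np.diff(x, axis=1, append=x[:, -1:])
                mag = np.sqrt(gx**2 + gy**2 + eps**2)
                px, py = gx / mag, gy / mag
                div = (np.diff(px, axis=0, prepend=px[:1, :]) +
                       np.diff(py, axis=1, prepend=py[:1, :]))
                x -= step * ((x - y) - lam * div)
            return x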

  10. The inviscid compressible Goertler problem

    NASA Technical Reports Server (NTRS)

    Dando, Andrew; Seddougui, Sharon O.

    1991-01-01

    The growth rate of Goertler vortices in a compressible flow is studied in the inviscid limit of large Goertler number. Numerical solutions are obtained for O(1) wavenumbers. The further limits of large Mach number and of large wavenumber with O(1) Mach number are considered. It is shown that two different types of disturbance modes can appear in this problem. The first is a wall layer mode, so named as it has its eigenfunctions trapped in a thin layer away from the wall and termed a trapped layer mode for large wavenumbers and an adjustment layer mode for large Mach numbers, since then this mode has its eigenfunctions concentrated in the temperature adjustment layer. The near crossing of the modes which occurs in each of the limits mentioned is investigated.

  11. Grid-free compressive beamforming.

    PubMed

    Xenaki, Angeliki; Gerstoft, Peter

    2015-04-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. On a discrete angular grid, the CS reconstruction degrades due to basis mismatch when the DOAs do not coincide with the angular directions on the grid. To overcome this limitation, a continuous formulation of the DOA problem is employed and an optimization procedure is introduced, which promotes sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non-uniform arrays, single-snapshot data and under noisy conditions as demonstrated on experimental towed array data. PMID:25920844

  12. High energy femtosecond pulse compression

    NASA Astrophysics Data System (ADS)

    Lassonde, Philippe; Mironov, Sergey; Fourmaux, Sylvain; Payeur, Stéphane; Khazanov, Efim; Sergeev, Alexander; Kieffer, Jean-Claude; Mourou, Gerard

    2016-07-01

    An original method for retrieving the Kerr nonlinear index was proposed and implemented for TF12 heavy flint glass. Then, a defocusing lens made of this highly nonlinear glass was used to generate an almost constant spectral broadening across a Gaussian beam profile. The lens was designed with spherical curvatures chosen in order to match the laser beam profile, such that the product of the thickness with intensity is constant. This solid-state optics in combination with chirped mirrors was used to decrease the pulse duration at the output of a terawatt-class femtosecond laser. We demonstrated compression of a 33 fs pulse to 16 fs with 170 mJ energy.

  13. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, Richard W.; Hrubesh, Lawrence W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m.sup.3 (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization.

  14. Compressed natural gas measurement issues

    SciTech Connect

    Blazek, C.F.; Kinast, J.A.; Freeman, P.M.

    1993-12-31

    The Natural Gas Vehicle Coalition's Measurement and Metering Task Group (MMTG) was established on July 1st, 1992 to develop suggested revisions to National Institute of Standards & Technology (NIST) Handbook 44-1992 (Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices) and NIST Handbook 130-1991 (Uniform Laws & Regulations). Specifically, the suggested revisions will address the sale and measurement of compressed natural gas when sold as a motor vehicle fuel. This paper briefly discusses the activities of the MMTG and its interaction with NIST. The paper also discusses the Institute of Gas Technology's (IGT) support of the MMTG in the area of natural gas composition and its impact on metering technology applicable to high pressure fueling stations, as well as conversion factors for the establishment of the "gallon gasoline equivalent" of natural gas. The final portion of this paper discusses IGT's meter research activities and its meter test facility.

  15. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly, and are prevailing throughout various technological fields. Therefore, the development of such technology should respond to the needs for improvement of quality in the e-learning education system. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene: recognizing a lecturer and a lecture stick by pattern recognition techniques, the video-image compression processing system deletes the figure of the lecturer, which is of low importance, and displays only the end point of the lecture stick. It enables us to create highly compressed lecture video files, which are suitable for Internet distribution. We compare this technique with other simple methods such as lower frame-rate video files and ordinary MPEG files. The experimental result shows that the proposed compression processing system is much more effective than the others.

  16. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique, resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.

  17. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  18. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  19. Compressed natural gas (CNG) measurement

    SciTech Connect

    Husain, Z.D.; Goodson, F.D.

    1995-12-01

    The increased level of environmental awareness has raised concerns about pollution. One area of high attention is the internal combustion engine. The internal combustion engine in and of itself is not a major pollution threat. However, the vast number of motor vehicles in use release large quantities of pollutants. Recent technological advances in ignition and engine controls coupled with unleaded fuels and catalytic converters have reduced vehicular emissions significantly. Alternate fuels have the potential to produce even greater reductions in emissions. The Natural Gas Vehicle (NGV) has been a significant alternative to accomplish the goal of cleaner combustion. Of the many alternative fuels under investigation, compressed natural gas (CNG) has demonstrated the lowest levels of emission. The only vehicle certified by the State of California as an Ultra Low Emission Vehicle (ULEV) was powered by CNG. The California emissions tests of the ULEV-CNG vehicle revealed the following concentrations: non-methane hydrocarbons, 0.005 grams/mile; carbon monoxide, 0.300 grams/mile; nitrogen oxides, 0.040 grams/mile. Unfortunately, CNG vehicles will not gain significant popularity until compressed natural gas is readily available in convenient locations in urban areas and in proximity to the Interstate highway system. Approximately 150,000 gasoline filling stations exist in the United States, while the number of CNG stations is about 1,000, and many of those CNG stations are limited to fleet service only. Discussion in this paper concentrates on CNG flow measurement for fuel dispensers. Since regulatory changes and market demands affect flow metering and dispenser station design, those aspects are discussed. The CNG industry faces a number of challenges.

  20. Abdominal Trigger Points and Psychological Function.

    PubMed

    Reeves, Roy R; Ladner, Mark E

    2016-02-01

    Myofascial trigger points (TPs) are a poorly understood phenomenon involving the myofascial system and its related neural, lymphatic, and circulatory elements. Compression or massage of a TP causes localized pain and may cause referred pain and autonomic phenomena. The authors describe a 58-year-old woman who experienced precipitation of substantial psychological symptoms directly related to her treatment for a lower abdominal TP. Her symptoms resolved after 2 weeks of receiving high-velocity, low-amplitude manipulation and soft tissue massage. Particularly in the abdomen, TPs may be associated with psychological reactions as well as physical aspects of bodily function. PMID:26830528

  1. An integrated circuit floating point accumulator

    NASA Technical Reports Server (NTRS)

    Goldsmith, T. C.

    1977-01-01

    Goddard Space Flight Center has developed a large scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission, using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.
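
    The record does not give the exact bit layout, but the general idea of floating-point compression of a counter can be sketched as follows: a wide count is stored as a small exponent/mantissa word, exact for small counts and progressively coarser for large ones. With a 4-bit exponent and a 4-bit mantissa this maps 19-bit counts to 8-bit words; a 5-bit exponent with a 7-bit mantissa would similarly map 27-bit counts to 12-bit words. The split shown is an illustrative assumption only.

        def float_compress(count, man_bits=4):
            # Encode a non-negative counter as an (exponent, mantissa) word.
            if count < (1 << man_bits):
                return count                              # exponent 0: exact
            exp = count.bit_length() - man_bits           # shift so the mantissa fits
            mantissa = count >> exp
            return (exp << man_bits) | mantissa

        def float_expand(word, man_bits=4):
            exp = word >> man_bits
            mantissa = word & ((1 << man_bits) - 1)
            return mantissa << exp                        # approximate original count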

  2. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, they amount to about 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a scheme of compression based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme makes it possible to achieve compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
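
    A hedged sketch of the transform and scalar-quantization stages (the entropy-coding stage and the WSQ-specific 64-subband decomposition are omitted), using PyWavelets with the 'bior4.4' wavelet as a stand-in for the 9/7 analysis/reconstruction filters mentioned above:

        import numpy as np
        import pywt

        def wsq_like_encode(image, wavelet='bior4.4', levels=5, step=8.0):
            # Wavelet decomposition followed by uniform scalar quantization of
            # every subband; Huffman coding of the indices is not shown.
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
            quantize = lambda a: np.round(np.asarray(a) / step).astype(int)
            return [quantize(coeffs[0])] + [tuple(quantize(b) for b in detail)
                                            for detail in coeffs[1:]], step

        def wsq_like_decode(quantized, step, wavelet='bior4.4'):
            coeffs = [quantized[0] * step] + [tuple(b * step for b in detail)
                                              for detail in quantized[1:]]
            return pywt.waverec2(coeffs, wavelet)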

  3. Point by Point: Adding up Motivation

    ERIC Educational Resources Information Center

    Marchionda, Denise

    2010-01-01

    Students often view their course grades as a mysterious equation of teacher-given grades, teacher-given grace, and some other ethereal components based on luck. However, giving students the power to earn points based on numerous daily/weekly assignments and attendance makes the grading process objective and personal, freeing the instructor to…

  4. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.

  5. Splitting the Cartesian point

    SciTech Connect

    Blodwell, J.F.

    1987-10-01

    It is argued that the point structure of space and time must be constructed from the primitive extensional character of space and time. A procedure for doing this is laid down and applied to one-dimensional and two-dimensional systems of abstract extensions. Topological and metrical properties of the constructed point systems, which differ nontrivially from the usual R and R^2 models, are examined. Briefly, constructed points are associated with directions and the Cartesian point is split. In one dimension each point splits into a point pair compatible with the linear ordering. An application to one-dimensional particle motion is given, with the result that natural topological assumptions force the number of left point, right point transitions to remain locally finite in a continuous motion. In general, Cartesian points are seen to correspond to certain filters on a suitable Boolean algebra. Constructed points correspond to ultrafilters. Thus, point construction gives a natural refinement of the Cartesian systems.

  6. Magnetic Bunch Compression for a Compact Compton Source

    SciTech Connect

    Gamage, B.; Satogata, Todd J.

    2013-12-01

    A compact electron accelerator suitable for Compton source applications is in design at the Center for Accelerator Science at Old Dominion University and Jefferson Lab. Here we discuss two options for transverse magnetic bunch compression and final focus, each involving a 4-dipole chicane with M56 tunable over a range of 1.5-2.0 m and with independent tuning of the final focus to an interaction-point β* of 5 mm. One design has no net bending, while the other has net bending of 90 degrees and is suitable for compact corner placement.

  7. Resonance frequencies of a cavity containing a compressible viscous fluid

    NASA Astrophysics Data System (ADS)

    Conca, C.; Planchard, J.; Vanninathan, M.

    1993-03-01

    The aim of this paper is to study the resonance spectrum of a cavity containing a compressible viscous fluid. This system admits a discrete infinite sequence of eigenvalues whose real parts are negative, which is interpreted as the damping effect introduced by viscosity. Only a finite number of them have non-zero imaginary parts and this number depends on viscosity; a simple criterion is given for their position in the complex plane. The case of a cavity containing an elastic mechanical system immersed in the fluid is also examined; from a qualitative point of view, the nature of the resonance spectrum remains unchanged.

  8. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  9. Underwing Compression Vortex-Attenuation Device

    NASA Technical Reports Server (NTRS)

    Patterson, James C., Jr.

    1994-01-01

    Underwing compression vortex-attenuation device designed to provide method for attenuating lift-induced vortex generated by wings of airplane. Includes compression panel attached to lower surface of wing, facing perpendicular to streamwise airflow. Concept effective on all types of aircraft. Causes increase in wing lift rather than reduction when deployed. Device of interest to aircraft designers and enhances air safety in general.

  10. Hardware compression using common portions of data

    DOEpatents

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
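
    A software analogue of the claimed idea (hedged; the patent covers hardware, and the chunking, sampling, and matching details are not specified in the record) might look like this: sample a few chunks, take their common leading bytes, and store only each chunk's remainder.

        import os

        def compress_chunks(chunks, sample_size=8):
            # chunks: list of byte strings. A small sample determines the
            # common portion; chunks that carry it keep only their remainder.
            common = os.path.commonprefix(chunks[:sample_size])
            stored = []
            for chunk in chunks:
                if common and chunk.startswith(common):
                    stored.append((True, chunk[len(common):]))
                else:
                    stored.append((False, chunk))
            return common, stored

        def decompress_chunks(common, stored):
            return [common + rest if shared else rest for shared, rest in stored]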

  11. Sudden Viscous Dissipation of Compressing Turbulence

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-01

    Compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  12. Sudden Viscous Dissipation of Compressing Turbulence.

    PubMed

    Davidovits, Seth; Fisch, Nathaniel J

    2016-03-11

    Compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion. PMID:27015488

  13. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  14. Sudden Viscous Dissipation of Compressing Turbulence

    DOE PAGESBeta

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-11

    Here we report that compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  15. Compression of turbulence-affected video signals

    NASA Astrophysics Data System (ADS)

    Mahpod, Shahar; Yitzhaky, Yitzhak

    2009-08-01

    A video signal obtained through a relatively long-distance atmospheric medium suffers from blur and spatiotemporal image movements caused by the air turbulence. These phenomena, which reduce the visual quality of the signal, also reduce the compression rate of motion-estimation based video compression techniques and increase the required bandwidth of the compressed signal. The reduction in compression rate results from the frequently large number of random local image movements, which differ from one image to the next as a result of the turbulence. In this research we examined increasing the compression rate by developing and comparing two approaches. In the first approach, a pre-processing image restoration is performed first, which includes reduction of the random movements in the video signal and optionally de-blurring the image. Then, a standard compression process is carried out. In this case, the final decompressed video signal is a restored version of the recorded one. The second approach intends to predict turbulence-induced motion vectors according to the latest images in the sequence. In this approach the final decompressed image should be as close as possible to the recorded image (including the spatiotemporal movements). It was found that the first approach improves the compression ratio. In the second approach it was found that after applying a short temporal median to the video sequence the turbulence-induced optical flow can be predicted very well, but this result was not enough to produce a significant improvement at this stage.
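
    The short temporal median mentioned for the second approach can be sketched as follows (the window length and data layout are illustrative assumptions):

        import numpy as np

        def temporal_median(frames, window=3):
            # frames: array of shape (time, height, width). A running median
            # over a short window suppresses turbulence-induced local motion
            # before motion vectors are estimated.
            frames = np.asarray(frames, dtype=float)
            half = window // 2
            out = np.empty_like(frames)
            for t in range(frames.shape[0]):
                lo, hi = max(0, t - half), min(frames.shape[0], t + half + 1)
                out[t] = np.median(frames[lo:hi], axis=0)
            return out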

  16. A Comparative Study of Compression Video Technology.

    ERIC Educational Resources Information Center

    Keller, Chris A.; And Others

    The purpose of this study was to provide an overview of compression devices used to increase the cost effectiveness of teleconferences by reducing satellite bandwidth requirements for the transmission of television pictures and accompanying audio signals. The main body of the report describes the comparison study of compression rates and their…

  17. Recoil Experiments Using a Compressed Air Cannon

    ERIC Educational Resources Information Center

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  18. Classical data compression with quantum side information

    SciTech Connect

    Devetak, I.; Winter, A.

    2003-10-01

    The problem of classical data compression when the decoder has quantum side information at his disposal is considered. This is a quantum generalization of the classical Slepian-Wolf theorem. The optimal compression rate is found to be reduced from the Shannon entropy of the source by the Holevo information between the source and side information.
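
    In the notation commonly used for this result (a paraphrase, not a quotation of the paper), the optimal rate for compressing the classical source X when the decoder holds quantum side information B is

        R = H(X) - I(X;B), \qquad
        I(X;B) = S\left(\sum_x p_x \rho_x\right) - \sum_x p_x S(\rho_x),

    where H is the Shannon entropy of the source, S is the von Neumann entropy, and I(X;B) is the Holevo information of the ensemble {p_x, ρ_x} held by the decoder.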

  19. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
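
    The gain from compressing columns separately can be illustrated generically; this is a hedged sketch using gzip on raw column bytes, not the actual FITS tiled-table convention or its implementation.

        import gzip
        import numpy as np

        def per_column_size(columns):
            # Compress each column's raw bytes on its own, in the spirit of
            # compressing each table column separately.
            return sum(len(gzip.compress(np.ascontiguousarray(c).tobytes()))
                       for c in columns)

        def whole_stream_size(columns):
            # Compress all the column data in one stream for comparison; a real
            # FITS table interleaves rows, which is where per-column storage
            # helps most, so this comparison understates the benefit.
            raw = b''.join(np.ascontiguousarray(c).tobytes() for c in columns)
            return len(gzip.compress(raw))

        cols = [np.arange(100000, dtype=np.int32),
                np.random.default_rng(0).normal(size=100000).astype(np.float32)]
        print(per_column_size(cols), whole_stream_size(cols))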

  20. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel's information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to a 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing in the arithmetic coding stage as it deals with multiple subslices. PMID:24848945
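
    A hedged sketch of the decomposition stage only, using PyWavelets (the parallel EBCOT-style entropy coding is omitted, the wavelet and level are illustrative, and swt2 requires slice dimensions divisible by 2**level):

        import numpy as np
        import pywt

        def swt_decompose(volume, wavelet='haar', level=2):
            # volume: (slices, rows, cols). Each 2-D slice is decomposed with
            # the stationary wavelet transform; each slice's coefficients could
            # then be entropy coded in parallel.
            return [pywt.swt2(np.asarray(s, dtype=float), wavelet, level=level)
                    for s in volume]

        def swt_reconstruct(coeff_list, wavelet='haar'):
            return np.stack([pywt.iswt2(c, wavelet) for c in coeff_list])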

  1. Factors modulating effective chest compressions in the neonatal period.

    PubMed

    Mildenhall, Lindsay F J; Huynh, Trang K

    2013-12-01

    The need for chest compressions in the newborn is a rare occurrence. The methods employed for delivery of chest compressions have been poorly researched. Techniques that have been studied include compression:ventilation ratios, thumb versus finger method of delivering compressions, depth of compression, site on chest of compression, synchrony or asynchrony of breaths with compressions, and modalities to improve the compression technique and consistency. Although still in its early days, an evidence-based guideline for chest compressions is beginning to take shape. PMID:23920076

  2. Insertion Profiles of 4 Headless Compression Screws

    PubMed Central

    Hart, Adam; Harvey, Edward J.; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A.

    2013-01-01

    Purpose In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. Methods The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Results The peak compression occurs at an insertion depth of −3.1 mm, −2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of −2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2N, 0.233 ± 0.010 Nm for the Synthes headless compression screws. Conclusions All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of −2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Clinical relevance Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws

  3. HVS-motivated quantization schemes in wavelet image compression

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj N.

    1996-11-01

    Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system: transform, quantization, and entropy coding, should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE-quantizer, and four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
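
    The Lagrange-multiplier view of setting quantization stepsizes can be made concrete with the hedged sketch below: for one subband, each candidate step is scored by distortion plus lambda times an empirical-entropy rate estimate, and the cheapest step wins. This illustrates the formulation only; it is not the optimized quantizer of the paper.

        import numpy as np

        def entropy_bits(indices):
            # First-order empirical entropy in bits/sample, a stand-in for the
            # rate the run-length/Huffman coder would actually achieve.
            _, counts = np.unique(indices, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def best_step(band, candidate_steps, lam):
            # Pick the stepsize minimizing the Lagrangian cost D + lam * R.
            best_cost, best_q = np.inf, None
            for q in candidate_steps:
                idx = np.round(band / q)
                mse = float(np.mean((band - idx * q) ** 2))
                cost = mse + lam * entropy_bits(idx.astype(int))
                if cost < best_cost:
                    best_cost, best_q = cost, q
            return best_q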

  4. MOFs under pressure: the reversible compression of a single crystal.

    PubMed

    Gagnon, Kevin J; Beavers, Christine M; Clearfield, Abraham

    2013-01-30

    The structural change and resilience of a single crystal of a metal-organic framework (MOF), Zn(HO(3)PC(4)H(8)PO(3)H)·2H(2)O (ZAG-4), was investigated under high pressures (0-9.9 GPa) using in situ single crystal X-ray diffraction. Although the unit cell volume decreases over 27%, the quality of the single crystal is retained and the unit cell parameters revert to their original values after pressure has been removed. This framework is considerably compressible with a bulk modulus calculated at ∼11.7 GPa. The b-axis also exhibits both positive and negative linear compressibility. Within the applied pressures investigated, there was no discernible failure or amorphization point for this compound. The alkyl chains in the structure provide a spring-like cushion to stabilize the compression of the system allowing for large distortions in the metal coordination environment, without destruction of the material. This intriguing observation only adds to the current speculation as to whether or not MOFs may find a role as a new class of piezofunctional solid-state materials for application as highly sensitive pressure sensors, shock absorbing materials, pressure switches, or smart body armor. PMID:23320490

  5. Surface-cooling effects on compressible boundary-layer instability

    NASA Technical Reports Server (NTRS)

    Seddougui, Sharon O.; Bowles, R. I.; Smith, F. T.

    1990-01-01

    The influence of surface cooling on compressible boundary layer instability is discussed theoretically for both viscous and inviscid modes, at high Reynolds numbers. The cooling enhances the surface heat transfer and shear stress, creating a high heat transfer sublayer. This has the effect of distorting and accentuating the viscous Tollmien-Schlichting modes to such an extent that their spatial growth rates become comparable with, and can even exceed, the growth rates of inviscid modes, including those found previously. This is for moderate cooling, and it applies at any Mach number. In addition, the moderate cooling destabilizes otherwise stable viscous or inviscid modes, in particular triggering outward-traveling waves at the edge of the boundary layer in the supersonic regime. Severe cooling is also discussed as it brings compressible dynamics directly into play within the viscous sublayer. All the new cooled modes found involve the heat transfer sublayer quite actively, and they are often multi-structured in form and may be distinct from those observed in previous computational and experimental investigations. The corresponding nonlinear processes are also pointed out with regard to transition in the cooled compressible boundary layer. Finally, comparisons with Lysenko and Maslov's (1984) experiments on surface cooling are presented.

  6. The future of artificial satellite theories Hybrid ephemeris compression model

    NASA Astrophysics Data System (ADS)

    Hoots, Felix R.; France, Richard G.

    1996-03-01

    Since the time of Newton, astrodynamics has focused on the analytical solution of orbital problems. This was driven by the desire to obtain a theoretical understanding of the motion and the practical desire to be able to produce a computational result. Only with the advent of the computer did numerical integration become a practical consideration for solving dynamical problems. Although computer technology is not yet to the point of being able to provide numerical integration support for all satellite orbits, we are in a transition period which is being driven by the unprecedented increase in computational power. This transition will affect the future of analytical, semi-analytical and numerical artificial satellite theories in a dramatic way. In fact, the role for semi-analytical theories may disappear. During the time of transition, a central site may have the capacity to maintain the orbits using numerical integration, but the user may not have such a capacity or may need results in a more timely manner. One way to provide for this transition need is through the use of some type of satellite ephemeris compression. Through the combined use of a power series and a Fourier series, good quality ephemeris compression has been achieved for 7 day periods. The ephemeris compression requires less than 40 terms and is valid for all eccentricities and inclinations.
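
    A minimal sketch of the hybrid idea, offered as an illustration rather than the authors' algorithm: fit one ephemeris coordinate over a window with a low-order power series plus a truncated Fourier series by linear least squares, and store only the fitted coefficients. The function name, the chosen orders, the assumed orbital period, and the synthetic coordinate below are all assumptions for illustration.

    import numpy as np

    def fit_hybrid(t, x, poly_order=3, n_harmonics=8, period=5400.0):
        # Design matrix: power series in normalized time plus a Fourier series at
        # the assumed orbital period; coefficients found by linear least squares.
        tau = (t - t[0]) / (t[-1] - t[0])          # normalized time for conditioning
        w = 2.0 * np.pi / period
        cols = [tau**k for k in range(poly_order + 1)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * w * t), np.sin(k * w * t)]
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        return coeffs, A

    # Illustrative 7-day "ephemeris" coordinate: one orbital harmonic plus a slow drift
    t = np.linspace(0.0, 7 * 86400.0, 10_000)
    x = 7.0e6 * np.cos(2.0 * np.pi * t / 5400.0) + 500.0 * (t / 86400.0) ** 2
    coeffs, A = fit_hybrid(t, x)
    print(coeffs.size, "terms, max fit error:", float(np.max(np.abs(A @ coeffs - x))))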

  7. Avalanches in compressed porous SiO(2)-based materials.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Baró, Jordi; Illa, Xavier; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-08-01

    The failure dynamics in SiO(2)-based porous materials under compression, namely the synthetic glass Gelsil and three natural sandstones, has been studied for slowly increasing compressive uniaxial stress with rates between 0.2 and 2.8 kPa/s. The measured collapse dynamics is similar to that of Vycor, which is another synthetic porous SiO(2) glass similar to Gelsil but with a different porous mesostructure. Compression occurs by jerks of strain release and a major collapse at the failure point. The acoustic emission and shrinking of the samples during jerks are measured and analyzed. The energies of acoustic emission events, their durations, and the waiting times between events show that the failure process follows avalanche criticality with power law statistics over ca. 4 decades with a power law exponent ɛ ≃ 1.4 for the energy distribution. This exponent is consistent with the mean-field value for the collapse of granular media. Besides the absence of length, energy, and time scales, we demonstrate the existence of aftershock correlations during the failure process. PMID:25215740
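
    As a worked illustration of the quoted exponent (not the authors' fitting procedure), the standard continuous maximum-likelihood estimator for a power-law exponent can be applied to avalanche energies above a threshold E_min. The synthetic energies below are an assumption used only to check that the estimator recovers ε ≈ 1.4.

    import numpy as np

    def powerlaw_exponent(energies, e_min):
        # Continuous maximum-likelihood estimate of eps for p(E) ~ E^(-eps), E >= e_min
        e = np.asarray(energies, dtype=float)
        e = e[e >= e_min]
        return 1.0 + e.size / np.sum(np.log(e / e_min))

    # Synthetic avalanche energies drawn from a power law with eps = 1.4 (inverse CDF)
    rng = np.random.default_rng(0)
    e_min = 1.0
    samples = e_min * (1.0 - rng.random(50_000)) ** (-1.0 / 0.4)
    print(round(powerlaw_exponent(samples, e_min), 2))   # close to 1.4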

  8. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by reducing compression effectiveness and improving operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster and use only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.

  9. Postprocessing of Compressed Images via Sequential Denoising.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC. PMID:27214878
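
    A minimal sketch of the Plug-and-Play ADMM idea referenced above, assuming a generic linear degradation operator and an off-the-shelf denoiser standing in for the prior. The Gaussian-blur operator, the step sizes, and the mild smoothing "denoiser" below are placeholder assumptions, not the linearized compression-decompression model developed in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pnp_admm(y, H, Ht, denoise, rho=1.0, iters=50):
        # Plug-and-Play ADMM: the data-fit subproblem is solved by a few gradient
        # steps, and the prior subproblem is replaced by a call to a denoiser.
        x = Ht(y)
        v = x.copy()
        u = np.zeros_like(x)
        for _ in range(iters):
            for _ in range(5):                             # x-update (gradient steps)
                grad = Ht(H(x) - y) + rho * (x - (v - u))
                x = x - 0.1 * grad
            v = denoise(x + u)                             # v-update via the denoiser
            u = u + x - v                                  # dual update
        return x

    # Toy usage: H is a symmetric blur (its adjoint equals itself) and the
    # "denoiser" is a mild Gaussian smoother standing in for a stronger one.
    H = lambda z: gaussian_filter(z, sigma=2.0)
    denoise = lambda z: gaussian_filter(z, sigma=0.8)
    img = np.random.rand(64, 64)
    restored = pnp_admm(H(img), H, H, denoise)
    print(restored.shape)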

  10. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  11. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.

  12. An efficient medical image compression scheme.

    PubMed

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, a Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, therefore increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme could reduce the cost for the Huffman coding table while achieving high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method for images can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression. PMID:17280962
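
    A minimal sketch of the two-stage idea, assuming a simple left-neighbor predictor and a textbook Huffman coder rather than the paper's cost-reduced coding table: DPCM decorrelates the rows, and the residuals are then entropy coded. The random test image is an assumption for illustration.

    import heapq
    from collections import Counter
    import numpy as np

    def dpcm_residuals(img):
        # First stage: predict each pixel from its left neighbor (simple 1-D DPCM)
        res = img.astype(np.int32)
        res[:, 1:] = res[:, 1:] - res[:, :-1]
        return res

    def huffman_code(symbols):
        # Second stage: a textbook Huffman code built with a min-heap
        heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate case: a single symbol
            return {s: "0" for s in heap[0][2]}
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
        return heap[0][2]

    img = (np.random.rand(64, 64) * 255).astype(np.uint8)
    res = dpcm_residuals(img).ravel().tolist()
    code = huffman_code(res)
    bits = sum(len(code[s]) for s in res)
    print(round(bits / len(res), 2), "bits/pixel for the residual image")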

  13. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two-dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desktop simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
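
    The first application's calculations rest on the standard isentropic-flow and normal-shock relations; a brief sketch, assuming a calorically perfect gas with γ = 1.4, is given below. It illustrates the textbook formulas, not the code of the NASA applications themselves.

    import math

    GAMMA = 1.4  # ratio of specific heats for air (assumed)

    def isentropic_ratios(M, g=GAMMA):
        # Stagnation-to-static temperature and pressure ratios at Mach number M
        T0_T = 1.0 + 0.5 * (g - 1.0) * M**2
        p0_p = T0_T ** (g / (g - 1.0))
        return T0_T, p0_p

    def normal_shock(M1, g=GAMMA):
        # Downstream Mach number and static pressure ratio across a normal shock
        M2 = math.sqrt((1.0 + 0.5 * (g - 1.0) * M1**2) / (g * M1**2 - 0.5 * (g - 1.0)))
        p2_p1 = 1.0 + 2.0 * g / (g + 1.0) * (M1**2 - 1.0)
        return M2, p2_p1

    print(isentropic_ratios(2.0))   # (1.8, ~7.82)
    print(normal_shock(2.0))        # (~0.577, 4.5)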

  14. Compressive Hyperspectral Imaging With Side Information

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Tsai, Tsung-Han; Zhu, Ruoyu; Llull, Patrick; Brady, David; Carin, Lawrence

    2015-09-01

    A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.

  15. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  16. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (Multislice CT and Digital Mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former, there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists, and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied indicating no consensus of opinion on the use of irreversible compression in primary diagnosis, however, they are generally positive on the notion of the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversible compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the

  17. A novel DDS using nonlinear ROM addressing with improved compression ratio and quantization noise.

    PubMed

    Chimakurthy, Lakshmi S Jyothi; Ghosh, Malinky; Dai, Fa Foster; Jaeger, Richard C

    2006-02-01

    This paper presents a novel direct digital frequency synthesis (DDFS) ROM compression technique based on two properties of a sine function: (a) piecewise linear technique to approximate a sinusoid, and (b) variation in the slope of the sinusoid at different phase angles. In the proposed DDFS architecture the ROM stores a few of the sinusoidal values, and the interpolation points between the successive stored values are calculated using linear and nonlinear addressing schemes. The nonlinear addressing scheme is used to adaptively vary the number of interpolation points as the slope of the sinusoid changes, leading to a greatly reduced ROM size. The proposed architecture achieves a high compression ratio with a spurious response comparable to that of recent ROM compression techniques. To validate the proposed DDS architecture, the linear, nonlinear, and conventional DDS ROM architectures were implemented in a Xilinx Spartan II FPGA and their spurious performances were compared. PMID:16529101
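
    A hedged sketch of the underlying idea, not the published DDFS architecture: store a sparse, non-uniformly spaced set of sine samples, denser where the curvature (and thus the linear-interpolation error) is largest, and reconstruct intermediate phase values by linear interpolation. The sample counts and the split point below are illustrative assumptions.

    import numpy as np

    def breakpoints(n_coarse, n_fine, split=np.pi / 4):
        # Non-uniform phase samples over the first quadrant: coarse where the sine
        # is nearly linear, fine near pi/2 where interpolation error is largest.
        coarse = np.linspace(0.0, split, n_coarse, endpoint=False)
        fine = np.linspace(split, np.pi / 2, n_fine)
        return np.concatenate([coarse, fine])

    phases = breakpoints(n_coarse=8, n_fine=24)   # 32 stored samples (the small "ROM")
    table = np.sin(phases)

    def sine_lookup(phi):
        # Piecewise-linear interpolation between the stored samples (first quadrant)
        return np.interp(phi, phases, table)

    test = np.linspace(0.0, np.pi / 2, 10_000)
    print(float(np.max(np.abs(sine_lookup(test) - np.sin(test)))))  # worst-case error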

  18. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time
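
    A minimal Bloom filter sketch illustrating the encode/query idea described above (hash every read into the filter; decode by querying read-length windows of the reference), with hypothetical reads and a hypothetical reference string. It is not the BARCODE implementation and omits the cascade of filters used to remove false positives.

    import hashlib

    class BloomFilter:
        # Minimal Bloom filter: k hash functions over an m-bit array
        def __init__(self, m_bits, k_hashes):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8 + 1)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    # Encode: hash every read into the filter. Decode: slide a read-length window
    # over the reference and keep windows the filter reports as present.
    reads = ["ACGTACGT", "TTGCAACG"]
    bf = BloomFilter(m_bits=1 << 16, k_hashes=4)
    for r in reads:
        bf.add(r)
    reference = "GGACGTACGTTTGCAACGAA"
    k = len(reads[0])
    recovered = {reference[i:i + k] for i in range(len(reference) - k + 1)
                 if reference[i:i + k] in bf}
    print(recovered)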

  19. Compression behaviour of anhydrous and hydrate forms of sodium naproxen.

    PubMed

    Malaj, Ledjan; Censi, Roberta; Gashi, Zehadin; Di Martino, Piera

    2010-05-10

    The aim of the present work was to investigate the technological properties and the compression behaviour of the anhydrous and hydrate solid forms of sodium naproxen. Among the hydrates, the following forms were studied: the monohydrate (MSN), obtained by dehydrating a dihydrated form (DSN) in turn obtained by exposing the anhydrous form to 55% RH; a dihydrated form (CSN), obtained by crystallizing sodium naproxen from water; and the tetrahydrated form (TSN), obtained by exposing the anhydrous form to 75% RH. The physico-chemical (crystalline form and water content), the micromeritic (crystal morphology and particle size) and the mechanical properties (Carr's index, apparent particle density, compression behaviour, elastic recovery and strength of compact) were evaluated. We made every effort to reduce differences in crystal habit, particle size and distribution, and amount of absorbed water among the samples, so that the only factors affecting their technological behaviour would be the degree of hydration and the crystalline structure. This study demonstrates a correlation between the compression behaviour and the water molecules present in the crystalline structures. The sites where water molecules are accommodated in the crystalline structure behave like weak points where the crystalline lattice yields under compression. The crystal deformability is proportional to the number of water molecules in these sites; the higher the water content, the higher the deformability, because the densification behaviour changes from a predominantly elastic deformation to a plastic behaviour. The deformability is responsible for a higher densification tendency that favours larger interparticle bonding areas that may explain the better tabletability of TSN and CSN. PMID:20117196

  20. Shock Compression of Liquid Hydrazine.

    NASA Astrophysics Data System (ADS)

    Voskoboinikov, I. M.

    1999-06-01

    The possibility of calculating the parameters of shock compression of liquid hydrazine within the framework of the schemes is shown. When the mass velocity behind the shock front does not exceed 3.1 km/s, the calculation may be carried out under the assumption that the initial compound (hydrazine) is retained behind the shock front. The detonation velocities of hydrazine solutions with nitromethane and hydrazine nitrate correspond to the decomposition of hydrazine to ammonia and nitrogen, which is accompanied by a noticeable energy release. The estimates performed demonstrate the possibility of detonation of liquid hydrazine at a velocity of 8 km/s, during which the heating of the substance behind the shock front (approximately 2000 K) is comparable with that observed during detonation of liquid explosives. Large values of the critical detonation diameter are expected because the activation energy of hydrazine decomposition is 53.2 kcal/mol; they decrease upon addition of a certain amount of liquid explosives, whose more rapid decomposition behind the shock front gives rise to a temperature increase sufficient for the decomposition of hydrazine.

  1. Compressive sensing for nuclear security.

    SciTech Connect

    Gestner, Brian Joseph

    2013-12-01

    Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Provided that these designs prove feasible to realize in custom low-power analog integrated circuits, this research enables the incorporation of SNM detection into wireless sensor networks.

  2. Spectral compression of single photons

    NASA Astrophysics Data System (ADS)

    Lavoie, J.; Donohue, J. M.; Wright, L. G.; Fedrizzi, A.; Resch, K. J.

    2013-05-01

    Photons are critical to quantum technologies because they can be used for virtually all quantum information tasks, for example, in quantum metrology, as the information carrier in photonic quantum computation, as a mediator in hybrid systems, and to establish long-distance networks. The physical characteristics of photons in these applications differ drastically; spectral bandwidths span 12 orders of magnitude from 50 THz (ref. 6) for quantum-optical coherence tomography to 50 Hz for certain quantum memories. Combining these technologies requires coherent interfaces that reversibly map centre frequencies and bandwidths of photons to avoid excessive loss. Here, we demonstrate bandwidth compression of single photons by a factor of 40 as well as tunability over a range 70 times that bandwidth via sum-frequency generation with chirped laser pulses. This constitutes a time-to-frequency interface for light capable of converting time-bin to colour entanglement, and enables ultrafast timing measurements. It is a step towards arbitrary waveform generation for single and entangled photons.

  3. PHELIX for flux compression studies

    SciTech Connect

    Turchi, Peter J; Rousculp, Christopher L; Reinovsky, Robert E; Reass, William A; Griego, Jeffrey R; Oro, David M; Merrill, Frank E

    2010-06-28

    PHELIX (Precision High Energy-density Liner Implosion eXperiment) is a concept for studying electromagnetic implosions using proton radiography. This approach requires a portable pulsed power and liner implosion apparatus that can be operated in conjunction with an 800 MeV proton beam at the Los Alamos Neutron Science Center. The high resolution (< 100 micron) provided by proton radiography combined with similar precision of liner implosions driven electromagnetically can permit close comparisons of multi-frame experimental data and numerical simulations within a single dynamic event. To achieve a portable implosion system for use at high energy-density in a proton laboratory area requires sub-megajoule energies applied to implosions only a few cms in radial and axial dimension. The associated inductance changes are therefore relatively modest, so a current step-up transformer arrangement is employed to avoid excessive loss to parasitic inductances that are relatively large for low-energy banks comprising only several capacitors and switches. We describe the design, construction and operation of the PHELIX system and discuss application to liner-driven, magnetic flux compression experiments. For the latter, the ability of strong magnetic fields to deflect the proton beam may offer a novel technique for measurement of field distributions near perturbed surfaces.

  4. Artificial Compressibility with Entropic Damping

    NASA Astrophysics Data System (ADS)

    Clausen, Jonathan; Roberts, Scott

    2012-11-01

    Artificial Compressibility (AC) methods relax the strict incompressibility constraint associated with the incompressible Navier-Stokes equations. Instead, they rely on an artificial equation of state relating pressure and density fluctuations through a numerical Mach number. Such methods are not new: the first AC methods date back to Chorin (1967). More recent applications can be found in the lattice-Boltzmann method, which is a kinetic/mesoscopic method that converges to an AC form of the Navier-Stokes equations. With computing hardware trending towards massively parallel architectures in order to achieve high computational throughput, AC style methods have become attractive due to their local information propagation and concomitant parallelizable algorithms. In this work, we examine a damped form of AC in the context of finite-difference and finite-element methods, with a focus on achieving time-accurate simulations. Also, we comment on the scalability of the various algorithms. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
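
    A minimal one-dimensional sketch of the artificial compressibility idea (without the entropic damping studied in the abstract): the pressure is advanced through an artificial equation of state tied to the velocity divergence, so divergence errors propagate away as pseudo-acoustic waves that viscosity gradually damps, rather than being projected out exactly. The grid size, time step, β, and viscosity below are illustrative assumptions, not values from the work described.

    import numpy as np

    n = 128
    dx = 1.0 / n
    dt = 2.0e-4
    beta = 1.0     # artificial compressibility parameter (pseudo sound speed sqrt(beta))
    nu = 0.02      # viscosity

    x = np.arange(n) * dx
    u = np.sin(2.0 * np.pi * x)    # initial velocity with nonzero divergence
    p = np.zeros(n)

    def ddx(f):
        # centered first derivative on a periodic grid
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

    def d2dx2(f):
        # centered second derivative on a periodic grid
        return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

    for _ in range(100_000):
        u = u + dt * (-u * ddx(u) - ddx(p) + nu * d2dx2(u))   # momentum update
        p = p + dt * (-beta * ddx(u))                          # artificial EOS update

    print(float(np.max(np.abs(ddx(u)))))   # velocity divergence decays toward zero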

  5. The compression pathway of quartz

    SciTech Connect

    Thompson, Richard M.; Downs, Robert T.; Dera, Przemyslaw

    2011-11-07

    The structure of quartz over the temperature domain (298 K, 1078 K) and pressure domain (0 GPa, 20.25 GPa) is compared to the following three hypothetical quartz crystals: (1) Ideal α-quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent (ideal β-quartz has Si-O-Si angle fixed at 155.6°). (2) Model α-quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and the same volume as its observed equivalent. Comparison of experimental data recorded in the literature for quartz with these hypothetical crystal structures shows that quartz becomes more ideal as temperature increases, more BCC as pressure increases, and that model quartz is a very good representation of observed quartz under all conditions. This is consistent with the hypothesis that quartz compresses through Si-O-Si angle-bending, which is resisted by anion-anion repulsion resulting in increasing distortion of the c/a axial ratio from ideal as temperature decreases and/or pressure increases.

  6. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  7. Static Compression of Tetramethylammonium Borohydride

    SciTech Connect

    Dalton, Douglas Allen; Somayazulu, M.; Goncharov, Alexander F.; Hemley, Russell J.

    2011-11-15

    Raman spectroscopy and synchrotron X-ray diffraction are used to examine the high-pressure behavior of tetramethylammonium borohydride (TMAB) to 40 GPa at room temperature. The measurements reveal weak pressure-induced structural transitions around 5 and 20 GPa. Rietveld analysis and Le Bail fits of the powder diffraction data based on known structures of tetramethylammonium salts indicate that the transitions are mediated by orientational ordering of the BH4- tetrahedra followed by tilting of the (CH3)4N+ groups. X-ray diffraction patterns obtained during pressure release suggest reversibility with a degree of hysteresis. Changes in the Raman spectrum confirm that these transitions are not accompanied by bonding changes between the two ionic species. At ambient conditions, TMAB does not possess dihydrogen bonding, and Raman data confirms that this feature is not activated upon compression. The pressure-volume equation of state obtained from the diffraction data gives a bulk modulus [K0 = 5.9(6) GPa, K0' = 9.6(4)] slightly lower than that observed for ammonia borane. Raman spectra obtained over the entire pressure range (spanning over 40% densification) indicate that the intramolecular vibrational modes are largely coupled.
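
    As a rough illustration of how the reported equation-of-state parameters are used (a sketch under assumptions, not the authors' fit), the third-order Birch-Murnaghan form gives the pressure at a chosen compression V/V0 from K0 and K0'. The printed values are only indicative; the published fit may use a different EOS form or reference volume.

    def birch_murnaghan_3(V_over_V0, K0=5.9, K0p=9.6):
        # Third-order Birch-Murnaghan pressure (GPa) at compression V/V0, using the
        # bulk modulus K0 and its pressure derivative K0' quoted in the abstract
        x = (1.0 / V_over_V0) ** (1.0 / 3.0)      # (V0/V)^(1/3)
        return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))

    for v in (0.95, 0.85, 0.70, 0.60):
        print(f"V/V0 = {v:.2f}  ->  P ~ {birch_murnaghan_3(v):.1f} GPa")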

  8. Shock compression profiles in ceramics

    SciTech Connect

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al2O3, AlN, B4C, SiC, Si3N4, TiB2, WC and ZrO2. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation on high-strength, brittle solids.

  9. Stability of rectangular plates with longitudinal or transverse stiffeners under uniform compression

    NASA Technical Reports Server (NTRS)

    Barbre, R

    1939-01-01

    In the present paper, the complete buckling conditions of stiffened plates are being developed for uniform compression. We shall treat plates with one or two longitudinal or transverse stiffeners at any point, discuss the buckling conditions, and evaluate them for different cases.

  10. Stabilization of Rayleigh-Taylor instability in the presence of viscosity and compressibility: A critical analysis

    NASA Astrophysics Data System (ADS)

    Mitra, A.; Roychoudhury, R.; Khan, M.

    2016-02-01

    The stabilization of the Rayleigh-Taylor instability growth rate due to the combined effect of viscosity and compressibility has been studied. A detailed explanation of the observed results has been given from a theoretical point of view. The numerical results have been compared qualitatively with those of Plesset and Whipple [Phys. Fluids 17, 1 (1974)] and Bernstein and Book [Phys. Fluids 26, 453 (1983)].

  11. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    ERIC Educational Resources Information Center

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  12. Weakly relativistic and ponderomotive effects on self-focusing and self-compression of laser pulses in near critical plasmas

    SciTech Connect

    Bokaei, B.; Niknam, A. R.

    2014-10-15

    The spatiotemporal dynamics of high power laser pulses in near critical plasmas are studied taking into account the effects of relativistic and ponderomotive nonlinearities. First, within one-dimensional analysis, the effects of initial parameters such as laser intensity, plasma density, and plasma electron temperature on the self-compression mechanism are discussed. The results illustrate that the ponderomotive nonlinearity obstructs the relativistic self-compression above a certain intensity value. Moreover, the results indicate the existence of a turning-point temperature at which the compression process is strongest. Next, the three-dimensional analysis of laser pulse propagation is investigated by coupling the self-focusing equation with the self-compression one. It is shown that in contrast to the case in which only the relativistic nonlinearity is considered, in the presence of ponderomotive nonlinearity, the self-compression mechanism obstructs the self-focusing and leads to an increase of the laser spot size.

  13. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast low latency file access. Other formats such as JPEG2000 provide for streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
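
    A hedged sketch of the controlled-lossy idea behind LERC-style coding (illustrative only, not the actual LERC codec): values in a block are stored as offsets from the block minimum, quantized to a user-specified maximum error, so each value needs far fewer bits than a 32-bit float. The synthetic elevation block and the error tolerance below are assumptions.

    import numpy as np

    def quantize_block(block, max_error):
        # Controlled-lossy encoding: offsets from the block minimum, quantized so
        # the reconstruction error never exceeds max_error
        step = 2.0 * max_error                 # rounding to step/2 bounds the error
        base = float(block.min())
        q = np.round((block - base) / step).astype(np.uint32)
        bits = int(q.max()).bit_length() or 1  # bits needed per stored value
        return base, step, q, bits

    def dequantize_block(base, step, q):
        return base + q * step

    elev = np.random.rand(256, 256).astype(np.float32) * 500.0   # synthetic elevations (m)
    base, step, q, bits = quantize_block(elev, max_error=0.1)
    rec = dequantize_block(base, step, q)
    print(bits, "bits/value,", float(np.abs(rec - elev).max()), "m max error")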

  14. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  15. Aerodynamics inside a rapid compression machine

    SciTech Connect

    Mittal, Gaurav; Sung, Chih-Jen

    2006-04-15

    The aerodynamics inside a rapid compression machine after the end of compression is investigated using planar laser-induced fluorescence (PLIF) of acetone. To study the effect of reaction chamber configuration on the resulting aerodynamics and temperature field, experiments are conducted and compared using a creviced piston and a flat piston under varying conditions. Results show that the flat piston design leads to significant mixing of the cold vortex with the hot core region, which causes alternate hot and cold regions inside the combustion chamber. At higher pressures, the effect of the vortex is reduced. The creviced piston head configuration is demonstrated to result in drastic reduction of the effect of the vortex. Experimental conditions are also simulated using the Star-CD computational fluid dynamics package. Computed results closely match with experimental observation. Numerical results indicate that with a flat piston design, gas velocity after compression is very high and the core region shrinks quickly due to rapid entrainment of cold gases, whereas for a creviced piston head design, gas velocity after compression is significantly lower and the core region remains unaffected for a long duration. As a consequence, for the flat piston, the adiabatic core assumption can significantly overpredict the maximum temperature after the end of compression. For the creviced piston, the adiabatic core assumption is found to be valid even up to 100 ms after compression. This work therefore experimentally and numerically substantiates the importance of piston head design for achieving a homogeneous core region inside a rapid compression machine. (author)

  16. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.

  17. Segmentation-based CT image compression

    NASA Astrophysics Data System (ADS)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

    The existing image compression standards like JPEG and JPEG 2000 compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, it can give rise to a more efficient coding scheme. For example, CT reconstructed images have uniform background outside the field of view (FOV). Even the portion within the FOV can be divided as anatomically relevant and irrelevant parts. They have distinctly different statistics. Hence coding them separately will result in more efficient compression. Segmentation is done based on thresholding and shape information is stored using 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the 1st order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding Huffman code is chosen for entropy coding. Segment based coding will have an overhead of one table per segment but the overhead is minimal. Lossless compression of images based on segmentation resulted in reduction of bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. Segmentation based scheme also has the advantage of natural ROI based progressive decoding. If it is allowed to delete the diagnostically irrelevant portions, the bit budget can go down as much as 40%. This concept can be extended to other modalities.

  18. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data. PMID:16948299

  19. Electrorheological fluid under elongation, compression, and shearing

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Meng, Y.; Mao, H.; Wen, S.

    2002-03-01

    Electrorheological (ER) fluid based on zeolite and silicone oil under elongation, compression, and shearing was investigated at room temperature. Dc electric fields were applied on the ER fluid when elongation and compression were carried out on a self-constructed test system. The shear yield stress, presenting the macroscopic interactions of particles in the ER fluid along the direction of shearing and perpendicular to the direction of the electric field, was also obtained by a HAAKE RV20 rheometer. The tensile yield stress, presenting the macroscopic interactions of particles in the ER fluid along the direction of the electric field, was achieved as the peak value in the elongating curve with an elongating yield strain of 0.15-0.20. A shear yield angle of about 15°-18.5° reasonably connected the tensile yield stress with the shear yield stress, agreeing well with the shear yield angles measured by other researchers. The compression tests showed that the ER fluid has a high compressive modulus under a small compressive strain lower than 0.1. The compressive stress has an exponential relationship with the compressive strain when it is higher than 0.1, and it is much higher than the shear yield stress.

  20. MAFCO: A Compression Tool for MAF Files

    PubMed Central

    Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.

    2015-01-01

    In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, such general-purpose tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, containing alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source-code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229

  1. Nickel Curie Point Engine

    ERIC Educational Resources Information Center

    Chiaverina, Chris; Lisensky, George

    2014-01-01

    Ferromagnetic materials such as nickel, iron, or cobalt lose the electron alignment that makes them attracted to a magnet when sufficient thermal energy is added. The temperature at which this change occurs is called the "Curie temperature," or "Curie point." Nickel has a Curie point of 627 K, so a candle flame is a sufficient…

  2. Torsade de pointes.

    PubMed

    Munro, P T; Graham, C A

    2002-09-01

    A case is described of torsade de pointes in a 41 year old woman with pre-existing QTc prolongation, potentially exacerbated by treatment with sotalol. Previous cardiac investigations had been normal and after a second episode of ventricular fibrillation the patient was referred for electrophysiological studies. The authors review the physiology, causes, and treatment of QTc prolongation and torsade de pointes. PMID:12205024

  3. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  4. Compression of Space for Low Visibility Probes

    PubMed Central

    Born, Sabine; Krüger, Hannah M.; Zimmermann, Eckart; Cavanagh, Patrick

    2016-01-01

    Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross et al., 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann et al., 2014a). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli. PMID:27013989

  5. Multispectral Image Feature Points

    PubMed Central

    Aguilera, Cristhian; Barrera, Fernando; Lumbreras, Felipe; Sappa, Angel D.; Toledo, Ricardo

    2012-01-01

    This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like based scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach as well as the improvements it offers with respect to the current state-of-the-art.

  6. Subpicosecond Compression Experiments at Los Alamos National Laboratory

    SciTech Connect

    Carlsten, B.E.; Feldman, D.W.; Kinross-Wright, J.M.; Milder, M.L.; Russell, S.J.; Plato, J.G.; Sherwood, B.A.; Weber, M.E.; Cooper, R.G.; Sturges, R.E.

    1996-04-01

    We report on recent experiments using a magnetic chicane compressor at 8 MeV. Electron bunches at both low (0.1 nC) and high (1 nC) charges were compressed from 10-15 ps to less than 1 ps (FWHM). A transverse deflecting rf cavity was used to measure the bunch length at low charge; the bunch length at high charge was inferred from the induced energy spread of the beam. The longitudinal centrifugal space-charge force [Phys. Rev. E 51, 1453 (1995)] is calculated using a point-to-point numerical simulation and is shown not to influence the energy-spread measurement. © 1996 American Institute of Physics.

  7. Lensfree color imaging on a nanostructured chip using compressive decoding

    PubMed Central

    Khademhosseinieh, Bahar; Biener, Gabriel; Sencan, Ikbal; Ozcan, Aydogan

    2010-01-01

    We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability. PMID:21173866

  8. Software For Tie-Point Registration Of SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice

    1995-01-01

    The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. For example, a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image can be registered, as long as the user can generate a binary image to be used by the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.

  9. Lossy compression of hyperspectral images based on noise parameters estimation and variance stabilizing transform

    NASA Astrophysics Data System (ADS)

    Zemliachenko, Alexander N.; Kozhemiakin, Ruslan A.; Uss, Mikhail L.; Abramov, Sergey K.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Vozel, Benoît; Chehdi, Kacem

    2014-01-01

    A problem of lossy compression of hyperspectral images is considered. A specific aspect is that we assume a signal-dependent model of noise for data acquired by new generation sensors. Moreover, a signal-dependent component of the noise is assumed dominant compared to a signal-independent noise component. Sub-band (component-wise) lossy compression is studied first, and it is demonstrated that optimal operation point (OOP) can exist. For such OOP, the mean square error between compressed and noise-free images attains global or, at least, local minimum, i.e., a good effect of noise removal (filtering) is reached. In practice, we show how compression in the neighborhood of OOP can be carried out, when a noise-free image is not available. Two approaches for reaching this goal are studied. First, lossy compression directly applied to the original data is considered. According to another approach, lossy compression is applied to images after direct variance stabilizing transform (VST) with properly adjusted parameters. Inverse VST has to be performed only after data decompression. It is shown that the second approach has certain advantages. One of them is that the quantization step for a coder can be set the same for all sub-band images. This offers favorable prerequisites for applying three-dimensional (3-D) methods of lossy compression for sub-band images combined into groups after VST. Two approaches to 3-D compression, based on the discrete cosine transform, are proposed and studied. A first approach presumes obtaining the reference and "difference" images for each group. A second performs compression directly for sub-images in a group. We show that it is a good choice to have 16 sub-images in each group. The abovementioned approaches are tested for Hyperion hyperspectral data. It is demonstrated that the compression ratio of about 15-20 can be provided for hyperspectral image compression in the neighborhood of OOP for 3-D coders, which is sufficiently larger than
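
    A minimal sketch of the variance-stabilization step, assuming noise variance of the form a·I + b with hypothetical parameters; this is a generic square-root VST, not necessarily the exact transform or parameter-estimation procedure of the paper. After the forward transform the noise is approximately unit variance, so a single quantization step can serve all sub-band images, and the inverse transform is applied only after decompression.

    import numpy as np

    def vst_forward(img, a, b):
        # For noise variance a*I + b, this square-root transform makes the noise
        # approximately unit-variance across the whole intensity range
        return (2.0 / a) * np.sqrt(np.maximum(a * img + b, 0.0))

    def vst_inverse(t, a, b):
        # Algebraic inverse, applied after decompression
        return ((a * t / 2.0) ** 2 - b) / a

    # Hypothetical noise parameters and a synthetic sub-band image
    a, b = 0.5, 4.0
    clean = np.random.rand(128, 128) * 200.0
    noisy = clean + np.random.randn(128, 128) * np.sqrt(a * clean + b)

    stabilized_noise = vst_forward(noisy, a, b) - vst_forward(clean, a, b)
    print(round(float(stabilized_noise.std()), 2))                  # close to 1.0
    roundtrip = vst_inverse(vst_forward(clean, a, b), a, b)
    print(float(np.abs(roundtrip - clean).max()))                   # near-zero error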

  10. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the SEI communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  11. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  12. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards, such as the wide variety of electrical power, pressurized air, and cooling water systems in use, as well as crane and hoist loads, work at height, and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  13. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time
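    To make the word-aligned idea concrete, the sketch below run-length encodes a bitmap into 32-bit words in the spirit of WAH: a literal word carries 31 raw bits, and a fill word (top bit set) records how many consecutive 31-bit groups are all zeros or all ones. It is an illustrative simplification, not the authors' implementation, and omits details such as decoding and very long runs.

        def wah_encode(bits, word_bits=32):
            # Encode a 0/1 bitmap into literal and fill words, WAH-style.
            payload = word_bits - 1              # 31 payload bits per 32-bit word
            words, run_val, run_len = [], None, 0

            def flush_run():
                nonlocal run_val, run_len
                if run_len:
                    # Fill word: top bit set, next bit = fill value, rest = group count.
                    words.append((1 << (word_bits - 1))
                                 | (run_val << (word_bits - 2)) | run_len)
                    run_val, run_len = None, 0

            for i in range(0, len(bits), payload):
                chunk = bits[i:i + payload]
                if len(chunk) == payload and all(b == chunk[0] for b in chunk):
                    if run_val == chunk[0]:
                        run_len += 1
                    else:
                        flush_run()
                        run_val, run_len = chunk[0], 1
                else:
                    flush_run()
                    literal = 0
                    for b in chunk:
                        literal = (literal << 1) | b
                    literal <<= payload - len(chunk)   # pad a short final chunk
                    words.append(literal)              # literal word: top bit clear
            flush_run()
            return words

        # A sparse 6203-bit bitmap collapses to just a few 32-bit words.
        bitmap = [0] * 3100 + [1, 0, 1] + [0] * 3100
        print(len(wah_encode(bitmap)))                 # -> 4

    Bitwise AND/OR between two such encodings can then proceed word by word, which is the source of the CPU-time advantage the abstract describes.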

  14. Properties of compressible elastica from relativistic analogy.

    PubMed

    Oshri, Oz; Diamant, Haim

    2016-01-21

    Kirchhoff's kinetic analogy relates the deformation of an incompressible elastic rod to the classical dynamics of rigid body rotation. We extend the analogy to compressible filaments and find that the extension is similar to the introduction of relativistic effects into the dynamical system. The extended analogy reveals a surprising symmetry in the deformations of compressible elastica. In addition, we use known results for the buckling of compressible elastica to derive the explicit solution for the motion of a relativistic nonlinear pendulum. We discuss cases where the extended Kirchhoff analogy may be useful for the study of other soft matter systems. PMID:26563905

  15. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Cadwallader, L.C.

    2005-05-15

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards, such as the wide variety of electrical power, pressurized air, and cooling water systems in use, as well as crane and hoist loads, work at height, and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  16. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
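    The select-above-threshold step can be illustrated as follows; a plain FFT stands in for the patented log Gabor transformation, and the threshold, test signal, and function names are assumptions made for the example.

        import numpy as np

        def compress_spectrum(x, threshold_db):
            # Transform to the frequency domain and keep only the strong bins.
            spectrum = np.fft.rfft(x)
            log_mag = 20.0 * np.log10(np.abs(spectrum) + 1e-12)
            phase = np.angle(spectrum)
            keep = log_mag > threshold_db
            # Only the indices, log magnitudes, and phases of kept bins are transmitted.
            return keep.nonzero()[0], log_mag[keep], phase[keep], len(x)

        def expand_spectrum(idx, log_mag, phase, n):
            # Rebuild the (mostly empty) spectrum and invert the transform.
            spectrum = np.zeros(n // 2 + 1, dtype=complex)
            spectrum[idx] = 10.0 ** (log_mag / 20.0) * np.exp(1j * phase)
            return np.fft.irfft(spectrum, n)

        t = np.linspace(0.0, 1.0, 1024, endpoint=False)
        x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        recovered = expand_spectrum(*compress_spectrum(x, threshold_db=20.0))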

  17. Evolution Of Nonlinear Waves in Compressing Plasma

    SciTech Connect

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  18. Reversible intraframe compression of medical images.

    PubMed

    Roos, P; Viergever, M A; van Dijke, M A; Peters, J H

    1988-01-01

    The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher. PMID:18230486
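    The HINT idea can be sketched in one dimension: even-indexed samples form a coarser signal, odd-indexed samples are predicted by interpolating their even neighbors, and only the low-entropy prediction residuals are kept for coding. The version below is an illustrative assumption about the structure of the method, not the authors' 2-D implementation.

        import numpy as np

        def hint_decorrelate(x, levels=3):
            # Returns the coarsest signal plus one residual array per level.
            residuals = []
            current = np.asarray(x, dtype=float)
            for _ in range(levels):
                even = current[0::2]
                odd = current[1::2]
                # Predict each odd sample as the mean of its even neighbours.
                left = even[:len(odd)]
                right = even[1:len(odd) + 1]
                if len(right) < len(odd):              # edge case for even-length signals
                    right = np.append(right, even[-1])
                residuals.append(odd - 0.5 * (left + right))   # low-entropy residuals
                current = even
            return current, residuals

        coarse, res = hint_decorrelate(np.cumsum(np.random.randn(257)))
        print(len(coarse), [r.size for r in res])

    The coarse signal and the residuals would then go to a Huffman coder, which is the coding stage the abstract treats as near-optimal.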

  19. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
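    The quantization step can be sketched as follows for a single 8x8 block; the flat quantization matrix is a placeholder, whereas the invention derives its matrix from luminance and contrast masking models to meet a perceptual-error or bit-rate target.

        import numpy as np
        from scipy.fft import dctn, idctn

        def quantize_block(block, qmatrix):
            # Forward DCT, divide by the quantization matrix, and round.
            return np.round(dctn(block, norm='ortho') / qmatrix)

        def dequantize_block(qcoeffs, qmatrix):
            return idctn(qcoeffs * qmatrix, norm='ortho')

        # Hypothetical 8x8 image block and a flat (non-perceptual) quantization matrix.
        block = np.random.randint(0, 256, (8, 8)).astype(float)
        qmatrix = np.full((8, 8), 16.0)
        reconstructed = dequantize_block(quantize_block(block, qmatrix), qmatrix)
        print(np.max(np.abs(block - reconstructed)))   # small error from quantization

    Larger entries in the quantization matrix discard more of the corresponding coefficient, which is how a perceptually derived matrix removes the components the eye cannot see.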

  20. Compression of a bundle of light rays.

    PubMed

    Marcuse, D

    1971-03-01

    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution. PMID:20094478

  1. Modulation compression for short wavelength harmonic generation

    SciTech Connect

    Qiang, J.

    2010-01-11

    A laser modulator is used to seed free-electron lasers. In this paper, we propose a scheme to compress the initial laser modulation in the longitudinal phase space by using two opposite-sign bunch compressors and two opposite-sign energy chirpers. This scheme could potentially reduce the initial modulation wavelength by a factor of C and increase the energy modulation amplitude by a factor of C, where C is the compression factor of the first bunch compressor. Such a compressed energy modulation can be directly used to generate short-wavelength current modulation with a large bunching factor.

  2. Calculation methods for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1976-01-01

    Calculation procedures for non-reacting compressible two- and three-dimensional turbulent boundary layers were reviewed. Integral, transformation, and correlation methods, as well as finite difference solutions of the complete boundary layer equations, were summarized. Alternative numerical solution procedures were examined, and both mean field and mean turbulence field closure models were considered. Physics and related calculation problems peculiar to compressible turbulent boundary layers are described. A catalog of available solution procedures of the finite difference, finite element, and method of weighted residuals genre is included. The influence of compressibility, low Reynolds number, wall blowing, and pressure gradient upon mean field closure constants is reported.

  3. Compression of Complex-Valued SAR Imagery

    SciTech Connect

    Eichel P.; Ives, R.W.

    1999-03-03

    Synthetic Aperture Radars are coherent imaging systems that produce complex-valued images of the ground. Because modern systems can generate large amounts of data, there is substantial interest in applying image compression techniques to these products. In this paper, we examine the properties of complex-valued SAR images relevant to the task of data compression. We advocate the use of transform-based compression methods but employ radically different quantization strategies than those commonly used for incoherent optical images. The theory, methodology, and examples are presented.

  4. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  5. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  6. Analysis of kink band formation under compression

    NASA Technical Reports Server (NTRS)

    Hahn, H. Thomas

    1987-01-01

    The kink band formation in unidirectional composites under compression is analyzed in the present paper. The kinematics of kink band formation is described in terms of a deformation tensor. Equilibrium conditions are then applied to relate the compression load to the deformation of fibers. Since the in situ shear behavior of the matrix resin is not known, an analysis-experiment correlation is used to find the shear failure strain in the kink band. The present analysis thus elucidates the mechanisms and identifies the controlling parameters of compression failure.

  7. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  8. Dependability Improvement for PPM Compressed Data by Using Compression Pattern Matching

    NASA Astrophysics Data System (ADS)

    Kitakami, Masato; Okura, Toshihiro

    Data compression is popularly applied to computer systems and communication systems in order to reduce storage size and communication time, respectively. Since large data sets are used frequently, string matching for such data takes a long time. If the data are compressed, the time gets much longer because decompression is necessary. Long string matching time makes computer virus scan time longer and seriously affects the security of the data. For this reason, CPM (Compression Pattern Matching) methods for several compression methods have been proposed. This paper proposes a CPM method for PPM which achieves fast virus scan and improves dependability of the compressed data, where PPM is based on a Markov model, uses context information, and achieves a better compression ratio than the BW transform and Ziv-Lempel coding. The proposed method encodes the context information, which is generated in the compression process, and appends the encoded data at the beginning of the compressed data as a header. The proposed method uses only the header information. Computer simulation shows that the change in compression ratio is less than 5 percent if the order of the PPM is less than 5 and the source file size is more than 1M bytes, where the order is the maximum length of the context used in PPM compression. String matching time is independent of the source file size and is very short, less than 0.3 microseconds on the PC used for the simulation.

  9. Compressive fluorescence microscopy for biological and hyperspectral imaging.

    PubMed

    Studer, Vincent; Bobin, Jérome; Chahid, Makhlad; Mousavi, Hamed Shams; Candes, Emmanuel; Dahan, Maxime

    2012-06-26

    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices--especially in optics--have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy. PMID:22689950
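    As a rough illustration of the reconstruction step common to CS acquisition schemes, the sketch below recovers a sparse vector from undersampled linear measurements with iterative soft thresholding (ISTA). The random measurement matrix, sparsity level, and regularization weight are assumptions for the example and do not represent the microscope's actual sensing operator or solver.

        import numpy as np

        def ista(A, y, lam=0.05, iters=500):
            # Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft thresholding.
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - A.T @ (A @ x - y) / L                            # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # shrinkage
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 8                       # 4x undersampling, 8 nonzeros
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_hat = ista(A, A @ x_true)
        print(np.linalg.norm(x_hat - x_true))      # small residual for a sparse signal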

  10. Predicting failure: acoustic emission of berlinite under compression.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Sellappan, Pathikumar; Kriven, Waltraud M; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-07-01

    Acoustic emission has been measured and statistical characteristics analyzed during the stress-induced collapse of porous berlinite, AlPO4, containing up to 50 vol% porosity. Stress collapse occurs in a series of individual events (avalanches), and each avalanche leads to a jerk in sample compression with corresponding acoustic emission (AE) signals. The distribution of AE avalanche energies can be approximately described by a power law p(E)dE ∝ E^(-ε)dE (ε ~ 1.8) over a large stress interval. We observed several collapse mechanisms whereby less porous minerals show the superposition of independent jerks, which were not related to the major collapse at the failure stress. In highly porous berlinite (40% and 50%) an increase of energy emission occurred near the failure point. In contrast, the less porous samples did not show such an increase in energy emission. Instead, in the near vicinity of the main failure point they showed a reduction in the energy exponent to ~ 1.4, which is consistent with the value reported for compressed porous systems displaying critical behavior. This suggests that a critical avalanche regime with a lack of precursor events occurs. In this case, all preceding large events were 'false alarms' and unrelated to the main failure event. Our results identify a method to use pico-seismicity detection of foreshocks to warn of mine collapse before the main failure (the collapse) occurs, which can be applied to highly porous materials only. PMID:24919038

  11. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
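    A serial, truncated-HOSVD version of Tucker compression can be sketched with NumPy as below; the tensor size and ranks are illustrative assumptions, and the paper's distributed-memory implementation with its specialized data layouts is of course far more involved.

        import numpy as np

        def unfold(tensor, mode):
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        def tucker_compress(tensor, ranks):
            # Truncated HOSVD: one factor matrix per mode plus a small core tensor.
            factors = []
            for mode, r in enumerate(ranks):
                u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
                factors.append(u[:, :r])
            core = tensor
            for u in factors:
                # Contracting axis 0 each time cycles through the modes in order.
                core = np.tensordot(core, u, axes=([0], [0]))
            return core, factors

        def tucker_reconstruct(core, factors):
            t = core
            for u in factors:
                t = np.tensordot(t, u, axes=([0], [1]))
            return t

        x = np.random.rand(32, 32, 32)
        core, factors = tucker_compress(x, ranks=(8, 8, 8))
        x_hat = tucker_reconstruct(core, factors)
        ratio = x.size / (core.size + sum(f.size for f in factors))
        print(ratio, np.linalg.norm(x - x_hat) / np.linalg.norm(x))

    The compression ratio is simply the number of original entries divided by the number of entries in the core and factor matrices, which is why low multilinear ranks yield the large ratios reported above.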

  12. 3M Coban 2 Layer Compression Therapy: Intelligent Compression Dynamics to Suit Different Patient Needs

    PubMed Central

    Bernatchez, Stéphanie F.; Tucker, Joseph; Schnobrich, Ellen; Parks, Patrick J.

    2012-01-01

    Problem Chronic venous insufficiency can lead to recalcitrant leg ulcers. Compression has been shown to be effective in healing these ulcers, but most products are difficult to apply and uncomfortable for patients, leading to inconsistent/ineffective clinical application and poor compliance. In addition, compression presents risks for patients with an ankle-brachial pressure index (ABPI) <0.8 because of the possibility of further compromising the arterial circulation. The ABPI is the ratio of systolic leg blood pressure (taken at ankle) to systolic arm blood pressure (taken above elbow, at brachial artery). This is measured to assess a patient's lower extremity arterial perfusion before initiating compression therapy.1 Solution Using materials science, two-layer compression systems with controlled compression and a low profile were developed. These materials allow for a more consistent bandage application with better control of the applied compression, and their low profile is compatible with most footwear, increasing patient acceptance and compliance with therapy. The original 3M™ Coban™ 2 Layer Compression System is suited for patients with an ABPI ≥0.8; 3M™ Coban™ 2 Layer Lite Compression System can be used on patients with ABPI ≥0.5. New Technology Both compression systems are composed of two layers that combine to create an inelastic sleeve conforming to the limb contour to provide a consistent proper pressure profile to reduce edema. In addition, they slip significantly less than other compression products and improve patient daily living activities and physical symptoms. Indications for Use Both compression systems are indicated for patients with venous leg ulcers, lymphedema, and other conditions where compression therapy is appropriate. Caution As with any compression system, caution must be used when mixed venous and arterial disease is present to not induce any damage. These products are not indicated when the ABPI is <0.5. PMID:24527315
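    The ABPI screening rule described above reduces to a single ratio and two thresholds; the sketch below spells it out with hypothetical pressures and hypothetical function names, and is no substitute for the full clinical assessment the abstract calls for.

        def abpi(ankle_systolic_mmHg, brachial_systolic_mmHg):
            # Ankle-brachial pressure index as defined above.
            return ankle_systolic_mmHg / brachial_systolic_mmHg

        def compression_option(abpi_value):
            # Thresholds as stated for the two systems described above.
            if abpi_value >= 0.8:
                return "standard two-layer compression"
            if abpi_value >= 0.5:
                return "reduced ('lite') two-layer compression"
            return "compression not indicated"

        print(compression_option(abpi(110.0, 130.0)))   # hypothetical pressures, ABPI ~ 0.85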

  13. Stent Compression in Iliac Vein Compression Syndrome Associated with Acute Ilio-Femoral Deep Vein Thrombosis

    PubMed Central

    Cho, Hun; Kim, Jin Woo; Hong, You Sun; Lim, Sang Hyun

    2015-01-01

    Objective This study was conducted to evaluate stent compression in iliac vein compression syndrome (IVCS) and to identify its association with stent patency. Materials and Methods Between May 2005 and June 2014, after stent placement for the treatment of IVCS with acute ilio-femoral deep vein thrombosis, follow-up CT venography was performed in 48 patients (35 women, 13 men; age range 23-87 years; median age 56 years). Using follow-up CT venography, the degree of the stent compression was calculated and used to divide patients into two groups. Possible factors associated with stent compression and patency were evaluated. The cumulative degree of stent compression and patency rate were analyzed. Results All of the stents used were laser-cut nitinol stents. The proportion of limbs showing significant stent compression was 33%. Fifty-six percent of limbs in the significant stent compression group developed stent occlusion. On the other hand, only 9% of limbs in the insignificant stent compression group developed stent occlusion. Significant stent compression was inversely correlated with stent patency (p < 0.001). The median patency period evaluated with Kaplan-Meier analysis was 20.0 months for patients with significant stent compression. Other factors including gender, age, and type of stent were not correlated with stent patency. Significant stent compression occurred most frequently (87.5%) at the upper end of the stent (ilio-caval junction). Conclusion Significant compression of nitinol stents placed in IVCS highly affects stent patency. Therefore, in order to prevent stent compression in IVCS, nitinol stents with higher radial resistive force may be required. PMID:26175570

  14. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression. PMID:22722754

  15. Biomechanics of turtle shells: how whole shells fail in compression.

    PubMed

    Magwene, Paul M; Socha, John J

    2013-02-01

    Turtle shells are a form of armor that provides varying degrees of protection against predation. Although this function of the shell as armor is widely appreciated, the mechanical limits of protection and the modes of failure when subjected to breaking stresses have not been well explored. We studied the mechanical properties of whole shells and of isolated bony tissues and sutures in four species of turtles (Trachemys scripta, Malaclemys terrapin, Chrysemys picta, and Terrapene carolina) using a combination of structural and mechanical tests. Structural properties were evaluated by subjecting whole shells to compressive and point loads in order to quantify maximum load, work to failure, and relative shell deformations. The mechanical properties of bone and sutures from the plastral region of the shell were evaluated using three-point bending experiments. Analysis of whole shell structural properties suggests that small shells undergo relatively greater deformations before failure than do large shells and similar amounts of energy are required to induce failure under both point and compressive loads. Location of failures occurred far more often at sulci than at sutures (representing the margins of the epidermal scutes and the underlying bones, respectively), suggesting that the small grooves in the bone created by the sulci introduce zones of weakness in the shell. Values for bending strength, ultimate bending strain, Young's modulus, and energy absorption, calculated from the three-point bending data, indicate that sutures are relatively weaker than the surrounding bone, but are able to absorb similar amounts of energy due to higher ultimate strain values. PMID:23203474

  16. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, or data dependence between, the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  17. Alkane fluids confined and compressed by two smooth crystalline gold surfaces: Pure liquids and mixtures

    NASA Astrophysics Data System (ADS)

    Alvarez, Lina P. Merchan

    With the use of grand canonical molecular dynamics, we studied the slow compression (0.01 m/s) of very thin liquid films made of equimolar mixtures of short and long alkane chains (hexane and hexadecane), and branched and unbranched alkanes (phytane and hexadecane). Besides comparing how these mixtures behave under constant speed compression, we will compare their properties with the behavior and structure of the pure systems undergoing the same type of slow compression. To understand the arrangement of the molecules inside the confinement, we present segmental and molecular density profiles, average length and orientation of the molecules inside well formed gaps. To observe the effects of the compression on the fluids, we present the number of confined molecules, the inlayer orientation, the solvation force and the inlayer diffusion coefficient, versus the thickness of the gap. We observe that pure hexadecane, although liquid at this temperature, starts presenting strong solid-like behavior when it is compressed to thicknesses under 30 A, while pure hexane and pure phytane continue to behave liquid-like except at 13 A when they show some weak solid-like features. When hexadecane is mixed with the short straight hexane, it remains liquid down to 28 A, at which point this mixture behaves solid-like with an enhanced alignment of the long molecules not seen in its pure form; but when hexadecane is mixed with the branched phytane the system does not present the solid-like features seen when hexadecane is compressed pure.

  18. Pulse self-compression to single-cycle pulse widths a few decades above the self-focusing threshold

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Zheltikov, A. M.

    2016-08-01

    We identify a physical scenario whereby optical-field waveforms with peak powers several decades above the critical power of self-focusing can self-compress to subcycle pulse widths. With beam breakup, intense hot spots, and optical damage of the material avoided within the pulse compression length by keeping this length shorter than the modulation-instability buildup length, the beam is shown to preserve its continuity at the point of subcycle pulse generation.

  19. A Simulation-based Randomized Controlled Study of Factors Influencing Chest Compression Depth

    PubMed Central

    Mayrand, Kelsey P.; Fischer, Eric J.; Ten Eyck, Raymond P.

    2015-01-01

    Introduction Current resuscitation guidelines emphasize a systems approach with a strong emphasis on quality cardiopulmonary resuscitation (CPR). Despite the American Heart Association (AHA) emphasis on quality CPR for over 10 years, resuscitation teams do not consistently meet recommended CPR standards. The objective is to assess the impact on chest compression depth of factors including bed height, step stool utilization, position of the rescuer’s arms and shoulders relative to the point of chest compression, and rescuer characteristics including height, weight, and gender. Methods Fifty-six eligible subjects, including physician assistant students and first-year emergency medicine residents, were enrolled and randomized to intervention (bed lowered and step stool readily available) and control (bed raised and step stool accessible, but concealed) groups. We instructed all subjects to complete all interventions on a high-fidelity mannequin per AHA guidelines. Secondary end points included subject arm angle, height, weight group, and gender. Results Using an intention to treat analysis, the mean compression depths for the intervention and control groups were not significantly different. Subjects positioning their arms at a 90-degree angle relative to the sagittal plane of the mannequin’s chest achieved a mean compression depth significantly greater than those compressing at an angle less than 90 degrees. There was a significant correlation between using a step stool and achieving the correct shoulder position. Subject height, weight group, and gender were all independently associated with compression depth. Conclusion Rescuer arm position relative to the patient’s chest and step stool utilization during CPR are modifiable factors facilitating improved chest compression depth. PMID:26759667

  20. Preliminary results of SAR image compression using MatrixViewTM on coherent change detection (CCD) analysis

    NASA Astrophysics Data System (ADS)

    Gresko, Lawrence S.; Gorham, LeRoy A.; Thiagarajan, Arvind

    2012-05-01

    An investigation was made into the feasibility of compressing complex Synthetic Aperture Radar (SAR) images using MatrixViewTM compression technology to achieve higher compression ratios than previously achieved. Complex SAR images contain both amplitude and phase information that are severely degraded with traditional compression techniques. This phase and amplitude information allows interferometric analysis to detect minute changes between pairs of SAR images, but is highly sensitive to any degradation in image quality. This sensitivity provides a measure to compare capabilities of different compression technologies. The interferometric process of Coherent Change Detection (CCD) is acutely sensitive to any quality loss and, therefore, is a good measure by which to compare compression capabilities of different technologies. The best compression that could be achieved by block adaptive quantization (a classical compression approach) applied to a set of I and Q phased-history samples was a Compression Ratio (CR) of 2x. Work by Novak and Frost [3] increased this CR to 3-4x using a more complex wavelet-based Set Partitioning In Hierarchical Trees (SPIHT) algorithm (similar in its core to JPEG 2000). In each evaluation, as the CR increased, degradation occurred in the reconstituted image as measured by the CCD image coherence. The maximum compression was determined at the point the CCD image coherence remained > 0.9. The same investigation approach using equivalent sample data sets was performed using an emerging technology and product called MatrixViewTM. This paper documents preliminary results of MatrixView's compression of an equivalent data set to demonstrate a CR of 10-12x with an equivalent CCD coherence level of >0.9: a 300-400% improvement over SPIHT.
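    The coherence figure of merit used above can be estimated directly from two co-registered complex SAR images. The sliding-window estimator, window size, and synthetic data below are assumptions made for illustration, not the evaluation pipeline used in the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def coherence(s1, s2, win=5):
            # Sample coherence |<s1 conj(s2)>| / sqrt(<|s1|^2><|s2|^2>) in a sliding window.
            cross = s1 * np.conj(s2)
            num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
            den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                          * uniform_filter(np.abs(s2) ** 2, win))
            return np.abs(num) / np.maximum(den, 1e-12)

        rng = np.random.default_rng(1)
        scene = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
        noisy = scene + 0.1 * (rng.standard_normal((128, 128))
                               + 1j * rng.standard_normal((128, 128)))
        print(coherence(scene, noisy).mean())      # close to 1 for a nearly identical pair

    Comparing the coherence of an original/compressed pair against the >0.9 acceptance level is the kind of check the study uses to set the maximum usable compression ratio.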

  1. Seneca Compressed Air Energy Storage (CAES) Project

    SciTech Connect

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  2. Pulse power applications of flux compression generators

    NASA Astrophysics Data System (ADS)

    Fowler, C. M.; Caird, R. S.; Erickson, D. J.; Freeman, B. L.

    Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources.

  3. Efficient Quantum Information Processing via Quantum Compressions

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Luo, M. X.; Ma, S. Y.

    2016-01-01

    Our purpose is to improve the quantum transmission efficiency and reduce the resource cost by quantum compressions. The lossless quantum compression is accomplished using invertible quantum transformations and applied to quantum teleportation and simultaneous transmission over quantum butterfly networks. The new schemes can greatly reduce the entanglement cost and partially resolve transmission conflicts over common links. Moreover, the local compression scheme is useful for approximate entanglement creation from pre-shared entanglements. This special task has not been addressed because of the quantum no-cloning theorem. Our scheme depends on the local quantum compression and the bipartite entanglement transfer. Simulations show that the success probability is strongly dependent on the minimal entanglement coefficient. These results may be useful in general quantum network communication.

  4. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models which can be used to accurately represent the type of line drawings that occur in teleconferencing and transmission for remote classrooms, and which permit considerable data compression, were described. The objective was to encode these pictures in binary sequences of shortest length but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.

  5. All about compression: A literature review.

    PubMed

    de Carvalho, Magali Rezende; de Andrade, Isabelle Silveira; de Abreu, Alcione Matos; Leite Ribeiro, Andrea Pinto; Peixoto, Bruno Utzeri; de Oliveira, Beatriz Guitton Renaud Baptista

    2016-06-01

    Lower extremity ulcers represent a significant public health problem as they frequently progress to chronicity, significantly impact daily activities and comfort, and represent a huge financial burden to the patient and the health system. The aim of this review was to discuss the best approach for venous leg ulcers (VLUs). Online searches were conducted in Ovid MEDLINE, Ovid EMBASE, EBSCO CINAHL, and reference lists and official guidelines. Keywords considered for this review were VLU, leg ulcer, varicose ulcer, compressive therapy, compression, and stocking. A complete assessment of the patient's overall health should be performed by a trained practitioner, focusing on history of diabetes mellitus, hypertension, dietetic habits, medications, and practice of physical exercises, followed by a thorough assessment of both legs. Compressive therapy is the gold standard treatment for VLUs, and the ankle-brachial index should be measured in all patients before compression application. PMID:27210451

  6. Compression behavior of unidirectional fibrous composite

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1982-01-01

    The longitudinal compression behavior of unidirectional fiber composites is investigated using a modified Celanese test method with thick and thin test specimens. The test data obtained are interpreted using the stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that the longitudinal compression fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described which can be used to extract the longitudinal compression strength knowing the longitudinal tensile and flexural strengths of the same composite system.

  7. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
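    To make the separated-flag idea concrete, the toy sketch below keeps the flag bits in their own buffer instead of interleaving them with the data, so the decoder steers plain byte reads with the flag stream and never has to unpack bits from the token stream. The matcher, token layout, and parameters are illustrative assumptions, not the patented method.

        def lzss_compress(data, window=255, min_match=3, max_match=18):
            # Returns (tokens, flags): flag bits live in their own buffer rather than
            # being interleaved with literals and pointers.
            flags, tokens, i = [], bytearray(), 0
            while i < len(data):
                best_len, best_off = 0, 0
                for j in range(max(0, i - window), i):     # toy O(n^2) matcher
                    length = 0
                    while (length < max_match and i + length < len(data)
                           and data[j + length] == data[i + length]):
                        length += 1
                    if length > best_len:
                        best_len, best_off = length, i - j
                if best_len >= min_match:
                    flags.append(1)                        # pointer token: (offset, length)
                    tokens += bytes([best_off, best_len])
                    i += best_len
                else:
                    flags.append(0)                        # literal token: one raw byte
                    tokens.append(data[i])
                    i += 1
            return bytes(tokens), flags

        def lzss_decompress(tokens, flags):
            # The flag stream steers plain byte reads; no bit unpacking of the data.
            out, pos = bytearray(), 0
            for flag in flags:
                if flag:
                    off, length = tokens[pos], tokens[pos + 1]
                    pos += 2
                    for _ in range(length):
                        out.append(out[-off])
                else:
                    out.append(tokens[pos])
                    pos += 1
            return bytes(out)

        sample = b"abracadabra abracadabra abracadabra"
        tokens, flags = lzss_compress(sample)
        assert lzss_decompress(tokens, flags) == sample

    In the format described by the patent, the flag bits are packed into bytes and appended after the token stream, so a decompressor can fetch literals with byte reads and pointers with word reads, touching individual bits only in the small flag buffer.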

  8. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.

  9. Compression of digital holographic data: an overview

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Xing, Yafei; Pesquet-Popescu, Beatrice; Schelkens, Peter

    2015-09-01

    Holography has the potential to become the ultimate 3D experience. Nevertheless, in order to achieve practical working systems, major scientific and technological challenges have to be tackled. In particular, as digital holographic data represents a huge amount of information, the development of efficient compression techniques is a key component. This problem has gained significant attention by the research community during the last 10 years. Given that holograms have very different signal properties when compared to natural images and video sequences, existing compression techniques (e.g. JPEG or MPEG) remain suboptimal, calling for innovative compression solutions. In this paper, we will review and analyze past and on-going work for the compression of digital holographic data.

  10. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier 34 wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse 58 away from the SBS oscillator (44).

  11. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier 34 wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse 58 away from the SBS oscillator (44).

  12. Large Hiatal Hernia Compressing the Heart.

    PubMed

    Matar, Andrew; Mroue, Jad; Camporesi, Enrico; Mangar, Devanand; Albrink, Michael

    2016-02-01

    We describe a 41-year-old man with De Mosier's syndrome who presented with exercise intolerance and dyspnea on exertion caused by a giant hiatal hernia compressing the heart with relief by surgical treatment. PMID:26704030

  13. Wavelet transform in electrocardiography--data compression.

    PubMed

    Provazník, I; Kozumplík, J

    1997-06-01

    An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of rest ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain; thus, the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on the signal characteristics. The resulting components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropy (Huffman) coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system when the wavelet transform is computed by the fast linear filters described in the paper. PMID:9291025
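    The pipeline described above (wavelet decomposition, discarding weak time-frequency components, quantization, then run-length and Huffman coding) can be sketched with the PyWavelets package; the wavelet choice, decomposition level, threshold, and quantization step below are assumptions made for the example, and the final entropy-coding stage is left out.

        import numpy as np
        import pywt

        def compress_ecg(signal, wavelet='db4', level=4, keep_ratio=0.1, qstep=0.01):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            flat = np.concatenate(coeffs)
            # Drop low-magnitude time-frequency components, then quantize the rest.
            threshold = np.quantile(np.abs(flat), 1.0 - keep_ratio)
            flat[np.abs(flat) < threshold] = 0.0
            quantized = np.round(flat / qstep).astype(int)
            lengths = [len(c) for c in coeffs]
            return quantized, lengths              # run-length/Huffman coding would follow

        def expand_ecg(quantized, lengths, wavelet='db4', qstep=0.01):
            flat = quantized.astype(float) * qstep
            coeffs, pos = [], 0
            for n in lengths:
                coeffs.append(flat[pos:pos + n])
                pos += n
            return pywt.waverec(coeffs, wavelet)

        t = np.linspace(0.0, 2.0, 720)
        ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.sin(2 * np.pi * 15 * t)
        q, lens = compress_ecg(ecg_like)
        rec = expand_ecg(q, lens)
        print(np.sqrt(np.mean((rec[:len(ecg_like)] - ecg_like) ** 2)))   # small RMS error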

  14. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.

  15. Relativistic laser pulse compression in magnetized plasmas

    SciTech Connect

    Liang, Yun; Sang, Hai-Bo Wan, Feng; Lv, Chong; Xie, Bai-Song

    2015-07-15

    The self-compression of a weakly relativistic Gaussian laser pulse propagating in a magnetized plasma is investigated. The nonlinear Schrödinger equation, which describes the evolution of the laser pulse amplitude, is deduced and solved numerically. Pulse compression is observed for both left- and right-hand circularly polarized lasers. It is found that the compression velocity increases for left-hand circularly polarized laser fields and decreases for right-hand ones, an effect that is reinforced as the external magnetic field is strengthened. We find that a 100 fs left-hand circularly polarized laser pulse is compressed by more than a factor of ten in a magnetized (1757 T) plasma medium. The results in this paper indicate the possibility of generating particularly intense and short pulses.

  16. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress hyperspectral images. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-performance TMS320DM642 Digital Signal Processor (DSP) to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm running on the DSP is much faster than the same algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  17. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

    In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to remove himself from the situation because his cognitive responses and coordination were impaired due to alcohol intake. The victim died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of its impact force in these circumstances, is very rare and has not been reported previously to the best of our knowledge. PMID:26059277

  18. Ramp Compression Experiments - a Sensitivity Study

    SciTech Connect

    Bastea, M; Reisman, D

    2007-02-26

    We present the first sensitivity study of the material isentropes extracted from ramp compression experiments. We perform hydrodynamic simulations of representative experimental geometries associated with ramp compression experiments and discuss the major factors determining the accuracy of the equation of state information extracted from such data. In conclusion, we analyzed both qualitatively and quantitatively the major experimental factors that determine the accuracy of equations of state extracted from ramp compression experiments. Since in actual experiments essentially all the effects discussed here will compound, factoring out individual signatures and magnitudes, as done in the present work, is especially important. This study should provide some guidance for the effective design and analysis of ramp compression experiments, as well as for further improvements of ramp generator performance.

  19. Enhancement and compression of digital chest radiographs.

    PubMed

    Cohn, M; Trefler, M; Young, T Y

    1990-01-01

    The application of digital technologies to chest radiography holds the promise of routine application of image processing techniques to effect image enhancement. Because of their inherent spatial resolution, however, digital chest images impose severe constraints on data storage devices. Compression of these images will relax such constraints and facilitate image transmission on a digital network. We evaluated an algorithm for enhancing digital chest images that has allowed significant data compression while improving the diagnostic quality of the image. This algorithm is based on the photographic technique of unsharp masking. Image quality was measured with respect to the task of tumor detection and compression ratios as high as 2:1 were achieved. This compression can be supplemented by irreversible methods. PMID:2299708
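    Unsharp masking itself is simple to express; the sketch below is a generic version with an assumed blur width and gain, not the specific enhancement and compression pipeline evaluated in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, sigma=8.0, gain=0.7):
            # Subtract a blurred copy to isolate fine detail, then add it back amplified.
            blurred = gaussian_filter(image.astype(float), sigma)
            return image + gain * (image - blurred)

        chest = np.random.rand(512, 512) * 4095.0       # hypothetical 12-bit radiograph
        enhanced = unsharp_mask(chest)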

  20. 3D MHD Simulations of Spheromak Compression

    NASA Astrophysics Data System (ADS)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100km/s. We show that strong rotational shear (10km/s over 1cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.