Science.gov

Sample records for 1-dB compression point

  1. Design Point for a Spheromak Compression Experiment

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team

    2015-11-01

    Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and the design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency, and energy confinement. Work performed under DARPA grant N66001-14-1-4044.

  2. Optimal Compression of Floating-Point FITS Images

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-12-01

    Lossless compression (e.g., with GZIP) of floating-point format astronomical FITS images is ineffective and typically only reduces the file size by 10% to 30%. We describe a much more effective compression method that is supported by the publicly available fpack and funpack FITS image compression utilities that can compress floating point images by a factor of 10 without loss of significant scientific precision. A “subtractive dithering” technique is described which permits coarser quantization (and thus higher compression) than is possible with simple scaling methods.
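    The subtractive dithering idea admits a compact sketch: add a reproducible pseudo-random offset before rounding, and subtract it again on restore. The function names below are invented for illustration and are not fpack's actual API:

```python
import random

def quantize_subtractive(values, scale, seed=42):
    """Quantize floats to scaled integers with subtractive dithering.

    A reproducible dither in [0, 1) is added before rounding and
    subtracted again on restore, so the quantization error stays
    uniform and uncorrelated with the signal.
    """
    rng = random.Random(seed)  # seeded, so a decoder can regenerate the dither
    dither = [rng.random() for _ in values]
    quantized = [round(v / scale + d) for v, d in zip(values, dither)]
    return quantized, dither

def dequantize_subtractive(quantized, dither, scale):
    return [(q - d) * scale for q, d in zip(quantized, dither)]

vals = [0.123, 4.567, -2.89, 3.3333]
q, d = quantize_subtractive(vals, scale=0.01)
restored = dequantize_subtractive(q, d, scale=0.01)
max_err = max(abs(a - b) for a, b in zip(vals, restored))  # bounded by scale / 2
```

    The quantized integers compress far better than raw floats, and the reconstruction error never exceeds half the quantization step.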

  3. A Scheme for Compressing Floating-Point Images

    NASA Astrophysics Data System (ADS)

    White, Richard L.; Greenfield, Perry

    While many techniques have been used to compress integer data, compressing floating-point data presents a number of additional problems. We have implemented a scheme for compressing floating-point images that is fast, robust, and automatic, that allows random access to pixels without decompressing the whole image, and that generally has a scientifically negligible effect on the noise present in the image. The compressed data are stored in a FITS binary table. Most astronomical images can be compressed by approximately a factor of 3, using conservative settings for the permitted level of changes in the data. We intend to work with NOAO to incorporate this compression method into the IRAF image kernel, so that FITS images compressed using this scheme can be accessed transparently from IRAF applications without any explicit decompression steps. The scheme is simple, and it should be possible to include it in other FITS libraries as well.

  4. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
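    The fixed-rate, block-granular idea can be illustrated in a deliberately simplified form. The sketch below is a toy block-floating-point scheme — not zfp's actual transform or bit stream — but it shows the key property: every block compresses to the same number of bytes, so any block can be located and decoded independently.

```python
import math
import struct

def compress_block(block, bits=8):
    # Toy fixed-rate scheme: a block of exactly 4 values becomes one
    # shared exponent (2 bytes) plus one signed `bits`-bit integer per
    # value -- always 6 bytes, so blocks are randomly addressable.
    emax = max((math.frexp(abs(v))[1] for v in block if v != 0.0), default=0)
    scale = 2.0 ** (emax - (bits - 1))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    ints = [max(lo, min(hi, round(v / scale))) for v in block]
    return struct.pack('>h4b', emax, *ints)

def decompress_block(payload, bits=8):
    emax, *ints = struct.unpack('>h4b', payload)
    scale = 2.0 ** (emax - (bits - 1))
    return [i * scale for i in ints]

payload = compress_block([1.0, 0.5, -0.25, 0.75])  # always 6 bytes
restored = decompress_block(payload)
```

    Sharing one exponent per block (rather than one per value) is what buys the fixed rate; the cost is relative error proportional to the largest magnitude in the block.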

  5. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.

  6. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that must be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
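    The flavor of such predictive lossless coding can be shown with the simplest possible predictor — "the previous value" — applied at the bit level. Matching leading bits XOR to zero, leaving residuals full of leading zeros that an entropy coder stores compactly. This is an illustrative sketch, not the paper's actual predictor:

```python
import struct

def xor_residuals(values):
    """Lossless 'previous value' prediction on raw IEEE-754 bit patterns.

    Consecutive floats that share leading bits XOR to residuals with
    long runs of leading zeros, which entropy-code cheaply.
    """
    prev, out = 0, []
    for v in values:
        bits = struct.unpack('>Q', struct.pack('>d', v))[0]
        out.append(bits ^ prev)
        prev = bits
    return out

def undo_xor(residuals):
    # Invert the chain: XORing each residual with the running bit
    # pattern recovers every value exactly (bit-for-bit lossless).
    prev, vals = 0, []
    for r in residuals:
        prev ^= r
        vals.append(struct.unpack('>d', struct.pack('>Q', prev))[0])
    return vals

vals = [3.14159, 2.71828, 2.71830, 1.0e-9]
roundtrip = undo_xor(xor_residuals(vals))
```

    Real coders of this kind replace "previous value" with data-dependent predictors (e.g., neighbors on a mesh or grid), but the XOR-residual step is the same.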

  7. Fixed-rate compressed floating-point arrays

    SciTech Connect

    Lindstrom, P.

    2014-03-30

    ZFP is a library for lossy compression of single- and double-precision floating-point data. One of the unique features of ZFP is its support for fixed-rate compression, which enables random read and write access at the granularity of small blocks of values. Its C++ interface allows declaring compressed arrays (1D, 2D, and 3D arrays are supported) that, through operator overloading, can be treated just like conventional, uncompressed arrays, while letting the user specify the exact number of bits to allocate to the array. ZFP also has variable-rate fixed-precision and fixed-accuracy modes, which allow the user to specify a tolerance on the relative or absolute error.

  8. Influence of orientation on the evolution of small perturbations in compressible shear layers with inflection points

    NASA Astrophysics Data System (ADS)

    Karimi, Mona; Girimaji, Sharath S.

    2017-03-01

    We investigate the influence of orientation on the evolution of small perturbations in compressible shear layers with inflection points. By using linear analysis, we demonstrate that perturbations along the shear plane are most affected by compressibility. The influence of compressibility gradually diminishes with increasing obliqueness of the perturbations with respect to the shear plane. It is demonstrated that the effective gradient Mach number is an appropriate compressibility parameter. We establish that spanwise perturbations, orthogonal to the shear plane, are impervious to compressibility effects. Direct numerical simulations of compressible mixing layers subject to the perturbations at various obliqueness angles verify the analytical findings.

  9. An improved enhancement layer for octree based point cloud compression with plane projection approximation

    NASA Astrophysics Data System (ADS)

    Ainala, Khartik; Mekuria, Rufael N.; Khathariya, Birendra; Li, Zhu; Wang, Ye-Kui; Joshi, Rajan

    2016-09-01

    Recent advances in point cloud capture and applications in VR/AR have sparked new interest in point cloud data compression. Point clouds are often organized and compressed with octree based structures. The octree subdivision sequence is often serialized in a sequence of bytes that are subsequently entropy encoded using range coding, arithmetic coding or other methods. Such octree based algorithms are efficient only up to a certain level of detail as they have an exponential run-time in the number of subdivision levels. In addition, the compression efficiency diminishes when the number of subdivision levels increases. Therefore, in this work we present an alternative enhancement layer to the coarse octree coded point cloud. In this case, the base layer of the point cloud is coded in the known octree based fashion, but the higher levels of detail are coded in a different way in an enhancement layer bit-stream. The enhancement layer coding method takes the distribution of the points into account and projects points to geometric primitives, i.e. planes. It then stores residuals and applies entropy encoding with a learning based technique. The plane projection method is used for both geometry compression and color attribute compression. For color coding, the method is used to enable efficient raster scanning of the color attributes on the plane to map them to an image grid. Results show that both improved compression performance and faster run-times are achieved for geometry and color attribute compression in point clouds.
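    The octree base layer mentioned above serializes, at each subdivision step, which of a node's eight children contain points. A minimal sketch of that occupancy-byte step (illustrative only, not the paper's codec):

```python
def occupancy_byte(points, origin, size):
    """One octree subdivision step: each point falls into one of the
    node's 8 children; the set of non-empty children is serialized as
    a single occupancy byte, as in octree point-cloud coders."""
    half = size / 2.0
    byte = 0
    for x, y, z in points:
        child = ((x >= origin[0] + half) << 2 |
                 (y >= origin[1] + half) << 1 |
                 (z >= origin[2] + half))
        byte |= 1 << child
    return byte

# Two opposite corners of the unit cube occupy children 0 and 7.
b = occupancy_byte([(0.1, 0.2, 0.3), (0.9, 0.8, 0.7)], (0.0, 0.0, 0.0), 1.0)
```

    Recursing into each occupied child yields the byte sequence that is then entropy coded; the enhancement layer in this paper replaces the deepest levels of that recursion with plane-projection residuals.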

  10. Image data compression using a new floating-point digital signal processor.

    PubMed

    Siegel, E L; Templeton, A W; Hensley, K L; McFadden, M A; Baxter, K G; Murphey, M D; Cronin, P E; Gesell, R G; Dwyer, S J

    1991-08-01

    A new dual-ported, floating-point, digital signal processor has been evaluated for compressing 512 x 512 and 1,024 x 1,024 digital radiographic images using a full-frame, two-dimensional, discrete cosine transform (2D-DCT). The floating-point digital signal processor operates at 49.5 million floating-point instructions per second (MFLOPS). The level of compression can be changed by varying four parameters in the lossy compression algorithm. Throughput times were measured for both 2D-DCT compression and decompression. For a 1,024 x 1,024 x 10-bit image with a compression ratio of 316:1, the throughput was 75.73 seconds (compression plus decompression throughput). For a digital fluorography 1,024 x 1,024 x 8-bit image and a compression ratio of 26:1, the total throughput time was 63.23 seconds. For a computed tomography image of 512 x 512 x 12 bits and a compression ratio of 10:1, the throughput time was 19.65 seconds.
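    The transform at the heart of this approach is the DCT. A naive one-dimensional DCT-II shows the energy compaction that makes such compression work; a full-frame 2D-DCT applies this along every row and then every column. This sketch is for illustration only, not the DSP implementation:

```python
import math

def dct2_1d(x):
    """Naive (O(N^2)) DCT-II of a 1-D signal."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

# A flat signal compacts all its energy into the DC coefficient --
# smooth image regions behave the same way, which is why discarding
# small high-frequency coefficients compresses so effectively.
coeffs = dct2_1d([1.0, 1.0, 1.0, 1.0])
```

    Lossy codecs then quantize the coefficients, spending few or no bits on the near-zero high frequencies.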

  11. Analysis of actual pressure point using the power flexible capacitive sensor during chest compression.

    PubMed

    Minami, Kouichiro; Kokubo, Yota; Maeda, Ichinosuke; Hibino, Shingo

    2017-02-01

    In chest compression for cardiopulmonary resuscitation (CPR), the lower half of the sternum is pressed according to the American Heart Association (AHA) guidelines 2010. There have been no studies identifying the exact location of the pressure applied by individual chest compressions. We developed a rubber power-flexible capacitive sensor that could measure the actual pressure point of chest compression in real time. Here, we examined the pressure point of chest compression by ambulance crews during CPR using a mannequin. We included 179 ambulance crews. Chest compression was performed for 2 min. The pressure position was monitored, and the quality of chest compression was analyzed by using a flexible pressure sensor (Shinnosukekun™). Of the ambulance crews, 58 (32.4 %) pressed the center and 121 (67.6 %) pressed outside the proper area of chest compression. Many of them pressed outside the center: 8, 7, 41, and 90 pressed on the caudal, left, right, and cranial side, respectively. The average compression rate, average recoil, average depth, and average duty cycle were 108.6 counts per minute, 0.089, 4.5 cm, and 48.27 %, respectively. Many of the ambulance crews did not press precisely on the lower half of the sternum. This new device has the potential to improve the quality of CPR during training or in clinical practice.

  12. Lossy compression of floating point high-dynamic range images using JPEG2000

    NASA Astrophysics Data System (ADS)

    Springer, Dominic; Kaup, Andre

    2009-01-01

    In recent years, a new technique called High Dynamic Range (HDR) has gained attention in the image processing field. By representing pixel values with floating point numbers, recorded images can hold significantly more luminance information than ordinary integer images. This paper focuses on the realization of a lossy compression scheme for HDR images. The JPEG2000 standard is used as a basic component and is efficiently integrated into the compression chain. Based on a detailed analysis of the floating point format and the human visual system, a concept for lossy compression is worked out and thoroughly optimized. Our scheme outperforms all other existing lossy HDR compression schemes and shows superior performance both at low and high bitrates.

  13. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.

  14. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Astrophysics Data System (ADS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2010-09-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.

  15. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
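    The switched-prediction framework can be sketched as follows: try each predictor in a preset group on the block and keep whichever minimizes the residual energy, emitting its id plus the residuals. The two predictors and the function names here are illustrative stand-ins, not APE/ACE's actual predictor set:

```python
def encode_block(block, history):
    """Switched prediction (sketch): pick the preset linear predictor
    that minimizes residual magnitude for this block.

    `history` must hold at least the two most recent decoded values so
    the order-1 predictor has something to extrapolate from.
    """
    predictors = [
        lambda h: h[-1],              # order 0: repeat the last value
        lambda h: 2 * h[-1] - h[-2],  # order 1: linear extrapolation
    ]
    best_id, best_res, best_cost = 0, None, float('inf')
    for pid, predict in enumerate(predictors):
        h, res = list(history), []
        for v in block:
            res.append(v - predict(h))  # residual against the prediction
            h.append(v)
        cost = sum(abs(r) for r in res)
        if cost < best_cost:
            best_id, best_res, best_cost = pid, res, cost
    return best_id, best_res

# A linear ramp is predicted perfectly by the order-1 extrapolator.
pid, residuals = encode_block([2.0, 3.0, 4.0], history=[0.0, 1.0])
```

    The decoder reads the predictor id, reruns the same predictor, and adds back the residuals, so adaptation costs only a few bits per block.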

  16. Compression After Impact Testing of Sandwich Structures Using the Four Point Bend Test

    NASA Technical Reports Server (NTRS)

    Nettles, Alan T.; Gregory, Elizabeth; Jackson, Justin; Kenworthy, Devon

    2008-01-01

    For many composite laminated structures, the design is driven by data obtained from Compression after Impact (CAI) testing. There currently is no standard for CAI testing of sandwich structures although there is one for solid laminates of a certain thickness and lay-up configuration. Most sandwich CAI testing has followed the basic technique of this standard where the loaded ends are precision machined and placed between two platens and compressed until failure. If little or no damage is present during the compression tests, the loaded ends may need to be potted to prevent end brooming. By putting a sandwich beam in a four point bend configuration, the region between the inner supports is put under a compressive load and a sandwich laminate with damage can be tested in this manner without the need for precision machining. Also, specimens with no damage can be taken to failure so direct comparisons between damaged and undamaged strength can be made. Data are presented that demonstrate the four point bend CAI test and are compared with end loaded compression tests of the same sandwich structure.

  17. Unravelling the Role of the Compressed Gas on Melting Point of Liquid Confined in Nanospace.

    PubMed

    Chen, Shimou; Liu, Yusheng; Fu, Haiying; He, Yaxing; Li, Cheng; Huang, Wei; Jiang, Zheng; Wu, Guozhong

    2012-04-19

    Phase behaviors of liquids in nanospaces are of particular interest for understanding the thermodynamics of liquids on the nanoscale and for applications that involve confined systems. However, in many cases, inconsistent observations of melting point variation for confined liquids are reported by different groups. Ionic liquids are a special kind of liquid. Here, exploiting the nonvolatile nature of ionic liquids, we realized the encapsulation of ionic liquids inside mesoporous silica nanoparticles with complete removal of the compressed gas under high-vacuum conditions; the completely confined ionic liquid formed a crystalline-like phase. It was found that compressed gas plays an important role in changing the melting point of the confined ionic liquid.

  18. Analysis of three-point-bend test for materials with unequal tension and compression properties

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1974-01-01

    An analysis capability is described for the three-point-bend test applicable to materials of linear but unequal tensile and compressive stress-strain relations. The capability consists of numerous equations of simple form and their graphical representation. Procedures are described to examine the local stress concentrations and the initiation of failure modes. Examples are given to illustrate the usefulness and ease of application of the capability. Comparisons are made with materials which have equal tensile and compressive properties. The results indicate possible underestimates of flexural modulus or strength ranging from 25 to 50 percent relative to values predicted when accounting for unequal properties. The capability can also be used to reduce test data from three-point-bending tests, extract material properties useful in design from these test data, select test specimen dimensions, and size structural members.

  19. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
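    A single level of an unweighted Haar-style split conveys the idea behind such hierarchical transforms: paired neighbors are replaced by an average-like low-pass term and a difference-like high-pass term, and for smooth data the differences cluster near zero. This is a toy sketch, not the region-adaptive transform itself:

```python
import math

def haar_step(values):
    """One level of an unweighted Haar-like transform over paired values.

    Low-pass terms carry the coarse signal; high-pass terms carry the
    detail, which for smooth data is near zero and entropy-codes cheaply.
    """
    pairs = list(zip(values[0::2], values[1::2]))
    low = [(a + b) / math.sqrt(2) for a, b in pairs]
    high = [(a - b) / math.sqrt(2) for a, b in pairs]
    return low, high

# Neighboring points with identical colors produce zero detail terms.
low, high = haar_step([1.0, 1.0, 3.0, 3.0])
```

    Applying the step recursively to the low-pass terms gives the hierarchy; the region-adaptive transform in the paper additionally weights each pair by how many points it represents.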

  20. Mathematical modelling of the beam under axial compression force applied at any point - the buckling problem

    NASA Astrophysics Data System (ADS)

    Magnucka-Blandzi, Ewa

    2016-06-01

    The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along the axis of the beam. The buckling problem has been described and solved mathematically. Critical loads have been calculated. In the particular case, Euler's buckling load is obtained. Explicit solutions are given. The values of critical loads are collected in tables and shown in a figure. The relation between the point of the load application and the critical load is presented.
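    The limiting Euler case recovered in the abstract is easy to state concretely: for a simply supported column loaded at its end, the critical load is P_cr = π²EI/L². The numeric values below are illustrative, not taken from the paper:

```python
import math

def euler_critical_load(E, I, L):
    """Euler buckling load for a simply supported column: pi^2*E*I/L^2.

    E: Young's modulus [Pa], I: second moment of area [m^4], L: length [m].
    """
    return math.pi ** 2 * E * I / L ** 2

# Illustrative example: a 2 m steel column (E = 200 GPa, I = 1e-6 m^4).
P_cr = euler_critical_load(E=200e9, I=1e-6, L=2.0)
```

    Moving the load application point away from the end changes the effective length and hence the critical load, which is the relation the paper tabulates.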

  1. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  2. Comparison of ring compression testing to three point bend testing for unirradiated ZIRLO cladding

    SciTech Connect

    None, None

    2015-04-01

    Safe shipment and storage of nuclear reactor discharged fuel requires an understanding of how the fuel may perform under the various conditions that can be encountered. One specific focus of concern is performance during a shipment drop accident. Tests at Savannah River National Laboratory (SRNL) are being performed to characterize the properties of fuel clad relative to a mechanical accident condition such as a container drop. Unirradiated ZIRLO tubing samples have been charged with a range of hydride levels to simulate actual fuel rod levels. Samples of the hydrogen charged tubes were exposed to a radial hydride growth treatment (RHGT) consisting of heating to 400°C, applying initial hoop stresses of 90 to 170 MPa with controlled cooling and producing hydride precipitates. Initial samples have been tested using both a) ring compression test (RCT) which is shown to be sensitive to radial hydride and b) three-point bend tests which are less sensitive to radial hydride effects. Hydrides are generated in Zirconium based fuel cladding as a result of coolant (water) oxidation of the clad, hydrogen release, and a portion of the released (nascent) hydrogen absorbed into the clad and eventually exceeding the hydrogen solubility limit. The orientation of the hydrides relative to the subsequent normal and accident strains has a significant impact on the failure susceptibility. In this study the impacts of stress, temperature and hydrogen levels are evaluated in reference to the propensity for hydride reorientation from the circumferential to the radial orientation. In addition the effects of radial hydrides on the Quasi Ductile Brittle Transition Temperature (DBTT) were measured. The results suggest that a) the severity of the radial hydride impact is related to the hydrogen level-peak temperature combination (for example at a peak drying temperature of 400°C; 800 PPM hydrogen has less of an impact/less radial hydride fraction than 200 PPM hydrogen for the same thermal

  3. Modulation to the compressible homogenous turbulence by heavy point particles: Effect of particles' density

    NASA Astrophysics Data System (ADS)

    Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi

    2015-11-01

    In this paper, two-way interactions between heavy point particles and forced compressible homogenous turbulence are simulated by using a localized artificial diffusivity scheme and an Eulerian-Lagrangian approach. The initial turbulent Mach number is around 1.0 and the Taylor Reynolds number is around 110. Seven different simulations of 10^6 particles with different particle densities (or Stokes number) are considered. The statistics of the compressible turbulence, such as the turbulence Mach number, kinetic energy, dilatation, and the kinetic energy spectra, from different simulations are compared with each other, and with the one-way undisturbed case. Our results show that the turbulence is suppressed if the two-way coupling backward interactions are considered, and the effect is more obvious if the density of particles is higher. The kinetic energy spectrum at larger Stokes number (higher density) exhibits a reduction at low wave numbers and an augmentation at high wave numbers, which is similar to those obtained in incompressible cases. The probability density functions of dilatation, and normal upstream Mach number of shocklets also show that the modulation to the shocklet statistics is more apparent for particles with higher density. We acknowledge the financial support provided by National Natural Science Foundation of China (Grants Nos. 11302006, and U1330107).

  4. Index of Unconfined Compressive Strength of SAFOD Core by Means of Point-Load Penetrometer Tests

    NASA Astrophysics Data System (ADS)

    Enderlin, M. B.; Weymer, B.; D'Onfro, P. S.; Ramos, R.; Morgan, K.

    2010-12-01

    The San Andreas Fault Observatory at Depth (SAFOD) project is motivated by the need to answer fundamental questions on the physical and chemical processes controlling faulting and earthquake generation within major plate-boundaries. In 2007, approximately 135 ft (41.1 m) of 4 inch (10.16 cm) diameter rock core was recovered from two actively deforming traces of the San Andreas Fault. 97 evenly (more or less) distributed index tests for Unconfined Compressive Strength (UCS) were performed on the cores using a modified point-load penetrometer. The point-load penetrometer used was a handheld micro-conical point indenter referred to as the Dimpler, in reference to the small conical depression that it creates. The core surface was first covered with compliant tape that is about a square inch in size. The conical tip of the indenter is coated with a (red) dye and then forced, at a constant axial load, through the tape and into the sample creating a conical red depression (dimple) on the tape. The combination of red dye and tape preserves a record of the dimple geometrical attributes. The geometrical attributes (e.g. diameter and depth) depend on the rock UCS. The diameter of a dimple is measured with a surface measuring magnifier. Correlation between dimple diameter and UCS has been previously established with triaxial testing. The SAFOD core gave Dimpler UCS values in the range of 10 psi (68.9 KPa) to 15,000 psi (103.4 MPa). The UCS index also allows correlations between geomechanical properties and well log-derived petrophysical properties.

  5. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    SciTech Connect

    York, A.R. II

    1997-07-01

    The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.

  6. Microscopic Strain Mapping in Nanostructured and Microstructured Alumina-Titania Coatings Under 4-point Compressive and Tensile Bending

    DTIC Science & Technology

    2010-06-01

Presented at the Engineering Conference International on Sub-Micron & Nanostructured Ceramics, June 7-12, 2009, Colorado Springs, Colorado, USA. Authors: A. Ignatov, E. K. Akdogan, et al.

  7. Evolution of Skin Temperature after the Application of Compressive Forces on Tendon, Muscle and Myofascial Trigger Point

    PubMed Central

    Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus

    2015-01-01

Some assessment and diagnostic methods require palpation or the application of force to the skin, which affects the structures beneath; this highlights the importance of defining possible influences of such physical contact on skin temperature. Thus, the aim of the present study was to determine the ideal time for performing thermographic examination after palpation, based on the assessment of skin temperature evolution. A randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm² for 30 seconds, with a washout period of 48 hours, using a portable digital dynamometer. Compressive forces were applied to the following spots on the dominant upper limb: the myofascial trigger point in the levator scapulae, the biceps brachii muscle and the palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05) on the palmaris longus tendon and biceps brachii muscle. However, no difference was observed when comparing the different compressive forces (p > 0.05). In conclusion, infrared thermography can be used after assessment or diagnostic methods involving the application of force to tendons and muscles, provided the examination is performed at least 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after contact with the skin. PMID:26070073

  8. A Measurement Method for Large Parts Combining with Feature Compression Extraction and Directed Edge-Point Criterion

    PubMed Central

    Liu, Wei; Zhang, Yang; Yang, Fan; Gao, Peng; Lan, Zhiguang; Jia, Zhenyuan; Gao, Hang

    2016-01-01

High-accuracy surface measurement of large aviation parts is a significant guarantee of high-quality aircraft assembly. The boundary measurement result is a significant parameter in aviation-part measurement. This paper proposes a method for accurately measuring the surface and boundary of aviation parts using feature compression extraction and a directed edge-point criterion. To improve the measurement accuracy of both the surface and the boundary of large parts, extraction of the global boundary and feature analysis of the local stripe are combined. The center feature of the laser stripe is obtained with high accuracy and less computation using a sub-pixel centroid extraction method based on compressed image processing, which consists of an image compression step and a judgment criterion for laser stripe centers. An edge-point extraction method based on a directed arc-length criterion is proposed to obtain an accurate boundary. Finally, a high-precision reconstruction of an aerospace part is achieved. Experiments were performed both in a laboratory and in an industrial field. The physical measurements validate that the mean distance deviation of the proposed method is 0.47 mm. The results of the field experiment show the validity of the proposed method. PMID:28035975

  10. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion.

    PubMed

    Aramburu, José Antonio; García-Fernández, Pablo; García-Lastra, Juan María; Moreno, Miguel

    2016-07-18

First-principles calculations, together with analysis of the experimental data found for 3d(9) and 3d(7) ions in cubic oxides, proved that the center found in irradiated CaO:Ni(2+) corresponds to Ni(+) under a static Jahn-Teller effect displaying a compressed equilibrium geometry. It was also shown that the anomalous positive g∥ shift (g∥-g0=0.065) measured at T=20 K obeys the superposition of the |3z(2)-r(2)⟩ and |x(2)-y(2)⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn-Teller systems and later extended by Ham to the dynamic Jahn-Teller case. To our knowledge, this is the first genuine Jahn-Teller system (i.e., one in which exact degeneracy exists at the high-symmetry configuration) exhibiting a compressed equilibrium geometry for which large quantum effects allow experimental observation of the effect predicted by O'Brien. Analysis of the calculated energy barriers for different Jahn-Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni(+).

  11. Evidence for the Use of Ischemic Compression and Dry Needling in the Management of Trigger Points of the Upper Trapezius in Patients with Neck Pain: A Systematic Review.

    PubMed

    Cagnie, Barbara; Castelein, Birgit; Pollie, Flore; Steelant, Lieselotte; Verhoeyen, Hanne; Cools, Ann

    2015-07-01

The aim of this review was to describe the effects of ischemic compression and dry needling on trigger points in the upper trapezius muscle in patients with neck pain, and to compare these two interventions with other therapeutic interventions aiming to inactivate trigger points. Both PubMed and Web of Science were searched for randomized controlled trials using different key word combinations related to myofascial neck pain and therapeutic interventions. Four main outcome parameters were evaluated in the short and medium term: pain, range of motion, functionality, and quality of life, including depression. Fifteen randomized controlled trials were included in this systematic review. There is moderate evidence for ischemic compression and strong evidence for dry needling to have a positive effect on pain intensity. This pain decrease is greater compared with active range-of-motion exercises (ischemic compression) and no or placebo intervention (ischemic compression and dry needling), but similar to other therapeutic approaches. There is moderate evidence that both ischemic compression and dry needling increase side-bending range of motion, with effects similar to lidocaine injection. There is weak evidence regarding their effects on functionality and quality of life. On the basis of this systematic review, ischemic compression and dry needling can both be recommended in the treatment of neck pain patients with trigger points in the upper trapezius muscle. Additional research with high-quality study designs is needed to develop more conclusive evidence.

  12. Hybrid Energy Storage System Based on Compressed Air and Super-Capacitors with Maximum Efficiency Point Tracking (MEPT)

    NASA Astrophysics Data System (ADS)

    Lemofouet, Sylvain; Rufer, Alfred

This paper presents a hybrid energy storage system mainly based on compressed air, in which the storage and withdrawal of energy are done under maximum-efficiency conditions. As these maximum-efficiency conditions impose the level of converted power, an intermittent, time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems in which the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economic considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead-acid battery system, in the context of a stand-alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.

  13. An evaluation of the sandwich beam in four-point bending as a compressive test method for composites

    NASA Technical Reports Server (NTRS)

    Shuart, M. J.; Herakovich, C. T.

    1978-01-01

    The experimental phase of the study included compressive tests on HTS/PMR-15 graphite/polyimide, 2024-T3 aluminum alloy, and 5052 aluminum honeycomb at room temperature, and tensile tests on graphite/polyimide at room temperature, -157 C, and 316 C. Elastic properties and strength data are presented for three laminates. The room temperature elastic properties were generally found to differ in tension and compression with Young's modulus values differing by as much as twenty-six percent. The effect of temperature on modulus and strength was shown to be laminate dependent. A three-dimensional finite element analysis predicted an essentially uniform, uniaxial compressive stress state in the top flange test section of the sandwich beam. In conclusion, the sandwich beam can be used to obtain accurate, reliable Young's modulus and Poisson's ratio data for advanced composites; however, the ultimate compressive stress for some laminates may be influenced by the specimen geometry.

  14. 26. Central compression lock, north span facing north. Compression lock ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. Central compression lock, north span facing north. Compression lock locks two spans together at highest point. There are three compression locks. - Henry Ford Bridge, Spanning Cerritos Channel, Los Angeles-Long Beach Harbor, Los Angeles, Los Angeles County, CA

  15. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  16. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  17. Solubilities of heavy fossil fuels in compressed gases. Calculation of dew points in tar-containing gas streams

    SciTech Connect

    Monge, A.; Prausnitz, J.M.

    1984-04-01

A molecular-thermodynamic model is used to establish a correlation for solubilities of heavy fossil fuels in dense gases (such as those from a coal gasifier) in the region from ambient to 100 bar and 600 K. This model is then applied to calculate dew points in tar-containing gas streams. Experimental solubility measurements have been made for two Lurgi coal-tar fractions in dry and moist methane. Calculated and experimental solubilities agree well. The correlation is used to establish a design-oriented computer program that can be used, for example, in the design of a continuous-flow heat exchanger.

  18. Data Compression.

    ERIC Educational Resources Information Center

    Bookstein, Abraham; Storer, James A.

    1992-01-01

    Introduces this issue, which contains papers from the 1991 Data Compression Conference, and defines data compression. The two primary functions of data compression are described, i.e., storage and communications; types of data using compression technology are discussed; compression methods are explained; and current areas of research are…

  19. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that "DNABIT Compress" outperforms the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
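The fixed-width baseline that repeat-aware DNA coders such as DNABIT Compress improve on is plain 2-bit packing (4 bases per byte, exactly 2 bits/base). Below is a minimal, hypothetical sketch of that baseline, assuming an A/C/G/T-only alphabet; it is not the DNABIT Compress bit-code assignment itself:

```python
# Pack a DNA sequence at 2 bits per base -- the naive baseline that
# repeat-aware coders try to beat (DNABIT Compress reports ~1.58 bits/base).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Return (packed bytes, original length) for an A/C/G/T sequence."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))  # left-align a final partial group
        out.append(byte)
    return bytes(out), len(seq)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n])

seq = "ACGTACGTGATTACA"
packed, n = pack(seq)
assert unpack(packed, n) == seq  # lossless round trip at 2 bits/base
```

Variable-length bit codes for repeated fragments are what push the ratio below this 2 bits/base floor.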

  20. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
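The index-manipulation idea can be illustrated with a minimal sketch (the `embed`/`extract` helpers are hypothetical, not the patented algorithm): because the quantization indices of a lossy coder are uncertain by one unit, an index can be nudged by one so that its parity carries an auxiliary bit:

```python
def embed(indices, bits):
    """Hide one auxiliary bit per index by forcing the index parity to
    match the bit -- exploiting the one-unit uncertainty of lossy indices."""
    out = list(indices)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1 if out[i] % 2 == 0 else -1  # move by exactly one unit
    return out

def extract(indices, n):
    """Read the first n embedded bits back out of the index parities."""
    return [indices[i] % 2 for i in range(n)]

quantized = [12, 7, 3, 44, 9, 30]   # e.g. quantized transform coefficients
message = [1, 0, 1, 1, 0, 0]
stego = embed(quantized, message)
assert extract(stego, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(quantized, stego))  # <= 1 unit
```

The patent's scheme is richer (key-pair tables, entropy-coding interactions), but the essential move is the same: encode data in choices the lossy representation already leaves free.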

  2. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  4. Vascular compression syndromes.

    PubMed

    Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas

    2015-11-01

Dealing with vascular compression syndromes is one of the most challenging tasks in Vascular Medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins, as well as accompanying nerve structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals, who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This implies important difficulties in differentiating physiological from pathological findings on clinical examination and diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression (with or without revascularisation) can be based is generally poor, mostly coming from retrospective single-centre studies. Proper patient selection is critical in order to avoid overtreatment in patients without a clear association between vascular compression and clinical symptoms. With a focus on the thoracic outlet syndrome, the median arcuate ligament syndrome and the popliteal entrapment syndrome, the present article gives a selective literature review of compression syndromes from an interdisciplinary vascular point of view.

  5. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction by prior knowledge on desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from under sampled apertures. Compressive holography achieves single shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single shot holographic tomography shows the uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing some sparsity constraints. Holographic tomography is applied for video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in coherent image basis due to speckle. So incoherent image estimation is designed to hold the sparsity in incoherent image basis by support of multiple speckle realizations. High pixel count holography achieves high resolution and wide field-of-view imaging. Coherent aperture synthesis can be one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with support of computational and optical co-design. Compressive sparse aperture holography can be another method. Compressive sparse sampling collects most of significant field

  6. libpolycomp: Compression/decompression library

    NASA Astrophysics Data System (ADS)

    Tomasi, Maurizio

    2016-04-01

    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
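The "polynomial compression" idea, storing a few polynomial coefficients per smooth chunk instead of every sample, can be sketched in a toy form. This hypothetical sketch assumes each chunk is exactly quadratic, so three stored numbers reconstruct the whole chunk; the real library fits polynomials to noisy-but-smooth chunks and combines this with filtering of Fourier series:

```python
def compress_chunk(chunk):
    """Represent a chunk by a quadratic through its first, middle, and last
    samples -- 3 stored numbers (plus the length) instead of len(chunk)."""
    n = len(chunk)
    return (chunk[0], chunk[n // 2], chunk[-1], n)

def decompress_chunk(c):
    """Rebuild the chunk by evaluating the Lagrange interpolating quadratic."""
    y0, y1, y2, n = c
    x0, x1, x2 = 0.0, (n - 1) / 2.0, float(n - 1)
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return [p(float(i)) for i in range(n)]

# A smooth, noise-free "ephemeris-like" timeline: exactly quadratic here,
# so reconstruction is essentially lossless at a ratio of len(chunk)/3.
timeline = [0.5 * t * t - 3.0 * t + 7.0 for t in range(33)]
restored = decompress_chunk(compress_chunk(timeline))
assert all(abs(a - b) < 1e-9 for a, b in zip(timeline, restored))
```

Storing three samples is equivalent to storing three quadratic coefficients; on real telescope pointing data one would fit by least squares and bound the residual error per chunk.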

  7. Compression stockings

    MedlinePlus

    ... knee bend. Compression Stockings Can Be Hard to Put on If it's hard for you to put on the stockings, try these tips: Apply lotion ... your legs, but let it dry before you put on the stockings. Use a little baby powder ...

  8. Efficient Compression of High Resolution Climate Data

    NASA Astrophysics Data System (ADS)

    Yin, J.; Schuchardt, K. L.

    2011-12-01

High-resolution climate data can be massive. These data consume a huge amount of disk space, incur significant overhead when output during simulation, introduce high latency for visualization and analysis, and may even make interactive visualization and analysis impossible given the limits of what a conventional cluster can handle. These problems can be alleviated with effective and efficient data compression techniques. Although the HDF5 format supports compression, previous work has mainly employed traditional general-purpose compression schemes, such as dictionary coders and block-sorting compressors. Those schemes focus on encoding repeated byte sequences efficiently and are not well suited to climate data, which consist mainly of distinct floating-point numbers. We plan to select and customize our compression schemes according to the characteristics of high-resolution climate data. One observation is that, as the resolution becomes higher, the values of climate variables such as temperature and pressure become closer in nearby cells. This provides excellent opportunities for prediction-based compression schemes. We have performed a preliminary evaluation of a very simple prediction-based scheme, in which we compute the difference between the current floating-point number and the previous one and then encode the exponent and significand with an entropy-based compression scheme. Our results show compression ratios between 2 and 3 in lossless mode, significantly higher than traditional compression algorithms achieve. We have also developed lossy compression with our techniques, achieving orders-of-magnitude data reduction while ensuring error bounds. Moreover, our compression scheme is much more efficient and introduces much less overhead.
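A toy rendition of such a prediction-based lossless scheme, with the standard-library zlib standing in for the entropy coder (the authors encode exponent and significand separately; here the raw delta bytes are coded whole):

```python
import struct
import zlib

def delta_compress(values):
    """Prediction-based lossless compression sketch: store the first value
    plus successive differences, then entropy-code the bytes with zlib."""
    deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
    return zlib.compress(struct.pack(f"<{len(deltas)}d", *deltas))

def delta_decompress(blob):
    """Invert delta_compress by cumulative summation of the deltas."""
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f"<{len(raw) // 8}d", raw)
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Nearby cells hold nearby values, so the deltas are highly repetitive
# and compress far better than the raw 8-byte doubles would.
field = [288.0 + 0.001 * i for i in range(2000)]   # smooth temperature ramp
blob = delta_compress(field)
assert delta_decompress(blob) == field             # exact round trip
assert len(blob) * 2 < 8 * len(field)              # ratio comfortably > 2
```

The round trip is exact here because consecutive values lie within a factor of two of each other, so each floating-point subtraction is computed without rounding error; a production scheme would guard this property explicitly.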

  9. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
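The linear-compression trick behind the method can be sketched in 1-D. In this hypothetical example the small kernel basis is known by construction; in the paper it is computed as an optimal eigenbasis of the kernel collection:

```python
def convolve(signal, kernel):
    """Plain 'valid' 1-D convolution, O(len(signal) * len(kernel))."""
    m = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

# Toy compressed convolution: the kernel collection spans a 2-dimensional
# space, so N kernel convolutions collapse to 2 basis convolutions plus a
# cheap linear recombination per kernel.
basis = [[1.0, 2.0, 1.0], [1.0, 0.0, -1.0]]        # r = 2 basis kernels
coeffs = [(0.5, 1.0), (2.0, -1.0), (1.5, 0.25)]     # N = 3 kernels' coefficients
kernels = [[a * u + b * v for u, v in zip(*basis)] for a, b in coeffs]

signal = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
basis_conv = [convolve(signal, bk) for bk in basis]  # expensive part, done once

for (a, b), k in zip(coeffs, kernels):
    fast = [a * u + b * v for u, v in zip(*basis_conv)]  # decompress in O(n)
    slow = convolve(signal, k)                           # direct, per kernel
    assert all(abs(x - y) < 1e-12 for x, y in zip(fast, slow))
```

Because convolution is linear, convolving with the r basis kernels once and recombining reproduces all N convolutions; with N in the thousands and r small, this is where the claimed two-to-three orders of magnitude come from.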

  10. Compression and Entrapment Syndromes

    PubMed Central

    Heffernan, L.P.; Benstead, T.J.

    1987-01-01

    Family physicians are often confronted by patients who present with pain, numbness and weakness. Such complaints, when confined to a single extremity, and most particularly to a restricted portion of the extremity, may indicate focal dysfunction of peripheral nerve structures arising from compression and/or entrapment, to which such nerves are selectively vulnerable. The authors of this article consider the paramount clinical features that allow the clinician to arrive at a correct diagnosis, review major points in differential diagnosis, and suggest appropriate management strategies. PMID:21263858

  11. Digital image compression in dermatology: format comparison.

    PubMed

    Guarneri, F; Vaccaro, M; Guarneri, C

    2008-09-01

    Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Expert Group (JPEG), and JPEG2000, on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors, and TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), and JPEG2000 has both a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and of encoding various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  12. Dual compression is not an uncommon type of iliac vein compression syndrome.

    PubMed

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-03-13

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the point of maximum compression, pressure gradient across the stenosis, or percentage of compression. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression, which has typical manifestations on late venography and CT.

  13. LIDAR data compression using wavelets

    NASA Astrophysics Data System (ADS)

    Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.

    2005-10-01

    The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach that approximates the LIDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model, consisting of elevation values with x, y coordinates that make up triangles. Over the years, however, the large size of the TIN data representation has become a case in point for many researchers. Compression of TIN is needed for efficient management of large data sets and good surface visualization. The approach covers the following steps. First, using a Delaunay triangulation, an efficient algorithm generates the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for TIN is then applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. The data set is then compressed at the desired locations using second-generation wavelets. The quality of the geographical surface representation after applying the proposed technique is compared with the original LIDAR data. The results show that the method achieves significant reduction of the data set.
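The split/predict/update structure of a lifting wavelet can be sketched in one dimension (a simplified analogue of the two-step filter described above; the paper's triangle-based 2-D scheme is not reproduced here):

```python
import numpy as np

def lifting_forward(signal):
    """One level of a linear-interpolating lifting wavelet (1-D,
    periodic boundaries): split -> predict -> update."""
    even = signal[::2].astype(float)         # split into even samples ...
    odd = signal[1::2].astype(float)         # ... and odd samples
    pred = 0.5 * (even + np.roll(even, -1))  # predict odd from even pair
    detail = odd - pred                      # stored wavelet coefficients
    coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update step
    return coarse, detail

def lifting_inverse(coarse, detail):
    # Undo the lifting steps in reverse order; each step is exactly invertible.
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[::2], out[1::2] = even, odd
    return out

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
x = 100.0 * np.sin(t)                        # smooth "terrain" profile
coarse, detail = lifting_forward(x)
assert np.allclose(lifting_inverse(coarse, detail), x)  # invertible
print(np.max(np.abs(detail)))  # details are small on smooth data
```

Because each lifting step is invertible by construction, the transform is exactly reversible; on smooth data the detail coefficients are near zero, which is what makes the representation compressible.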

  14. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  15. Experimental compressive phase space tomography

    PubMed Central

    Tian, Lei; Lee, Justin; Oh, Se Baek; Barbastathis, George

    2012-01-01

    Phase space tomography estimates correlation functions entirely from snapshots in the evolution of the wave function along a time or space variable. In contrast, traditional interferometric methods require measurement of multiple two-point correlations. However, as in every tomographic formulation, undersampling poses a severe limitation. Here we present the first, to our knowledge, experimental demonstration of compressive reconstruction of the classical optical correlation function, i.e. the mutual intensity function. Our compressive algorithm makes explicit use of the physically justifiable assumption of a low-entropy source (or state). Since the source was directly accessible in our classical experiment, we were able to compare the compressive estimate of the mutual intensity to an independent ground-truth estimate from the van Cittert-Zernike theorem and verify substantial quantitative improvements in the reconstruction. PMID:22513541

  16. Compressively sensed complex networks.

    SciTech Connect

    Dunlavy, Daniel M.; Ray, Jaideep; Pinar, Ali

    2010-07-01

    The aim of this project is to develop low-dimension parametric (deterministic) models of complex networks, to use compressive sensing (CS) and multiscale analysis to do so, and to exploit the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing of the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general (and so not very efficient or customized). Other model-based CS techniques exist, but have not yet been adapted to networks. An obvious starting point for future work is to increase the efficiency of reconstruction.

  17. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  18. Fracture in compression of brittle solids

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fracture of brittle solids in monotonic compression is reviewed from both the mechanistic and phenomenological points of view. The fundamental theoretical developments based on the extension of pre-existing cracks in general multiaxial stress fields are recognized as explaining extrinsic behavior where a single crack is responsible for the final failure. In contrast, shear faulting in compression is recognized to be the result of an evolutionary localization process involving en echelon action of cracks and is termed intrinsic.

  19. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  20. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, and how they are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  1. HYDRODYNAMIC COMPRESSIVE FORGING.

    DTIC Science & Technology

    HYDRODYNAMICS), (*FORGING, COMPRESSIVE PROPERTIES, LUBRICANTS, PERFORMANCE(ENGINEERING), DIES, TENSILE PROPERTIES, MOLYBDENUM ALLOYS, STRAIN...MECHANICS), BERYLLIUM ALLOYS, NICKEL ALLOYS, CASTING ALLOYS, PRESSURE, FAILURE(MECHANICS).

  2. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
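The scaled-integer quantization described above can be sketched as follows (a toy synthetic frame where the noise level sigma is known by construction; zlib stands in for the Rice coder used by fpack):

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

# Synthetic astronomical frame: flat background + noise + one bright source.
img = 100.0 + rng.normal(0.0, 1.0, (256, 256))
img[120:125, 120:125] += 500.0

sigma = 1.0      # noise level (known by construction; measured in practice)
q = 16           # quantization step = sigma / q, i.e. keep ~4 noise bits

# Quantize pixels as scaled integers; error is bounded by half a step.
step = sigma / q
quantized = np.round(img / step).astype(np.int32)

raw_ratio = img.nbytes / len(zlib.compress(img.tobytes()))
int_ratio = quantized.nbytes / len(zlib.compress(quantized.tobytes()))
print(round(raw_ratio, 2), round(int_ratio, 2))  # integers compress better
```

Discarding noise bits below a fraction of sigma bounds the per-pixel error well inside the noise, while giving the lossless back-end coder a far more compressible integer stream.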

  3. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  4. Texture Studies and Compression Behaviour of Apple Flesh

    NASA Astrophysics Data System (ADS)

    James, Bryony; Fonseca, Celia

    Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types: hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose-designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging, the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.

  5. Lossless wavelet compression on medical image

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing number of medical images are created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless, integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. 
Therefore, a compression scheme generating an embedded code can

  6. Dental Compressed Air Systems.

    DTIC Science & Technology

    1992-03-01

    Weyrauch, Curtis D., Major, USAF; Davis, Samuel P.; Gaines, George W. The purpose of this report is to update guidelines on dental compressed air systems (DCA). Much of the information was obtained from a survey

  7. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  8. Spiral vortices in compressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Gomez, T.; Politano, H.; Pouquet, A.; Larchevêque, M.

    2001-07-01

    We extend the spiral vortex solution of Lundgren [Phys. Fluids 25, 2193 (1982)] to compressible turbulent flows with a perfect gas. This model links the dynamical and the spectral properties of incompressible flows, providing a k^{-5/3} Kolmogorov energy spectrum. In so doing, a compressible spatiotemporal transformation is derived, reducing the dynamics of three-dimensional vortices, stretched by an axisymmetric incompressible strain, into a two-dimensional compressible vortex dynamics. It enables us to write the three-dimensional spectra of the incompressible and compressible square velocities in terms of, respectively, the two-dimensional spectra of the enstrophy and of the square velocity divergence, by the use of a temporal integration. Numerical results are presented from decaying direct simulations performed with 512^{2} grid points; initially, the rms Mach number is 0.23, with local values up to 0.9, the Reynolds number is 700, and the ratio between compressible and incompressible square velocities is 0.1. A k^{-5/3} inertial behavior is seen to result from the dynamical evolution for both the compressible and incompressible three-dimensional spectra.

  9. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
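The 2x2 building block of the H-transform can be sketched as follows (a simplified single level using plain integer sums and differences; the production code adds normalization and recurses on the sum plane):

```python
import numpy as np

def htransform_level(a):
    """One level of an integer H-transform on an even-sized image.

    Each 2x2 block (h0 h1 / h2 h3) maps to a sum and three differences;
    with integer arithmetic the step is exactly reversible, and on
    smooth images the difference planes are near zero and compress well.
    """
    h0 = a[::2, ::2].astype(np.int64)
    h1 = a[::2, 1::2].astype(np.int64)
    h2 = a[1::2, ::2].astype(np.int64)
    h3 = a[1::2, 1::2].astype(np.int64)
    s = h0 + h1 + h2 + h3          # coarse (sum) plane
    dx = h0 + h1 - h2 - h3         # horizontal difference
    dy = h0 - h1 + h2 - h3         # vertical difference
    dd = h0 - h1 - h2 + h3         # diagonal difference
    return s, dx, dy, dd

def htransform_inverse(s, dx, dy, dd):
    # The four combinations below equal 4*h0 .. 4*h3 exactly, so the
    # integer division is lossless.
    h0 = (s + dx + dy + dd) // 4
    h1 = (s + dx - dy - dd) // 4
    h2 = (s - dx + dy - dd) // 4
    h3 = (s - dx - dy + dd) // 4
    out = np.empty((2 * s.shape[0], 2 * s.shape[1]), dtype=np.int64)
    out[::2, ::2], out[::2, 1::2] = h0, h1
    out[1::2, ::2], out[1::2, 1::2] = h2, h3
    return out

img = np.add.outer(np.arange(64), np.arange(64)).astype(np.int64)  # ramp
s, dx, dy, dd = htransform_level(img)
assert (htransform_inverse(s, dx, dy, dd) == img).all()  # lossless
```

On this smooth ramp the diagonal differences vanish entirely, illustrating why the transform concentrates the signal into the sum plane while the difference planes carry mostly noise.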

  10. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  11. Compressive holographic video

    NASA Astrophysics Data System (ADS)

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K.; Chen, Huaijin

    2017-01-01

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  12. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  13. Muon cooling: longitudinal compression.

    PubMed

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-06

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^{7}. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^{4}.

  14. Compressive laser ranging.

    PubMed

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
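The recovery side of such a scheme can be sketched with a toy orthogonal-matching-pursuit decoder (an assumed generic compressive-sensing setup, not the authors' apparatus): a pseudorandom binary matrix plays the role of the transmit/receive modulation, and a sparse range profile is recovered from far fewer measurements than range bins.

```python
import numpy as np

rng = np.random.default_rng(7)

n, m, k = 256, 96, 3                 # range bins, measurements, targets
profile = np.zeros(n)
profile[[40, 140, 200]] = [1.0, 0.9, 0.8]   # sparse range profile

# Pseudorandom binary (+/-1) modulation, one row per measurement.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ profile                      # compressive measurements

# Orthogonal matching pursuit: greedily pick the best-correlated bin,
# re-fit all selected bins by least squares, and update the residual.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

estimate = np.zeros(n)
estimate[support] = sol
print(np.max(np.abs(estimate - profile)))  # near-exact on noiseless data
```

With 96 measurements for 256 range bins, the sparse profile is recovered essentially exactly in this noiseless toy; the experiment in the paper faces detector noise and hardware constraints that this sketch omits.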

  15. Compressive optical image encryption.

    PubMed

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-05-20

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume.

  16. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is neutrino diffusion in core-collapse supernovae.

  17. Compressive holographic video.

    PubMed

    Wang, Zihao; Spinoulas, Leonidas; He, Kuan; Tian, Lei; Cossairt, Oliver; Katsaggelos, Aggelos K; Chen, Huaijin

    2017-01-09

    Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

  18. Vertebral Compression Fractures

    MedlinePlus

    Osteogenesis Imperfecta Foundation: Information on Vertebral Compression Fractures. 804 W. Diamond Ave., Ste. 210, Gaithersburg, MD 20878; (800) 981-

  19. Smoothing DCT Compression Artifacts

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Horng, R.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Image compression based on quantizing the image in the discrete cosine transform (DCT) domain can generate blocky artifacts in the output image. It is possible to reduce these artifacts and RMS error by adjusting measures of block edginess and image roughness, while restricting the DCT coefficient values to values that would have been quantized to those of the compressed image. We also introduce a DCT coefficient amplitude adjustment that reduces RMS error.
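How DCT-domain quantization produces these artifacts can be sketched with a single 8x8 block (NumPy only; the smoothing adjustment itself, which tunes coefficients within their quantization bins, is not reproduced here):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis; rows are the cosine basis vectors."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    mat[0] *= 1.0 / np.sqrt(2.0)
    return mat * np.sqrt(2.0 / n)

D = dct_matrix()
block = np.add.outer(np.arange(8), np.arange(8)) * 4.0  # smooth 8x8 block

coeff = D @ block @ D.T          # forward 2-D DCT
step = 24.0                      # coarse quantizer -> visible blockiness
quant = np.round(coeff / step) * step
recon = D.T @ quant @ D          # inverse 2-D DCT: the "compressed" block

rms = np.sqrt(np.mean((recon - block) ** 2))
print(round(rms, 2))             # error the smoothing step tries to reduce
```

Because the transform is orthonormal, each coefficient error of at most half a quantization step bounds the pixel-domain RMS error; the smoothing method exploits the remaining freedom inside each quantization bin to reduce block edginess further.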

  20. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  1. Compressed image deblurring

    NASA Astrophysics Data System (ADS)

    Xu, Yuquan; Hu, Xiyuan; Peng, Silong

    2014-03-01

    We propose an algorithm to recover the latent image from blurred and compressed input. Although many image deblurring algorithms have been proposed in recent years, most previous methods do not consider the compression effect in blurry images. In practice, however, most real-world images are compressed. This compression introduces a typical kind of noise, blocking artifacts, which do not follow the Gaussian distribution assumed by most existing algorithms. Without properly handling this non-Gaussian noise, the recovered image will suffer severe artifacts. Inspired by the statistical properties of compression error, we model the non-Gaussian noise with a hyper-Laplacian distribution. Based on this model, an efficient nonblind image deblurring algorithm based on a variable-splitting technique is proposed to solve the resulting nonconvex minimization problem. Finally, we also present an effective blind image deblurring algorithm which can deal with compressed and blurred images efficiently. Extensive experiments compared with state-of-the-art nonblind and blind deblurring methods demonstrate the effectiveness of the proposed method.

  2. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
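The filling step — non-edge pixels relaxed to the solution of Laplace's equation with edge pixels fixed — can be illustrated with plain Jacobi iteration. The patent uses a multi-grid solver, so this single-grid sketch with made-up data is only a conceptual stand-in:

```python
import numpy as np

def laplace_fill(values, known_mask, iters=500):
    """Fill unknown pixels by relaxing Laplace's equation.

    Known (edge) pixels keep their values; every other pixel converges
    toward the average of its four neighbours (periodic boundaries here,
    purely for brevity).
    """
    grid = np.where(known_mask, values, values[known_mask].mean())
    for _ in range(iters):
        avg = 0.25 * (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                      np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
        grid = np.where(known_mask, values, avg)  # keep edge pixels fixed
    return grid

# "Edges" on two opposite borders; the interior is interpolated smoothly.
img = np.zeros((8, 8))
img[0, :] = 100.0
mask = np.zeros((8, 8), bool)
mask[0, :] = True
mask[-1, :] = True
filled = laplace_fill(img, mask)
```

For this configuration the interior relaxes to a linear ramp between the two fixed rows, which is exactly the smooth fill the difference array is taken against.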

  3. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  4. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
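The core idea — replace the DCT's irrational cosine values with small integers while preserving the basis signs and symmetries — can be shown with a 4-point example. The matrix below is the integer basis used by H.264's core transform, not the article's order-8 ICT, but it illustrates the same construction:

```python
import numpy as np

# 4-point integer approximation of the DCT basis: same sign pattern and
# symmetries as the DCT rows, but entries are small integers, so the
# forward transform needs only integer additions and shifts.
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

x = np.array([10, 12, 13, 11])
y = T @ x                                   # forward transform (all integer)
# Rows are orthogonal but not unit-norm; invert by dividing out row norms.
x_rec = T.T @ (y / (T ** 2).sum(axis=1))
```

Because the rows are mutually orthogonal, perfect reconstruction only requires a per-row scaling, which in practice is folded into the quantizer.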

  5. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    NASA Astrophysics Data System (ADS)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high-density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high-temperature (1100 °C) and high-pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of its counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  6. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression.

    PubMed

    Guerette, Michael; Ackerson, Michael R; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E; Walker, David; Huang, Liping

    2015-10-15

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high-density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high-temperature (1100 °C) and high-pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of its counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  7. Simulation and modeling of homogeneous, compressed turbulence

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.

    1985-01-01

    Low Reynolds number homogeneous turbulence undergoing low Mach number isotropic and one-dimensional compression was simulated by numerically solving the Navier-Stokes equations. The numerical simulations were performed on a CYBER 205 computer using a 64 x 64 x 64 mesh. A spectral method was used for spatial differencing and the second-order Runge-Kutta method for time advancement. A variety of statistical information was extracted from the computed flow fields. These include three-dimensional energy and dissipation spectra, two-point velocity correlations, one-dimensional energy spectra, turbulent kinetic energy and its dissipation rate, integral length scales, Taylor microscales, and Kolmogorov length scale. Results from the simulated flow fields were used to test one-point closure, two-equation models. A new one-point-closure, three-equation turbulence model which accounts for the effect of compression is proposed. The new model accurately calculates four types of flows (isotropic decay, isotropic compression, one-dimensional compression, and axisymmetric expansion flows) for a wide range of strain rates.

  8. Is Individualizing Breast Compression during Mammography useful? - Investigations of pain indications during mammography relating to compression force and surface area of the compressed breast.

    PubMed

    Feder, Katarzyna; Grunert, Jens-Holger

    2017-01-01

    Purpose The aim of this paper is to determine how pain during mammographic compression could be reduced. To this end, we examine its relationship with compression force, surface area of the compressed breast, breast density (ACR) and prior operations. Materials and Methods In 199 women, 765 mammograms were performed. Women were asked to rate the level of pain on a scale of 0 - 10 (0: no pain, 10: highest pain). The surface area of the breast under compression captured by the mammograms was measured using planimetry. 52 of the 199 women were asked to identify the area of the upper body with the highest level of pain. Results The thickness of the compressed breast was 65.2 % of the uncompressed breast at a force of 10 daN (57.8 % at 15 daN). When the force was increased from 10 daN to 15 daN, the average glandular dose (AGD) declined by 17 %. Tolerance of compression was associated with the size of the breast. More than 50 % of the mammograms with a low compression force of less than 9 daN were associated with higher levels of pain. In the oblique projection, 60 % of the women identified the axilla as the area of maximum pain. Conclusion Women with larger breasts tolerated a greater compression force. This implies a need for individualized examination depending on the size of the breast. Women with increased pain susceptibility terminated the compression early even at low forces of less than 9 daN. More than 50 % of the women identified areas outside the breast as especially painful. Therefore, during the examination, the areas around the breast should also be taken into consideration in order to minimize unnecessary discomfort. Key Points · With increased mammographic compression force, the effectiveness of breast thickness reduction declined. · A compression force of 15 daN enabled an additional 17 % reduction in average glandular dose (AGD) compared to 10 daN. · Tolerance of increased compression force was related to

  9. Transverse Compression of Tendons.

    PubMed

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.

  10. Multishock Compression Properties of Warm Dense Argon

    PubMed Central

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-01-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed with a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20–150 GPa and 1.9–5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2–23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (η_i = ρ_i/ρ_0) is greatly enhanced from 3.3 to 8.8, where ρ_0 is the initial density of argon and ρ_i (i = 1, 2, 3, 4) is the compressed density after the first to fourth shocks, respectively. For the relative compression ratio (η_i′ = ρ_i/ρ_(i−1)), an interesting finding is that a turning point occurs at the second shocked state under different experimental conditions: η_i′ increases with pressure in the lower-density regime and conversely decreases with pressure in the higher-density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increases the compression, and by the interaction effects between particles, which reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime. PMID:26515505
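The two ratio definitions used above are simple to compute from a density sequence. The densities below are illustrative placeholders, not the paper's measured values:

```python
# Cumulative and relative compression ratios for a multishock sequence.
rho0 = 0.6                       # assumed initial density, g/cm^3 (hypothetical)
rho = [2.0, 3.5, 4.6, 5.3]       # densities after shocks 1..4 (hypothetical)

# eta_i = rho_i / rho_0: total compression relative to the initial state.
eta_cum = [r / rho0 for r in rho]

# eta_i' = rho_i / rho_(i-1): incremental compression added by each shock.
eta_rel = [rho[0] / rho0] + [rho[i] / rho[i - 1] for i in range(1, len(rho))]
```

With these placeholder numbers the cumulative ratio grows with each reverberation while the per-shock ratio shrinks, since each successive shock acts on an already-densified state.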

  11. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence models to correctly handle the natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing these motions into an 'organized' part, for which equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation turbulence models and second-order turbulence models can yield reasonable results. For a specific compressible unsteady turbulent flow, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in rapid flow parts, but shocklets do not yet occur.

  12. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  13. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: the isentropic-flow equations; the Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction); the Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section); the normal-shock equations; the oblique-shock equations; and the expansion equations.
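As a taste of the isentropic-flow family, the stagnation-to-static temperature and pressure ratios follow directly from the Mach number for a perfect gas. This standalone sketch is not the toolbox's MATLAB code:

```python
def isentropic_ratios(M, gamma=1.4):
    """Stagnation-to-static ratios for a perfect gas at Mach number M.

    T0/T = 1 + (gamma - 1)/2 * M^2
    p0/p = (T0/T)^(gamma / (gamma - 1))
    """
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * M ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

t, p = isentropic_ratios(1.0)   # sonic flow in air (gamma = 1.4)
```

At M = 1 in air this gives the familiar T0/T = 1.2 and p0/p ≈ 1.893, the values tabulated in any compressible-flow text.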

  14. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux-compression (explosive) generator, in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe was removed), both the Teflon and the argon become conductive. The conductivity could not be determined (the insulating properties of Teflon under these conditions are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω·cm)⁻¹, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe, or to measure the conductivity without a probe, are being sought.

  15. Image data compression investigation

    NASA Technical Reports Server (NTRS)

    Myrie, Carlos

    1989-01-01

    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.

  16. Vacancy behavior in a compressed fcc Lennard-Jones crystal

    SciTech Connect

    Beeler, J.R. Jr.

    1981-12-01

    This computer experiment study concerns the determination of the stable vacancy configuration in a compressed fcc Lennard-Jones crystal and the migration of this defect in a compressed crystal. Isotropic and uniaxial compression stress conditions were studied. The isotropic and uniaxial compression magnitudes employed were 0.94 ≤ η ≤ 1.5 and 1.0 ≤ η ≤ 1.5, respectively. The site-centered vacancy (SCV) was the stable vacancy configuration whenever cubic symmetry was present. This includes all of the isotropic compression cases and the particular uniaxial compression case (η = √2) that gives a bcc structure. In addition, the SCV was the stable configuration for uniaxial compression η < 1.29. The out-of-plane split vacancy (SV-OP) was the stable vacancy configuration for uniaxial compression 1.29 < η ≤ 1.5 and was the saddle-point configuration for SCV migration when the SCV was the stable form. For η > 1.20, the SV-OP is an extended defect and, therefore, a saddle point for SV-OP migration could not be determined. The mechanism for the transformation from the SCV to the SV-OP as the stable form at η = 1.29 appears to be an alternating-sign (101) and/or (011) shear process.
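The interatomic model underlying such computer experiments is the Lennard-Jones pair potential, sketched here in reduced units (this is the generic potential, not the study's specific parameterization or vacancy-energy calculation):

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6].

    epsilon sets the well depth, sigma the zero-crossing distance; the
    minimum sits at r = 2^(1/6) * sigma with V = -epsilon.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

well = lj(2 ** (1 / 6))   # potential at the equilibrium separation
```

Compressing the crystal (η > 1) pushes neighbors up the steep r⁻¹² wall, which is what makes the stable vacancy configuration stress-dependent.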

  17. Energy Transfer and Triadic Interactions in Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, F.; Zhou, Ye; Bertoglio, Jean-Pierre

    1997-01-01

    Using a two-point closure theory, the Eddy-Damped-Quasi-Normal-Markovian (EDQNM) approximation, we have investigated the energy transfer process and triadic interactions of compressible turbulence. In order to analyze the compressible mode directly, the Helmholtz decomposition is used. The following issues were addressed: (1) What is the mechanism of energy exchange between the solenoidal and compressible modes, and (2) Is there an energy cascade in the compressible energy transfer process? It is concluded that the compressible energy is transferred locally from the solenoidal part to the compressible part. It is also found that there is an energy cascade of the compressible mode for high turbulent Mach number (M(sub t) greater than or equal to 0.5). Since we assume that the compressibility is weak, the magnitude of the compressible (radiative or cascade) transfer is much smaller than that of solenoidal cascade. These results are further confirmed by studying the triadic energy transfer function, the most fundamental building block of the energy transfer.

  18. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  19. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
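The two parameters studied above, cutoff frequency and compression ratio, define a frequency-mapping rule. One common formulation compresses log-frequency distances above the cutoff; the sketch below uses that generic rule with illustrative parameter values, not the study's specific algorithm or settings:

```python
def nfc_map(f, cutoff=2000.0, ratio=2.0):
    """Generic nonlinear frequency compression map (Hz -> Hz).

    Frequencies at or below the cutoff pass through unchanged; above it,
    the log-frequency distance from the cutoff is divided by the
    compression ratio. Parameter values are illustrative only.
    """
    if f <= cutoff:
        return f
    return cutoff * (f / cutoff) ** (1.0 / ratio)

lowered = nfc_map(8000.0)   # an 8 kHz input component
```

With a 2 kHz cutoff and a 2:1 ratio, an 8 kHz component lands at 4 kHz; raising the ratio or lowering the cutoff moves more high-frequency energy into the audible region, at the cost of the sound-quality effects the studies measured.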

  20. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability pack many related files into one archive (for example, all files…

  1. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University- Magnolia (SAU-M) began a two semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  2. Focus on Compression Stockings

    MedlinePlus

    ... soap. Do not use Woolite™ detergent. Use warm water and wash by hand or in the gentle cycle in the washing machine. After rinsing the compression stocking completely, remove excess water by rolling it in a ... the dryer on the delicate cycle at a cool temperature. It may be convenient ...

  3. Compression Garments and Exercise: No Influence of Pressure Applied

    PubMed Central

    Beliard, Samuel; Chauveau, Michel; Moscatiello, Timothée; Cros, François; Ecarnot, Fiona; Becker, François

    2015-01-01

    Compression garments on the lower limbs are increasingly popular among athletes who wish to improve performance, reduce exercise-induced discomfort, and reduce the risk of injury. However, the beneficial effects of compression garments have not been clearly established. We performed a review of the literature for prospective, randomized, controlled studies, using quantified lower limb compression in order to (1) describe the beneficial effects that have been identified with compression garments, and in which conditions; and (2) investigate whether there is a relation between the pressure applied and the reported effects. The pressures delivered were measured either under laboratory conditions on garments identical to those used in the studies, or derived from publication data. Twenty-three original articles were selected for inclusion in this review. The effects of wearing compression garments during exercise are controversial, as most studies failed to demonstrate a beneficial effect on immediate or performance recovery, or on delayed-onset muscle soreness. There was a trend towards a beneficial effect of compression garments worn during recovery: performance recovery was found to be improved in the five studies in which this was investigated, and delayed-onset muscle soreness was reportedly reduced in three of these five studies. There is no apparent relation between the effects of compression garments worn during or after exercise and the pressures applied, since beneficial effects were obtained with both low and high pressures. Wearing compression garments during recovery from exercise seems to be beneficial for performance recovery and delayed-onset muscle soreness, but the factors explaining this efficacy remain to be elucidated. Key points We observed no relationship between the effects of compression and the pressures applied. The pressure applied at the level of the lower limb by compression garments destined for use by athletes varies widely between

  4. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and to reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
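The acquisition model described above — multiple coded sub-frames integrated into one camera readout — has a compact forward form: the camera records the sum of mask-modulated sub-frames, and reconstruction (not shown) inverts this with a sparsity prior. A minimal simulation with illustrative sizes and random masks:

```python
import numpy as np

# Forward model of coded-aperture temporal compressive sensing.
rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                          # sub-frames per readout, image size

subframes = rng.random((T, H, W))            # dynamic scene during one exposure
masks = rng.integers(0, 2, (T, H, W))        # known binary code per sub-frame

# The detector integrates the coded sub-frames into a single frame:
coded_frame = (masks * subframes).sum(axis=0)
```

One readout thus carries information about T moments in time; CS inversion trades that multiplexing back into an effective T-fold frame-rate increase.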

  5. Effects on MR images compression in tissue classification quality

    NASA Astrophysics Data System (ADS)

    Santalla, H.; Meschino, G.; Ballarin, V.

    2007-11-01

    It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of "quality" is essential. What do we understand by "quality"? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which depend on the subsequent use we intend for them. This work proposes a quantitative analysis of quality for lossy compressed magnetic resonance (MR) images, and of their influence on automatic tissue classification accomplished with these images.

  6. Multimode Data-Compression System

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1996-01-01

    Data-compression system developed to satisfy need for high-speed, high-performance compression of data from sources as diverse as medical images, high-definition television images, audio signals, readouts from scientific instruments, and binary data files. Maximum data-transmission capability of communication channel or storage capacity of storage device multiplied by approximately compression ratio. Various combinations of lossless and lossy compression chosen to suit various data streams.

  7. Progressive transmission and compression images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.
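
    The ordering strategy can be illustrated with a toy calculation (the subband values below are invented; the paper derives the ordering analytically from the Laplacian model):

```python
# Fit a Laplacian scale to each subband (ML estimate b = mean |x|) and
# transmit higher-variance subbands first; the variance of a Laplacian
# is 2*b^2, used here as a simple proxy for distortion reduction per bit.
subbands = {
    "LL": [4.0, -3.5, 5.1, -4.2],
    "LH": [0.8, -1.1, 0.6, -0.7],
    "HL": [1.9, -2.2, 1.5, -1.8],
    "HH": [0.2, -0.1, 0.3, -0.2],
}

def laplacian_scale(coeffs):
    # Maximum-likelihood estimate of the Laplacian scale parameter.
    return sum(abs(c) for c in coeffs) / len(coeffs)

order = sorted(subbands, key=lambda s: -2 * laplacian_scale(subbands[s]) ** 2)
print(order)  # ['LL', 'HL', 'LH', 'HH']
```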

  8. Compression of Ultrafast Laser Beams

    DTIC Science & Technology

    2016-03-01

    the theory, construction, and evaluation of 2 separate algorithms, a modified genetic algorithm and the multiphoton intrapulse interference phase... pulse compression was evaluated, and it was found that the MIIPS algorithm was superior to the genetic algorithm for pulse compression. ...SUBJECT TERMS: ultrafast lasers, pulse compression, genetic algorithm, MIIPS algorithm, pulse shaping, pulse shaper construction

  9. Predictive Encoding in Text Compression.

    ERIC Educational Resources Information Center

    Raita, Timo; Teuhola, Jukka

    1989-01-01

    Presents three text compression methods of increasing power and evaluates each based on the trade-off between compression gain and processing time. The advantages of using hash coding for speed and optimal arithmetic coding to successor information for compression gain are discussed. (26 references) (Author/CLB)
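
    A minimal sketch of successor prediction with a hash-style table (an illustrative simplification, not Raita and Teuhola's exact methods):

```python
# A hash table maps each one-character context to its last successor;
# a correct prediction costs one flag bit, a miss costs a flag bit
# plus an 8-bit literal.
def predictive_encode_size(text):
    successor = {}
    bits = 0
    prev = ""
    for ch in text:
        if successor.get(prev) == ch:
            bits += 1            # hit: flag bit only
        else:
            bits += 1 + 8        # miss: flag bit + literal byte
        successor[prev] = ch     # update the predictor for this context
        prev = ch
    return bits

print(predictive_encode_size("abababababab"), 8 * 12)  # 36 vs 96 bits
```

    The trade-off the paper discusses is visible even here: the table lookup is cheap (speed), while a stronger model or arithmetic coding of the successor information would buy further compression gain at more processing time.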

  10. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  11. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
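
    A highly simplified sketch of block-local clustering (two clusters per block via a mean split; the actual BCCA feature extraction is more elaborate):

```python
# Each block is sent as two cluster means plus one membership bit per
# pixel, instead of 8 bits per pixel.
def compress_block(block):
    mean = sum(block) / len(block)
    lo = [p for p in block if p <= mean]
    hi = [p for p in block if p > mean]
    centers = (sum(lo) / len(lo) if lo else 0.0,
               sum(hi) / len(hi) if hi else 0.0)
    membership = [int(p > mean) for p in block]
    return centers, membership

block = [10, 12, 11, 200, 198, 11, 202, 12]   # toy 8-pixel block
centers, membership = compress_block(block)
# Raw: 8 pixels * 8 bits = 64 bits.  Coded: 2*8 + 8 = 24 bits.
print(centers, membership)
```

    Note how the computation is simple and repetitive per block, which is what makes the parallel, high-data-rate implementation mentioned in the abstract feasible.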

  12. Beamforming Using Compressive Sensing

    DTIC Science & Technology

    2011-10-01

    ...for an arbitrarily spaced array, the rank of A may be insufficient... G. F. Edelmann and C. F. Gaumond, "Beamforming using compressive sensing," JASA Express Letters, DOI: 10.1121/1.3632046, published online 09 September 2011; J. Acoust. Soc. Am. 130 (4), EL233, October 2011.

  13. Shock compression of nitrobenzene

    NASA Astrophysics Data System (ADS)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi

    1999-06-01

    The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode in spite of the fact that its calculated heat of detonation is similar to TNT, about 1 kcal/g. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so it is expected that nitrobenzene detonates in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparing the Hugoniot and the isotherm, nitrobenzene is in the liquid phase under the shock conditions investigated. From the expected phase diagram, shocked nitrobenzene seems to remain a metastable liquid within the solid-phase region of that diagram.

  14. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment that involved stretching wire: a question was raised about what would happen if we compressed something instead. We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it, too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1

  15. Knee joint passive stiffness and moment in sagittal and frontal planes markedly increase with compression.

    PubMed

    Marouane, H; Shirazi-Adl, A; Adouni, M

    2015-01-01

    Knee joints are subject to large compression forces in daily activities. Due to artefact moments and instability under large compression loads, biomechanical studies impose additional constraints to circumvent the compression position-dependency in response. To quantify the effect of compression on passive knee moment resistance and stiffness, two validated finite element models of the tibiofemoral (TF) joint, one refined with depth-dependent fibril-reinforced cartilage and the other less refined with homogeneous isotropic cartilage, are used. The unconstrained TF joint response in sagittal and frontal planes is investigated at different flexion angles (0°, 15°, 30° and 45°) up to 1800 N compression preloads. The compression is applied at a novel joint mechanical balance point (MBP) identified as a point at which the compression does not cause any coupled rotations in sagittal and frontal planes. The MBP of the unconstrained joint is located at the lateral plateau in small compressions and shifts medially towards the inter-compartmental area at larger compression forces. The compression force substantially increases the joint moment-bearing capacities and instantaneous angular rigidities in both frontal and sagittal planes. The varus-valgus laxities diminish with compression preloads despite concomitant substantial reductions in collateral ligament forces. While the angular rigidity would enhance the joint stability, the augmented passive moment resistance under compression preloads plays a role in supporting external moments and should as such be considered in the knee joint musculoskeletal models.

  16. Compression therapy for venous disease.

    PubMed

    Attaran, Robert R; Ochoa Chaar, Cassius I

    2017-03-01

    For centuries, compression therapy has been utilized to treat venous disease, and to date it remains the mainstay of therapy, particularly in more severe forms such as venous ulceration. In addition to mechanisms of benefit, we discuss the evidence behind compression therapy, particularly hosiery, in various forms of venous disease of the lower extremities. We review compression data for stand-alone therapy, post-intervention use, DVT prevention, post-thrombotic syndrome, and venous ulcer disease. We also review the data comparing compression modalities, as well as the use of compression in mixed arteriovenous disease.

  17. Compressive Estimation and Imaging Based on Autoregressive Models.

    PubMed

    Testa, Matteo; Magli, Enrico

    2016-11-01

    Compressed sensing (CS) is a fast and efficient way to obtain compact signal representations. Oftentimes, one wishes to extract some information from the available compressed signal. Since CS signal recovery is typically expensive from a computational point of view, it is inconvenient to first recover the signal and then extract the information. A much more effective approach consists in estimating the information directly from the signal's linear measurements. In this paper, we propose a novel framework for compressive estimation of autoregressive (AR) process parameters based on ad hoc sensing matrix construction. More in detail, we introduce a compressive least square estimator for AR(p) parameters and a specific AR(1) compressive Bayesian estimator. We exploit the proposed techniques to address two important practical problems. The first is compressive covariance estimation for Toeplitz structured covariance matrices where we tackle the problem with a novel parametric approach based on the estimated AR parameters. The second is a block-based compressive imaging system, where we introduce an algorithm that adaptively calculates the number of measurements to be acquired for each block from a set of initial measurements based on its degree of compressibility. We show that the proposed techniques outperform the state-of-the-art methods for these two problems.
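
    For orientation, a minimal least-squares AR(1) estimator on raw samples (hedged: the paper's contribution is estimating this directly from compressed linear measurements via a purpose-built sensing matrix, which is not reproduced here):

```python
import random

# Simulate an AR(1) process x_t = phi * x_{t-1} + w_t and recover phi
# by least squares (lag-1 regression).
def ar1_ls(x):
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(1)
phi = 0.9
x = [0.0]
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0, 1))

print(round(ar1_ls(x), 2))  # estimate is typically close to 0.9
```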

  18. Compressed digital holography: from micro towards macro

    NASA Astrophysics Data System (ADS)

    Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter

    2016-09-01

    signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.

  19. Spectra and statistics in compressible isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Jianchun; Gotoh, Toshiyuki; Watanabe, Takeshi

    2017-01-01

    Spectra and one-point statistics of velocity and thermodynamic variables in isotropic turbulence of compressible fluid are examined by using numerical simulations with solenoidal forcing at the turbulent Mach number Mt from 0.05 to 1.0 and at the Taylor Reynolds number Reλ from 40 to 350. The velocity field is decomposed into a solenoidal component and a compressible component in terms of the Helmholtz decomposition, and the compressible velocity component is further decomposed into a pseudosound component, namely, the hydrodynamic component associated with the incompressible field, and an acoustic component associated with sound waves. It is found that the acoustic mode dominates over the pseudosound mode at turbulent Mach numbers Mt ≥ 0.4 in our numerical simulations. At turbulent Mach numbers Mt ≤ 0.4, there exists a critical wave number kc beyond which the pseudosound mode dominates, while the acoustic mode dominates at small wave numbers k < kc. The pseudosound component of the compressible velocity is fully enslaved to the solenoidal velocity, and its spectrum scales as Mt^4 k^(-3) in the inertial range. It is also found that in the inertial range, the spectra of pressure, density, and temperature exhibit a k^(-7/3) scaling for Mt ≤ 0.3 and a k^(-5/3) scaling for Mt ≥ 0.5.

  20. [Compression therapy in leg ulcers].

    PubMed

    Dissemond, J; Protz, K; Reich-Schupke, S; Stücker, M; Kröger, K

    2016-04-01

    Compression therapy is a well-tried treatment with only few side effects for most patients with leg ulcers and/or edema. Despite the very long tradition in German-speaking countries and good evidence for compression therapy in different indications, recent scientific findings indicate that the current situation in Germany is unsatisfactory. Today, compression therapy can be performed with very different materials and systems. In addition to traditional bandaging with the Unna boot and short-stretch, long-stretch, or multicomponent bandage systems, medical compression ulcer stockings are available. Other very effective but far less common alternatives are Velcro wrap systems. When planning compression therapy, it is also important to consider donning devices with the patient. In addition, intermittent pneumatic compression therapy can be used. Through these various treatment options, it is now possible to develop an individually accepted, functional therapy strategy, geared to the needs of the patient, for nearly all patients with leg ulcers.

  1. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved; in communication, we always want to transmit data efficiently and free of noise. This paper presents several techniques for lossless compression of text-type data, together with comparative results for multiple and single compression, which will help to find the better compression output and to develop compression algorithms.
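
    The single- versus multiple-compression comparison can be sketched with standard-library codecs (an illustration in the spirit of the paper, not its exact test set):

```python
import bz2
import lzma
import zlib

# Compare three stdlib lossless codecs on the same text, then apply a
# second round of compression to the already compressed stream.
data = ("business data processing involves large volumes of text " * 200).encode()

sizes = {
    "zlib": len(zlib.compress(data, 9)),
    "bz2": len(bz2.compress(data, 9)),
    "lzma": len(lzma.compress(data)),
}
double = len(zlib.compress(zlib.compress(data, 9), 9))

print(sizes)
# A second pass buys little: deflate output is already near-random.
print(double, sizes["zlib"])
```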

  2. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms.
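
    As a baseline for scale (not SeqCompress itself, which combines a statistical model with arithmetic coding), a naive 2-bit-per-base packing already gives 4:1 over 8-bit ASCII:

```python
# Pack A/C/G/T into 2 bits per base (4 bases per byte).
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

seq = "ACGTACGTACGT"
print(len(pack(seq)), len(seq))  # 3 bytes instead of 12
```

    Specialized compressors beat this bound by exploiting statistical structure (repeats, context dependence) that uniform packing ignores.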

  3. Compressible magnetohydrodynamic sawtooth crash

    NASA Astrophysics Data System (ADS)

    Sugiyama, Linda E.

    2014-02-01

    In a toroidal magnetically confined plasma at low resistivity, compressible magnetohydrodynamics (MHD) predicts that an m = 1/n = 1 sawtooth has a fast, explosive crash phase with abrupt onset, a rate nearly independent of resistivity, and localized temperature redistribution similar to experimental observations. Large-scale numerical simulations show that the 1/1 MHD internal kink grows exponentially at a resistive rate until a critical amplitude, when the plasma motion accelerates rapidly, culminating in fast loss of the temperature and magnetic structure inside q < 1, with somewhat slower density redistribution. Nonlinearly, for small effective growth rate the perpendicular momentum rate of change remains small compared to its individual terms ∇p and J × B until the fast crash, so that the compressible growth rate is determined by higher order terms in a large-aspect-ratio expansion, as in the linear eigenmode. Reduced MHD fails completely to describe the toroidal mode; no Sweet-Parker-like reconnection layer develops. Important differences result from toroidal mode coupling effects. A set of large-aspect-ratio compressible MHD equations shows that the large-aspect-ratio expansion also breaks down in typical tokamaks with r_(q=1)/Ro ≃ 1/10 and a/Ro ≃ 1/3. In the large-aspect-ratio limit, failure extends down to much smaller inverse aspect ratio, at growth rate scalings γ = O(ε^2). Higher order aspect ratio terms, including B̃φ, become important. Nonlinearly, higher toroidal harmonics develop faster and to a greater degree than for large aspect ratio and help to accelerate the fast crash. The perpendicular momentum property applies to other transverse MHD instabilities, including m ≥ 2 magnetic islands and the plasma edge.

  4. International magnetic pulse compression

    SciTech Connect

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12--14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card -- its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  5. International magnetic pulse compression

    NASA Astrophysics Data System (ADS)

    Kirbie, H. C.; Newton, M. A.; Siemens, P. D.

    1991-04-01

    Although pulsed-power engineering traditionally has been practiced by a fairly small, close community in the areas of defense and energy research, it is becoming more common in high-power, high-energy commercial pursuits such as material processing and lasers. This paper is a synopsis of the Feb. 12-14, 1990 workshop on magnetic switching as it applies primarily to pulse compression (power transformation). During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card - its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  6. The compression of liquids

    NASA Astrophysics Data System (ADS)

    Whalley, E.

    The compression of liquids can be measured either directly, by applying a pressure and noting the volume change, or indirectly, by measuring the magnitude of the fluctuations of the local volume. The methods used in Ottawa for the direct measurement of the compression are reviewed. The mean-square deviation of the volume from the mean at constant temperature can be measured by X-ray and neutron scattering at low angles, and the mean-square deviation at constant entropy can be measured by measuring the speed of sound. The speed of sound can be measured either acoustically, using an acoustic transducer, or by Brillouin spectroscopy. Brillouin spectroscopy can also be used to study the shear waves in liquids if the shear relaxation time is > ∼ 10 ps. The relaxation time of water is too short for the shear waves to be studied in this way, but they do occur in the low-frequency Raman and infrared spectra. The response of the structure of liquids to pressure can be studied by neutron scattering, and recently experiments have been done at Atomic Energy of Canada Ltd., Chalk River, on liquid D2O up to 15.6 kbar. They show that the near-neighbor intermolecular O-D and D-D distances are less spread out and at shorter distances at high pressure. Raman spectroscopy can also provide information on the structural response. It seems that the O-O distance in water decreases much less with pressure than it does in ice. Presumably, the bending of O-O-O angles tends to increase the O-O distance, and so largely compensates for the compression due to the direct effect of pressure.

  7. Compression retaining piston

    SciTech Connect

    Quaglino, A.V. Jr.

    1987-06-16

    A piston apparatus is described for maintaining compression between the piston wall and the cylinder wall, comprising the following: a generally cylindrical piston body, including a head portion defining the forward end of the body and a continuous side wall portion extending rearward from the head portion; a means for lubricating and preventing compression loss between the side wall portion and the cylinder wall, including an annular recessed area in the continuous side wall portion for receiving a quantity of fluid lubricant in fluid engagement between the wall of the recessed area and the wall of the cylinder; first and second resilient, elastomeric, heat-resistant rings positioned in grooves along the wall of the continuous side wall portion, above and below the annular recessed area, each ring engaging the cylinder wall to reduce loss of lubricant within the recessed area during operation of the piston; a first pump means for providing fluid lubricant to engine components other than the pistons; and a second pump means for providing fluid lubricant to the recessed area in the continuous side wall portion of the piston. The first and second pump means obtain lubricant from a common source, and the second pump means includes a flow line that supplies oil from a predetermined level above the level of oil provided to the first pump means, so that should the oil level to the second pump means fall below the predetermined level, the loss of oil to the recessed area in the continuous side wall portion of the piston would result in loss of compression and shutdown of the engine.

  8. Ultrasound beamforming using compressed data.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2012-05-01

    The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks and several tiles before JPEG and JPEG2000 data compression is applied, respectively. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression to produce an average error of lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8.
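
    The error metric used above can be sketched as follows (a crude uniform quantizer stands in for the JPEG/JPEG2000 codecs; the data are illustrative):

```python
import math

# SNR in dB between original channel data and a lossy reconstruction.
def snr_db(original, reconstructed):
    signal = sum(x * x for x in original)
    noise = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    return 10 * math.log10(signal / noise)

rf = [math.sin(0.1 * n) for n in range(1000)]    # toy RF channel data
step = 0.05                                      # quantization step size
quantized = [step * round(x / step) for x in rf]
print(round(snr_db(rf, quantized), 1))           # coarser steps lower the SNR
```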

  9. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.

  10. Beamforming using compressive sensing.

    PubMed

    Edelmann, Geoffrey F; Gaumond, Charles F

    2011-10-01

    Compressive sensing (CS) is compared with conventional beamforming using horizontal beamforming of at-sea, towed-array data. They are compared qualitatively using bearing time records and quantitatively using signal-to-interference ratio. Qualitatively, CS exhibits lower levels of background interference than conventional beamforming. Furthermore, bearing time records show increasing, but tolerable, levels of background interference when the number of elements is decreased. For the full array, CS generates signal-to-interference ratio of 12 dB, but conventional beamforming only 8 dB. The superiority of CS over conventional beamforming is much more pronounced with undersampling.

  11. Avalanches in Wood Compression.

    PubMed

    Mäkinen, T; Miksic, A; Ovaska, M; Alava, Mikko J

    2015-07-31

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.

  12. Avalanches in Wood Compression

    NASA Astrophysics Data System (ADS)

    Mäkinen, T.; Miksic, A.; Ovaska, M.; Alava, Mikko J.

    2015-07-01

    Wood is a multiscale material exhibiting a complex viscoplastic response. We study avalanches in small wood samples in compression. "Woodquakes" measured by acoustic emission are surprisingly similar to earthquakes and crackling noise in rocks and laboratory tests on brittle materials. Both the distributions of event energies and of waiting (silent) times follow power laws. The stress-strain response exhibits clear signatures of localization of deformation to "weak spots" or softwood layers, as identified using digital image correlation. Even though material structure-dependent localization takes place, the avalanche behavior remains scale-free.

  13. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
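
    A minimal sketch of the colormap-sorting idea (toy palette; ordering by luminance is one plausible sort key, not necessarily the authors'):

```python
# Sort a toy palette by luminance, then remap pixel indices so that
# numerically adjacent indices point to visually similar colors.
palette = [(255, 0, 0), (0, 0, 0), (250, 5, 5), (10, 10, 10)]

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

order = sorted(range(len(palette)), key=lambda i: luminance(palette[i]))
remap = {old: new for new, old in enumerate(order)}

pixels = [0, 2, 0, 2, 1, 3]          # index image into the palette
sorted_pixels = [remap[p] for p in pixels]
print(sorted_pixels)  # [2, 3, 2, 3, 0, 1] -- smaller index differences
```

    After remapping, similar colors (the two reds, the two near-blacks) sit at adjacent indices, so predictive coders see small index differences again.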

  14. Respiratory sounds compression.

    PubMed

    Yadollahi, Azadeh; Moussavi, Zahra

    2008-04-01

    Recently, with the advances in digital signal processing, compression of biomedical signals has received great attention for telemedicine applications. In this paper, an adaptive transform-coding-based method for compression of respiratory and swallowing sounds is proposed. Using special characteristics of respiratory sounds, the recorded signals are divided into stationary and nonstationary portions, and two different bit allocation methods (BAMs) are designed for each portion. The method was applied to the data of 12 subjects and its performance in terms of overall signal-to-noise ratio (SNR) was calculated at different bit rates. The performance of different quantizers was also considered, and the sensitivity of the quantizers to initial conditions was alleviated. In addition, the fuzzy clustering method was examined for classifying the signal into different numbers of clusters and for investigating the performance of the adaptive BAM as the number of classes increases. Furthermore, the effects of assigning different numbers of bits for encoding the stationary and nonstationary portions of the signal were studied. The adaptive BAM with a variable number of bits was found to improve the SNR of the fixed BAM by 5 dB. Lastly, the possibility of removing the training step for finding the parameters of the adaptive BAMs for each individual was investigated. The results indicate that it is possible to use a predefined set of BAMs for all subjects and remove the training step completely. Moreover, the method is fast enough to be implemented for real-time applications.
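As a rough, self-contained illustration of why bit allocation matters (the BAM design itself is not reproduced; the uniform quantizer and SNR definition below are standard textbook forms, not the paper's):

```python
import math

def quantize(x, bits, xmax):
    # uniform quantizer with 2**bits levels over [-xmax, xmax]
    step = 2.0 * xmax / (2 ** bits)
    return [step * round(v / step) for v in x]

def snr_db(x, xq):
    # overall signal-to-noise ratio in dB
    sig = sum(v * v for v in x)
    err = sum((a - b) ** 2 for a, b in zip(x, xq))
    return 10.0 * math.log10(sig / err)

x = [math.sin(i / 10.0) for i in range(200)]   # stand-in "sound" segment
snr4 = snr_db(x, quantize(x, 4, 1.0))
snr8 = snr_db(x, quantize(x, 8, 1.0))          # more bits, higher SNR
```

Allocating bits adaptively amounts to spending them where this SNR gain is largest.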

  15. Free compression tube. Applications

    NASA Astrophysics Data System (ADS)

    Rusu, Ioan

    2012-11-01

    During the flight of a vehicle, its propulsion energy must overcome gravity, displace air masses along the trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses deflected by the vehicle. Flight optimization, by increasing speed and reducing fuel consumption, has directed research toward aerodynamics. Vehicle shapes obtained through wind-tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy-balance studies for vehicles in flight, the author, Ioan Rusu, directed his research toward reducing the energy lost at the vehicle's impact with air masses. Compared with classical aerodynamic surfaces that reduce the impact and friction with air masses, Ioan Rusu invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of a flight vehicle, it significantly reduces the impact and friction of air masses with the vehicle's solid surfaces: the oncoming air contacts the air inside the free compression tube, so air-solid friction is largely replaced by air-to-air friction.

  16. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
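The quoted frequency relation is simple enough to state as code. This is only the bookkeeping formula from the abstract, f = r·2^(-L), not the threshold model itself:

```python
def dwt_band_frequency(r, level):
    # spatial frequency (cycles/degree) of DWT level `level` on a display
    # with visual resolution r in pixels/degree
    return r * 2.0 ** (-level)

# e.g. a 32 pixels/degree display puts level-1 coefficients at 16 cyc/deg
# and level-3 coefficients at 4 cyc/deg
```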

  17. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  18. Energy transfer in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  19. Compression of intensity interferometry signals

    NASA Astrophysics Data System (ADS)

    Ribak, Erez N.; Shulamy, Yaron

    2016-02-01

    Correlations between photon currents from separate light-collectors provide information on the shape of the source. When the light-collectors are well separated, for example in space, transmission of these currents to a central correlator is limited by band-width. We study the possibility of compression of the photon fluxes and find that traditional compression methods have a similar chance of achieving this goal compared to compressed sensing.

  20. Shock compression of precompressed deuterium

    SciTech Connect

    Armstrong, M R; Crowhurst, J C; Zaug, J M; Bastea, S; Goncharov, A F; Militzer, B

    2011-07-31

    Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 µm). We further report a fast transition in shock-wave-compressed solid deuterium that is consistent with the ramp-to-shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

  1. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

    The objective of the proposed research project was the analysis of single-point closures based on probability density functions (pdf) and characteristic functions, and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary-layer type and stagnation-point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  2. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  3. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  4. Spectroscopic insight for tablet compression.

    PubMed

    Lakio, S; Ylinärä, H; Antikainen, O; Räikkönen, H; Yliruusi, J

    2015-02-01

    The tablet compression process has been studied over the years from various perspectives. However, what exactly happens to the material during compression is still unknown. In this study a novel compression die is presented which enables real-time spectroscopic measurements during the compression of material. Both near-infrared and Raman spectroscopy probes can be attached to the die. Here its usage is demonstrated with Raman spectroscopy. Eicosane, d-glucose anhydrate, α-lactose monohydrate and xylitol were used because their compression behavior and bonding properties during compression were assumed to differ. The intensity of the Raman signal changed during compression for all of the materials; however, the intensity changes differed between materials, with the largest differences in the xylitol spectra. Some peaks disappeared at higher compression pressures, indicating that the pressure affected different bonds in the xylitol structure differently. These reversible changes are thought to relate to changes in conformation and crystal structure. In conclusion, the die was found to be a significant addition for studying the compression process in real time. It can help reveal process-induced transformations (PITs) occurring during powder compaction.

  5. HIGH-COMPRESSIVE-STRENGTH CONCRETE.

    DTIC Science & Technology

    CONCRETE, COMPRESSIVE PROPERTIES, PERFORMANCE(ENGINEERING), AGING(MATERIALS), MANUFACTURING, STRUCTURES, THERMAL PROPERTIES, CREEP, DEFORMATION, REINFORCED CONCRETE, MATHEMATICAL ANALYSIS, STRESSES, MIXTURES, TENSILE PROPERTIES

  6. RACBVHs: random-accessible compressed bounding volume hierarchies.

    PubMed

    Kim, Tae-Joon; Moon, Bochang; Kim, Duksu; Yoon, Sung-Eui

    2010-01-01

    We present a novel compressed bounding volume hierarchy (BVH) representation, random-accessible compressed bounding volume hierarchies (RACBVHs), for various applications requiring random access on BVHs of massive models. Our RACBVH representation is compact and transparently supports random access on the compressed BVHs without decompressing the whole BVH. To support random access on our compressed BVHs, we decompose a BVH into a set of clusters. Each cluster contains consecutive bounding volume (BV) nodes in the original layout of the BVH. Also, each cluster is compressed separately from other clusters and serves as an access point to the RACBVH representation. We provide the general BVH access API to transparently access our RACBVH representation. At runtime, our decompression framework is guaranteed to provide correct BV nodes without decompressing the whole BVH. Also, our method is extended to support parallel random access that can utilize the multicore CPU architecture. Our method can achieve up to a 12:1 compression ratio, and more importantly, can decompress 4.2 M BV nodes (=135 MB) per second by using a single CPU core. To highlight the benefits of our approach, we apply our method to two different applications: ray tracing and collision detection. We can improve the runtime performance by more than a factor of 4 as compared to using the uncompressed original data. This improvement is a result of the fast decompression performance and reduced data access time by selectively fetching and decompressing small regions of the compressed BVHs requested by applications.
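A hedged sketch of the cluster idea: nodes are grouped into fixed-size clusters and each cluster is compressed independently, so one node can be fetched by decompressing only its cluster. zlib and the 6-float node layout below are stand-ins chosen for illustration, not the paper's coder.

```python
import struct
import zlib

def build_racbvh(nodes, cluster_size):
    # pack consecutive nodes (6 floats each: a min/max bounding box)
    # into clusters and compress each cluster independently
    clusters = []
    for i in range(0, len(nodes), cluster_size):
        raw = b''.join(struct.pack('<6f', *n) for n in nodes[i:i + cluster_size])
        clusters.append(zlib.compress(raw))
    return clusters

def fetch_node(clusters, cluster_size, index):
    # random access: decompress only the cluster holding `index`
    raw = zlib.decompress(clusters[index // cluster_size])
    off = (index % cluster_size) * 24      # 6 floats x 4 bytes
    return struct.unpack_from('<6f', raw, off)

nodes = [(float(i),) * 6 for i in range(10)]
clusters = build_racbvh(nodes, 4)
```

Fetching node 5 touches only the second cluster; the rest stay compressed.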

  7. POLYCOMP: Efficient and configurable compression of astronomical timelines

    NASA Astrophysics Data System (ADS)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data like pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr of up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the use of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept on a portable hard drive.
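A toy version of the lossy idea polycomp builds on, under assumed details: replace a smooth chunk of samples by low-order polynomial coefficients whenever the maximum reconstruction error stays below a user-set bound (polycomp additionally applies Chebyshev transforms to the residual, which this sketch omits):

```python
import numpy as np

def compress_chunk(t, y, max_error, max_degree=5):
    # try increasing polynomial degrees until the worst-case error
    # over the chunk falls below the user-configured bound
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(t, y, deg)
        err = np.max(np.abs(np.polyval(coeffs, t) - y))
        if err <= max_error:
            return coeffs, err
    return None, None          # chunk not smooth enough; store it raw

t = np.linspace(0.0, 1.0, 200)
y = 3.0 + 2.0 * t - 0.5 * t ** 2       # smooth "pointing-like" stream
coeffs, err = compress_chunk(t, y, max_error=1e-6)
```

Here 200 samples collapse to 3 coefficients, a ratio comparable in spirit (not in detail) to the Cr values quoted above.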

  8. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    NASA Astrophysics Data System (ADS)

    Reljin, Irini; Samčović, Andreja; Reljin, Branimir

    2006-12-01

    Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit an inherent long-range dependency, that is, a fractal property. Moreover, they have high bit-rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. From the multifractal spectra of the frame-size video traces it was shown that a higher compression ratio produces broader and less regular MF spectra, indicating a stronger multifractal nature and the existence of additive components in the video traces. Considering the individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular contribution of these frames to the whole MF spectrum. Since compressed video occupies a large part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a derived MF spectrum of an observed signal it is possible to recognize and extract the parts of the signal that are characterized by particular values of the multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.

  9. Adiabatic compression and radiative compression of magnetic fields

    SciTech Connect

    Woods, C.H.

    1980-02-12

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape.
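The flux-conservation statement has a simple numeric consequence, sketched here for an assumed cylindrical compression geometry: with Φ = B·πr² held fixed, halving the radius quadruples the field.

```python
def compressed_field(b_initial, r_initial, r_final):
    # flux conservation: B1 * pi * r1**2 == B2 * pi * r2**2
    return b_initial * (r_initial / r_final) ** 2

# halving the radius of a 1 T flux region yields 4 T
```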

  10. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
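An illustrative, much-simplified sketch of the sparsification step: a one-level Haar transform on a toy 1-D elevation profile stands in for the DAUB6 filters and 2-D DEMs of the paper, and only the smallest detail coefficients are zeroed.

```python
def haar_forward(x):
    # one-level Haar transform: pairwise averages and half-differences
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def sparsify(coeffs, keep_fraction):
    # keep only the largest-magnitude fraction of coefficients
    k = max(1, int(len(coeffs) * keep_fraction))
    thresh = sorted(abs(c) for c in coeffs)[-k]
    return [c if abs(c) >= thresh else 0.0 for c in coeffs]

profile = [100.0, 100.5, 101.0, 101.2, 150.0, 150.3, 150.1, 150.4]
avg, det = haar_forward(profile)
det_sparse = sparsify(det, 0.25)       # null the small detail terms
recon = haar_inverse(avg, det_sparse)
residual = max(abs(a - b) for a, b in zip(profile, recon))
```

The zeroed coefficients introduce small elevation residuals, the quantity the paper tracks in slope, aspect, and drainage derivatives.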

  11. Compression and compression fatigue testing of composite laminates

    NASA Technical Reports Server (NTRS)

    Porter, T. R.

    1982-01-01

    The effects of moisture and temperature on the fatigue and fracture response of composite laminates under compression loads were investigated. The structural laminates studied were an intermediate-stiffness graphite-epoxy composite (a typical angle-ply laminate) and a typical fan blade laminate. Full and half penetration slits and impact delaminations were the defects examined. Results are presented which show the effects of moisture on the fracture and fatigue strength at room temperature, 394 K (250 F), and 422 K (300 F). Static test results show the effects of defect size and type on the compression-fracture strength under moisture and thermal environments. The cyclic test results compare the fatigue lives and residual compression strength under compression-only and under tension-compression fatigue loading.

  12. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into the headers and improves robustness.
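The delta-encoding idea behind Van Jacobson's scheme can be sketched as follows; the field names are a simplified subset chosen for illustration, not the RFC 1144 wire format:

```python
def compress_header(context, header):
    # send only the fields that changed, as differences from the
    # shared per-connection context, then update the context
    deltas = {k: header[k] - context[k] for k in header if header[k] != context[k]}
    context.update(header)
    return deltas

def decompress_header(context, deltas):
    # the receiver applies the deltas to its own copy of the context
    for k, d in deltas.items():
        context[k] += d
    return dict(context)

sender = {'seq': 1000, 'ack': 500, 'id': 7}
receiver = dict(sender)                 # contexts start synchronized
h2 = {'seq': 2460, 'ack': 500, 'id': 8}
d = compress_header(sender, h2)         # only 'seq' and 'id' changed
rec = decompress_header(receiver, d)
```

Losing one delta packet desynchronizes the contexts, which is exactly the loss-propagation weakness the survey discusses.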

  13. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume containing only a relatively few LUT values, from which a nearest neighbor is selected. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
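A hedged sketch of the subdivision step: split color space recursively until every volume holds at most a preselected number of LUT entries, so a nearest-neighbor search only has to examine the few entries in one volume. The split-on-widest-axis rule below is an assumption for illustration; the patent does not commit to it.

```python
def subdivide(colors, max_per_box):
    # recursively split a list of RGB tuples until each box holds
    # at most max_per_box entries
    if len(colors) <= max_per_box:
        return [colors]
    # split along the axis with the widest spread of values
    axis = max(range(3),
               key=lambda a: max(c[a] for c in colors) - min(c[a] for c in colors))
    ordered = sorted(colors, key=lambda c: c[axis])
    mid = len(ordered) // 2
    return subdivide(ordered[:mid], max_per_box) + subdivide(ordered[mid:], max_per_box)

colors = [(r, g, b) for r in (0, 128, 255) for g in (0, 255) for b in (0, 255)]
boxes = subdivide(colors, 3)
```

Every LUT entry lands in exactly one small box, which is what keeps the per-pixel nearest-neighbor search cheap.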

  14. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research on network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  15. Compressed quantum simulation

    SciTech Connect

    Kraus, B.

    2014-12-04

    Here, I summarize the results presented in B. Kraus, Phys. Rev. Lett. 107, 250503 (2011). Recently, it has been shown that certain circuits, the so-called match gate circuits, can be compressed to an exponentially smaller universal quantum computation. We use this result to demonstrate that the simulation of a 1-D Ising chain consisting of n qubits can be performed on a universal quantum computer running on only log(n) qubits. We show how the adiabatic evolution can be simulated on this exponentially smaller system and how the magnetization can be measured. Since the Ising model displays a quantum phase transition, this result implies that a quantum phase transition of a very large system can be observed with current technology.

  16. Compressive Network Analysis.

    PubMed

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-11-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research on network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets.

  17. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  18. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
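The final lossless step described above can be sketched with a zero-run-length code (a stand-in for the report's unspecified lossless coder): after thresholding, the coefficient arrays are mostly zeros, so runs compress well even before any entropy coding.

```python
def rle_zeros(coeffs):
    # encode zero runs as ('Z', count) and values as ('V', value)
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            if run:
                out.append(('Z', run))
                run = 0
            out.append(('V', c))
    if run:
        out.append(('Z', run))
    return out

def rle_decode(tokens):
    out = []
    for tag, v in tokens:
        out.extend([0] * v if tag == 'Z' else [v])
    return out

coeffs = [0, 0, 0, 2.5, 0, 0, -1.0, 0, 0, 0, 0, 3.25]
tokens = rle_zeros(coeffs)
```

With 80-95% of coefficients zeroed, the token stream is far shorter than the raw array, consistent with the compression factors quoted above.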

  19. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  20. Pressure Oscillations in Adiabatic Compression

    ERIC Educational Resources Information Center

    Stout, Roland

    2011-01-01

    After finding Moloney and McGarvey's modified adiabatic compression apparatus, I decided to insert this experiment into my physical chemistry laboratory at the last minute, replacing a problematic experiment. With insufficient time to build the apparatus, we placed a bottle between two thick textbooks and compressed it with a third textbook forced…

  1. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e., when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
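The memory discipline the abstract describes can be sketched schematically (this is not the authors' coder; the event tags are invented, and the cells below reference only three vertices for brevity rather than a hexahedron's eight):

```python
def stream_process(events):
    # keep a vertex only until the stream's finalization tag says it
    # will never be referenced again, then release it
    live = {}            # vertex id -> position, while still referenced
    peak = 0             # peak number of vertices held simultaneously
    cells = 0
    for ev in events:
        kind = ev[0]
        if kind == 'v':          # ('v', vid, position): new vertex arrives
            live[ev[1]] = ev[2]
        elif kind == 'cell':     # ('cell', vids): encode using live vertices
            cells += 1
        elif kind == 'final':    # ('final', vid): vid never appears again
            del live[ev[1]]
        peak = max(peak, len(live))
    return cells, peak

events = [('v', 0, (0, 0, 0)), ('v', 1, (1, 0, 0)), ('v', 2, (0, 1, 0)),
          ('cell', (0, 1, 2)), ('final', 0),
          ('v', 3, (1, 1, 0)), ('cell', (1, 2, 3)),
          ('final', 1), ('final', 2), ('final', 3)]
cells, peak = stream_process(events)
```

Here four vertices pass through but at most three are resident at once; on real meshes this gap is what lets the coder stay far smaller than the mesh.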

  2. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  3. Multiview image compression based on LDV scheme

    NASA Astrophysics Data System (ADS)

    Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent

    2011-03-01

    In recent years, we have seen several different approaches dealing with multiview compression. First, we can find the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between the views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed-integer disparity texture. Next, we extract the non-redundant information and store it in two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Finally, we discuss the signal deformations generated by our approach.

  4. Microbunching Instability due to Bunch Compression

    SciTech Connect

    Huang, Zhirong; Wu, Juhao; Shaftan, Timur; /Brookhaven

    2005-12-13

    Magnetic bunch compressors are designed to increase the peak current while maintaining the transverse and longitudinal emittances in order to drive a short-wavelength free electron laser (FEL). Recently, several linac-based FEL experiments have observed self-developing micro-structures in the longitudinal phase space of electron bunches undergoing strong compression [1-3]. Meanwhile, computer simulations of coherent synchrotron radiation (CSR) effects in bunch compressors illustrate that a CSR-driven microbunching instability may significantly amplify small longitudinal density and energy modulations and hence degrade the beam quality [4]. Various theoretical models have since been developed to describe this instability [5-8]. It has also been pointed out that the microbunching instability may be driven strongly by the longitudinal space charge (LSC) field [9,10] and by the linac wakefield [11] in the accelerator, leading to a very large overall gain of a two-stage compression system such as found in the Linac Coherent Light Source (LCLS) [12]. This paper reviews theory and simulations of microbunching instability due to bunch compression, the proposed methods to suppress its effects for short-wavelength FELs, and experimental characterizations of beam modulations in linear accelerators. A related topic of interest is microbunching instability in storage rings, which has been reported in the previous ICFA beam dynamics newsletter No. 35 (http://wwwbd.fnal.gov/icfabd/Newsletter35.pdf).

  5. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet-domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that incorporating this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the number of samples required to achieve good reconstruction quality. This approach also allows more control over the ECG signal reconstruction, in particular the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested on records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm achieves high compression ratios with low distortion levels relative to state-of-the-art compression algorithms.
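
    The recovery described above builds on the CoSaMP algorithm. As a rough illustration, the following is a minimal plain-CoSaMP sketch in NumPy; it omits the wavelet-subtree prior the paper adds, and the matrix sizes, sparsity `k`, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cosamp(Phi, y, k, n_iter=30):
    """Plain CoSaMP: greedily recover a k-sparse x from y = Phi @ x.

    The paper modifies the support-selection step so the chosen wavelet
    coefficients form a connected subtree; this sketch uses the standard
    (unconstrained) greedy selection instead.
    """
    n = Phi.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        # Signal proxy: the 2k columns most correlated with the residual.
        proxy = Phi.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * k:]
        # Merge with the current support and solve least squares on it.
        support = np.union1d(omega, np.flatnonzero(x))
        sol = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        b = np.zeros(n)
        b[support] = sol
        # Prune back to the k largest entries and update the residual.
        x = np.zeros(n)
        top = np.argsort(np.abs(b))[-k:]
        x[top] = b[top]
        residual = y - Phi @ x
        if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(y):
            break
    return x
```

With a Gaussian sensing matrix and enough measurements, the support is typically recovered exactly in a few iterations, after which the least-squares step makes the noiseless reconstruction exact.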

  6. Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Fichtner, Andreas; de la Puente, Josep; Hanzich, Mauricio

    2015-04-01

    We present compression techniques tailored to iterative nonlinear minimization methods that significantly reduce the memory required to store the forward wavefield for the computation of sensitivity kernels. Full-waveform inversion on 3D data sets requires massive computing and memory capabilities. Adjoint techniques offer a powerful tool to compute the first and second derivatives. However, due to the asynchronous nature of forward and adjoint simulations, a severe bottleneck is introduced by the necessity to access both wavefields simultaneously when computing sensitivity kernels. There are two opposing strategies for dealing with this challenge. On the one hand, conventional approaches save the whole forward wavefield to disk, which incurs a significant I/O overhead and might require several terabytes of storage capacity per seismic event. On the other hand, checkpointing techniques trade an almost arbitrary reduction in memory requirements for a (potentially large) number of additional forward simulations. We propose an alternative approach that strikes a balance between memory requirements and the need for additional computations. Here, we aim at compressing the forward wavefield in such a way that (1) the I/O overhead is reduced substantially without the need for additional simulations, (2) the costs of compressing/decompressing the wavefield are negligible, and (3) the approximate derivatives resulting from the compressed forward wavefield do not affect the rate of convergence of a Newton-type minimization method. To this end, we apply an adaptive re-quantization of the displacement field that uses dynamically adjusted floating-point accuracies, i.e., a locally varying number of bits, to store the data. Furthermore, the spectral element functions are adaptively downsampled to a lower polynomial degree, and a sliding-window cubic spline re-interpolates the temporal snapshots to recover a smooth signal. Moreover, a preprocessing step …
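
    The core of the re-quantization idea can be sketched in a few lines. This toy version uses a single global bit depth over the field's full range; the paper's scheme is adaptive (the bit count varies locally with the wavefield's dynamic range), so treat the function names and the fixed `bits` parameter as illustrative assumptions.

```python
import numpy as np

def quantize(field, bits):
    """Uniformly re-quantize a floating-point field to `bits` bits/sample.

    Simplified stand-in for the paper's adaptive scheme: one global
    range and bit depth instead of locally varying accuracies.
    """
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    codes = np.round((field - lo) / (hi - lo) * levels).astype(np.uint32)
    return codes, lo, hi

def dequantize(codes, lo, hi, bits):
    """Map integer codes back to floats; error is at most half a step."""
    levels = 2 ** bits - 1
    return lo + codes.astype(np.float64) / levels * (hi - lo)
```

At 12 bits per sample instead of 64, storage shrinks by more than 5x while the worst-case error stays below half a quantization step, which is the kind of controlled approximation the convergence argument above relies on.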

  7. 14. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, strut, counters, and laterals. - Dry Creek Bridge, Spanning Dry Creek at Cook Road, Ione, Amador County, CA

  8. Compressive Sensing for Quantum Imaging

    NASA Astrophysics Data System (ADS)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon-counting video is recorded at 32 x 32 pixel resolution and 14 frames per second. The second application is the use of compressive sensing for complementary imaging: simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high-resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged, including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup of N²/log N for N-dimensional entanglement over the standard raster-scanning technique …

  9. Compressed Submanifold Multifactor Analysis.

    PubMed

    Luu, Khoa; Savvides, Marios; Bui, Tien; Suen, Ching

    2016-04-14

    Although widely used, Multilinear PCA (MPCA), one of the leading multilinear analysis methods, still suffers from four major drawbacks. First, it is very sensitive to outliers and noise. Second, it is unable to cope with missing values. Third, it is computationally expensive since MPCA deals with large multi-dimensional datasets. Finally, it is unable to maintain the local geometrical structures due to the averaging process. This paper proposes a novel approach named Compressed Submanifold Multifactor Analysis (CSMA) to solve the four problems mentioned above. Our approach can deal with missing values and outliers via SVD-L1. The Random Projection method is used to obtain a fast low-rank approximation of a given multifactor dataset while preserving the geometry of the original data. Our CSMA method can be used efficiently for multiple purposes, e.g. noise and outlier removal, estimation of missing values, and biometric applications. We show that the CSMA method achieves good results and is very efficient on the inpainting problem compared to [1], [2]. Our method also achieves higher face recognition rates than LRTC, SPMA, MPCA and some other methods, i.e. PCA, LDA and LPP, on three challenging face databases, i.e. CMU-MPIE, CMU-PIE and Extended YALE-B.

  10. Advances in compressible turbulent mixing

    SciTech Connect

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretation and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed, almost exclusively, from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition, spanning the states from unstable flow onset through fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, and vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  11. Data compression applied to HHVT

    NASA Technical Reports Server (NTRS)

    Thompson, William K.

    1990-01-01

    A task order was written by the High Resolution, High Frame Rate Video Technology (HHVT) project engineers to study data compression techniques that could be applied to the HHVT system. Specifically, the goals of the HHVT data compression study are to accomplish the following: (1) Determine the downlink capabilities of the Space Shuttle and Space Station Freedom to support HHVT data (i.e., determine the maximum data rates and link availability); (2) Determine current and projected capabilities of high-speed storage media to support HHVT data by determining their maximum data acquisition/transmission rates and volumes; (3) Identify which experiments in the HHVT Users' Requirements database need data compression, based on the experiments' imaging requirements; (4) Select the best data compression technique for each of these users by identifying a technique that provides compression but minimizes distortion; and (5) Investigate state-of-the-art technologies for possible implementation of the selected data compression techniques. Data compression will be needed because of the high data rates and large volumes of data that will result from the use of digitized video onboard the Space Shuttle and Space Station Freedom.

  12. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In this paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
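
    The "few experiments, full reconstruction" step amounts to solving a sparse recovery problem. As a toy sketch (in plain Euclidean space rather than the paper's product of Hilbert spaces, and with ISTA standing in for the paper's reconstruction algorithm), each row of `A` plays the role of one physical experiment:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.

    `A` stands in for the experiment-design matrix (one row per
    conducted experiment); `x` holds coefficients in a sparsifying
    basis. Illustrative only: the paper works with a finite-element
    basis in a product of Hilbert spaces.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With 60 simulated "experiments" and a 120-coefficient signal having only 4 active modes, the sparse solution pins down the full data set the experiments never measured directly.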

  13. Context-Aware Image Compression

    PubMed Central

    Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram

    2016-01-01

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
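
    The idea of dilating information-rich regions before downsampling can be illustrated with a 1-D toy analogue. This is not the paper's photonic implementation (which uses group-velocity dispersion on temporal signals); it is a hypothetical numerical sketch in which local gradient magnitude stands in for "information" and a warp concentrates samples where the signal varies quickly:

```python
import numpy as np

def warp_downsample(signal, n_out):
    """Toy warped downsampling: sample densely where the signal changes
    fast, sparsely where it is flat (1-D stand-in for warped stretch)."""
    grad = np.abs(np.gradient(signal))
    # Keep a small floor so flat regions still receive some samples.
    density = grad + 0.05 * grad.max() + 1e-12
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])      # warp: [0,1] -> [0,1]
    # Uniform samples in the warped coordinate map to nonuniform positions.
    targets = np.linspace(0.0, 1.0, n_out)
    positions = np.interp(targets, cdf, np.arange(len(signal)))
    samples = np.interp(positions, np.arange(len(signal)), signal)
    return positions, samples

def reconstruct(positions, samples, n):
    """Decode by interpolating back onto the uniform grid."""
    return np.interp(np.arange(n), positions, samples)
```

For a signal that is flat except for one narrow feature, the same sample budget reconstructs the warped version far better than uniform downsampling, mirroring the PSNR improvement reported above.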

  14. Wearable EEG via lossless compression.

    PubMed

    Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2016-08-01

    This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on an algorithm previously reported by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
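
    The temporal-correlation part of such a scheme can be sketched with the simplest possible predictor. This is a minimal stand-in, not the authors' algorithm: it uses only a previous-sample (delta) predictor, whereas the real system also exploits spatial correlation across electrodes and entropy-codes the residuals.

```python
import numpy as np

def delta_encode(x):
    """Temporal predictor: residual = sample - previous sample.
    Residuals of a slowly varying signal are small, hence cheap to code."""
    r = np.empty_like(x)
    r[0] = x[0]
    r[1:] = x[1:] - x[:-1]
    return r

def delta_decode(r):
    """Exact (lossless) inverse of delta_encode."""
    return np.cumsum(r)
```

The round trip is exact, and for smooth biosignals the residuals have a much smaller spread than the raw samples, which is what an entropy coder then turns into a 2-3x size reduction.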

  15. Reversibility of crumpling on compressed thin sheets: reversibility of crumpling.

    PubMed

    Pocheau, Alain; Roman, Benoit

    2014-04-01

    Compressing thin sheets usually yields the formation of singularities which focus curvature and stretching on points or lines. In particular, following the common experience of crumpled paper, where a paper sheet is crushed into a paper ball, one might guess that elastic singularities should be the rule beyond some compression level. In contrast, we show here that, somewhat surprisingly, compressing a sheet between cylinders makes singularities spontaneously disappear at large compression. This "stress defocusing" phenomenon is qualitatively explained from scale invariance and further linked to a criterion based on a balance between stretching and curvature energies on defocused states. This criterion is made quantitative using the scalings relevant to sheet elasticity and compared to experiment. These results are synthesized in a phase diagram completed with plastic transitions and buckling saturation. They provide a renewed vision of elastic singularities as a thermodynamic condensed phase where stress is focused, in competition with a regular diluted phase where stress is defocused. The physical differences between the phases are emphasized by determining experimentally the mechanical response when stress is focused or defocused and by recovering the corresponding scaling laws. In this phase diagram, different compression routes may be followed by constraining differently the two principal curvatures of a sheet. As evidenced here, this may provide an efficient way of compressing a sheet that avoids plastic damage by inducing a spontaneous regularization of geometry and stress.

  16. Compressive strain measurement using RFID patch antenna sensors

    NASA Astrophysics Data System (ADS)

    Cho, Chunhee; Yi, Xiaohua; Wang, Yang; Tentzeris, Manos M.; Leon, Roberto T.

    2014-04-01

    In this research, two radiofrequency identification (RFID) antenna sensor designs are tested for compressive strain measurement. The first design is a passive (battery-free) folded patch antenna sensor with a planar dimension of 61 mm × 69 mm. The second design is a slotted patch antenna sensor, whose dimension is reduced to 48 mm × 44 mm by introducing slots in the antenna conducting layer to detour the surface current path. A three-point bending setup is fabricated to apply compression on a tapered aluminum specimen mounted with an antenna sensor. Coupled mechanics-electromagnetics simulation shows that the antenna resonance frequency shifts when each antenna sensor is under compressive strain. Extensive compression tests are conducted to verify the strain sensing performance of the two sensors. Experimental results confirm that the resonance frequency of each antenna sensor increases in an approximately linear relationship with compressive strain. The compressive strain sensing performance of the two RFID antenna sensors, including strain sensitivity and coefficient of determination, is evaluated based on the experimental data.

  17. Compressive phase-only filtering at extreme compression rates

    NASA Astrophysics Data System (ADS)

    Pastor-Calle, David; Pastuszczak, Anna; Mikołajczyk, Michał; Kotyński, Rafał

    2017-01-01

    We introduce an efficient method for reconstructing the correlation between a compressively measured image and a phase-only filter. The proposed method is based on two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces is usually sparse. Thanks to these properties, phase-only filters are perfectly compatible with the framework of compressive sensing. Moreover, the lasso-based recovery algorithm is very fast when phase-only filtering is used as the compression matrix. The proposed method can be seen as a generalization of the correlation-based pattern recognition technique, which is hereby applied directly to non-adaptively acquired compressed data. No prior knowledge of the target object for which the data will be scanned is required at the time of measurement. We show that images measured at extremely high compression rates may still contain sufficient information for target classification and localization, even when the compression rate is so high that visual recognition of the target in the reconstructed image is no longer possible. We applied the method to highly undersampled measurements obtained from a single-pixel camera, with sampling based on randomly chosen Walsh-Hadamard patterns.
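
    The two properties the method relies on are easy to see in the uncompressed case. The sketch below (illustrative, not the paper's compressive pipeline) applies a phase-only filter in the Fourier domain: the filter keeps only the spectral phase of the reference, so the transform is unitary circulant, and the correlation plane collapses to a sharp, sparse peak at the target's location.

```python
import numpy as np

def phase_only_correlation(image, reference):
    """Correlate `image` against the phase-only filter of `reference`.

    H keeps only the phase of the reference spectrum, so multiplying by
    H is a unitary circulant transform and the output plane is sparse
    (a sharp peak at the reference's position in the image).
    """
    F = np.fft.fft2(image)
    G = np.fft.fft2(reference)
    H = np.conj(G) / np.maximum(np.abs(G), 1e-12)   # phase-only filter
    return np.real(np.fft.ifft2(F * H))
```

If the image is a circularly shifted copy of the reference, the correlation plane peaks exactly at the shift, which is the sparsity that makes lasso-based recovery from compressed measurements work.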

  18. Internal roll compression system

    DOEpatents

    Anderson, Graydon E.

    1985-01-01

    This invention is a machine for squeezing water out of peat or other material of low tensile strength, the machine including an inner roll eccentrically positioned inside a tubular outer roll so as to form a gradually increasing pinch area at one point therebetween, so that, as the rolls rotate, material placed between the rolls is wrung out as it passes through the pinch area.

  19. Preprocessing of compressed digital video

    NASA Astrophysics Data System (ADS)

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.

    2000-12-01

    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.

  20. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)

  1. Imaging of venous compression syndromes

    PubMed Central

    Ganguli, Suvranu; Ghoshhajra, Brian B.; Gupta, Rajiv; Prabhakar, Anand M.

    2016-01-01

    Venous compression syndromes are a unique group of disorders characterized by anatomical extrinsic venous compression, typically in young and otherwise healthy individuals. While uncommon, they may cause serious complications including pain, swelling, deep venous thrombosis (DVT), pulmonary embolism, and post-thrombotic syndrome. The major disease entities are May-Thurner syndrome (MTS), variant iliac vein compression syndrome (IVCS), venous thoracic outlet syndrome (VTOS)/Paget-Schroetter syndrome, nutcracker syndrome (NCS), and popliteal venous compression (PVC). In this article, we review the key clinical features, multimodality imaging findings, and treatment options of these disorders. Emphasis is placed on the growing role of noninvasive imaging options such as magnetic resonance venography (MRV) in facilitating early and accurate diagnosis and tailored intervention. PMID:28123973

  2. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.

  3. Compression fractures of the back

    MedlinePlus

    Vertebral compression fractures ... the most common cause of this type of fracture. Osteoporosis is a disease in which bones become ... the spine, such as multiple myeloma Having many fractures of the vertebrae can lead to kyphosis . This ...

  4. Compressed gas fuel storage system

    DOEpatents

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  5. Shock compression of polyvinyl chloride

    NASA Astrophysics Data System (ADS)

    Neogi, Anupam; Mitra, Nilanjan

    2016-04-01

    This study presents shock-compression simulations of atactic polyvinyl chloride (PVC) using ab initio and classical molecular dynamics, and identifies the limits of applicability of classical molecular dynamics for shock-compression simulation of PVC. The mechanism of bond dissociation under shock loading and its progression are demonstrated using density functional theory based molecular dynamics simulations, and the rates of dissociation of different bonds at different shock velocities are also reported.

  6. Optical frequency comb interference profilometry using compressive sensing.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2013-08-12

    We describe a new optical system using an ultra-stable mode-locked frequency comb femtosecond laser and compressive sensing to measure an object's surface profile. The ultra-stable frequency comb laser was used to precisely measure an object with a large depth, over a wide dynamic range. The compressive sensing technique was able to obtain the spatial information of the object with two single-pixel fast photo-receivers, with no mechanical scanning and fewer measurements than the number of sampling points. An optical experiment was performed to verify the advantages of the proposed method.

  7. Expanding Window Compressed Sensing for Non-Uniform Compressible Signals

    PubMed Central

    Liu, Yu; Zhu, Xuqi; Zhang, Lin; Cho, Sung Ho

    2012-01-01

    Many practical compressible signals, like image signals or the networked data in wireless sensor networks, have a non-uniform support distribution in their sparse representation domain. Utilizing this prior information, a novel compressed sensing (CS) scheme with unequal protection capability is proposed in this paper by introducing a windowing strategy called expanding window compressed sensing (EW-CS). According to the importance of different parts of the signal, the signal is divided into several nested subsets, i.e., the expanding windows. Each window generates its own measurements using a random sensing matrix. The more significant elements are contained in more windows, so they are captured by more measurements. This design gives the EW-CS scheme a more convenient implementation and better overall recovery quality for non-uniform compressible signals than ordinary CS schemes. These advantages are theoretically analyzed and experimentally confirmed. Moreover, the EW-CS scheme is applied to the compressed acquisition of image signals and networked data, where it also outperforms ordinary CS and the existing unequal-protection CS schemes. PMID:23201984
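
    The nested-window measurement step can be sketched directly. In this hypothetical formalization (the function name, prefix-window encoding, and Gaussian matrices are my assumptions, not details from the abstract), windows are prefixes of the signal ordered most-important-first, so elements of the innermost window appear in every measurement set:

```python
import numpy as np

def ew_cs_measure(x, windows, rates, rng):
    """Expanding-window CS measurements.

    `windows` are nested prefix lengths (w1 < w2 < ... <= len(x)) of an
    importance-ordered signal; `rates` gives the measurement count per
    window. Elements in small windows are covered by every window's
    measurements, hence receive stronger protection.
    """
    ys, mats = [], []
    for w, m in zip(windows, rates):
        A = rng.standard_normal((m, w)) / np.sqrt(m)   # per-window sensing matrix
        ys.append(A @ x[:w])
        mats.append(A)
    return ys, mats
```

A decoder would then recover the innermost window from the combined measurements of all windows, and progressively larger windows from correspondingly fewer, which is where the unequal protection comes from.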

  8. 17 CFR 23.503 - Portfolio compression.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Portfolio compression. 23.503... MAJOR SWAP PARTICIPANTS Swap Documentation § 23.503 Portfolio compression. (a) Portfolio compression... participant in a timely fashion, when appropriate. (2) Bilateral compression. Each swap dealer and major...

  9. 17 CFR 23.503 - Portfolio compression.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Portfolio compression. 23.503... MAJOR SWAP PARTICIPANTS Swap Documentation § 23.503 Portfolio compression. (a) Portfolio compression... participant in a timely fashion, when appropriate. (2) Bilateral compression. Each swap dealer and major...

  10. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  11. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  12. Compressive Imaging via Approximate Message Passing

    DTIC Science & Technology

    2015-09-04

    We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that … Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction

  13. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case, image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications …
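
    Where the quantization matrix plugs in can be shown with a JPEG-style sketch. The paper's contribution is the formula producing Q from viewing distance, display resolution, and brightness; here Q is just a placeholder of ones, and the 8×8 orthonormal DCT is built explicitly:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows = frequencies, cols = positions)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, Q):
    """JPEG-style step: 2-D DCT, divide by the visually weighted matrix
    Q, round to integers. Computing a perceptually optimal Q is what
    the techniques described above provide; here Q is an assumed example."""
    C = dct_matrix()
    return np.round((C @ block @ C.T) / Q)

def dequantize_block(q, Q):
    """Inverse: rescale by Q and apply the inverse (transpose) DCT."""
    C = dct_matrix()
    return C.T @ (q * Q) @ C
```

Because the transform is orthonormal, each rounded coefficient contributes at most half a quantization step of error, so the pixel-domain error is bounded by the entries of Q, which is exactly the knob a perceptual model tunes.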

  14. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    length and the code parameter. When this difference falls outside a fixed range, the code parameter is updated (increased or decreased). The Golomb code parameter is selected based on the average magnitude of recently encoded nonzero samples. The coding method requires no floating-point operations, and more readily adapts to local statistics than other methods. The method can also accommodate arbitrarily large input values and arbitrarily long runs of zeros. In practice, this means that changes in the dynamic range or size of the input data set would not require a change to the compressor. The algorithm has been tested in computational experiments on test images. A comparison with a previously developed algorithm that uses large code tables (generated via Huffman coding on training data) suggests that the data-compression effectiveness of the present algorithm is comparable to the best performance achievable by the previously developed algorithm.
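The adaptive Golomb coding idea can be sketched as follows. This is an illustrative Golomb-Rice coder in the spirit of the report, not the authors' implementation; the `window` size and the exact adaptation rule here are assumptions (the report updates the parameter when the code length drifts out of a fixed range):

```python
def rice_encode(n, k):
    """Golomb-Rice codeword for a nonnegative integer n with parameter k:
    a unary quotient (q ones plus a terminating zero), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, "0{}b".format(k)) if k > 0 else ""
    return "1" * q + "0" + remainder

def rice_decode(bits, k):
    """Inverse of rice_encode; returns (value, number of bits consumed)."""
    q = 0
    while bits[q] == "1":
        q += 1
    i = q + 1                       # skip the terminating zero
    r = int(bits[i:i + k], 2) if k > 0 else 0
    return (q << k) | r, i + k

def adaptive_parameter(recent_nonzero, window=16):
    """Choose k so that 2**k tracks the mean magnitude of recently
    encoded nonzero samples, as the abstract describes."""
    recent = [abs(v) for v in recent_nonzero[-window:] if v != 0]
    mean = sum(recent) / len(recent) if recent else 1.0
    k = 0
    while (1 << (k + 1)) <= mean:
        k += 1
    return k
```

Because the unary quotient can grow without bound, arbitrarily large inputs need no table changes, which is the property the abstract highlights over Huffman-table approaches.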

  15. Compressive Neuropathies of the Upper Extremity: Pathophysiology, Classification, Electrodiagnostic Findings

    PubMed Central

    Tapadia, Minal; Mozaffar, Tahseen; Gupta, Ranjan

    2011-01-01

    Clinical examination and electrodiagnostic studies remain the gold standard for diagnosis of nerve injuries. Diagnosis of chronic nerve compression (CNC) injuries may be difficult in patients with confounding factors such as diabetes. The treatment of nerve entrapment ranges from medical to surgical management depending on the nerve involved and on the severity and duration of compression. Considerable insights have been made at the molecular level differentiating between nerve crush injuries and CNC injuries. While the myelin changes after CNC injury were previously thought to be a mild form of Wallerian degeneration, recent evidence points to a distinct pathophysiology involving Schwann cell mechano-sensitivity. Future areas of research include the use of Schwann cell transplantation in the treatment regimen, the correlation between demyelination and the onset of pain, and the role of Schwann cell integrins in transducing the mechanical forces involved in nerve compression injuries to Schwann cells. PMID:20223605

  16. Laser Driven, Extreme Compression Science

    NASA Astrophysics Data System (ADS)

    Eggert, Jon

    2014-03-01

    Extreme-compression science is blessed by a number of new techniques and facilities that are shattering previous experimental limitations: static pressures above 600 GPa, equation of state (EOS) experiments on pulsed-power machines, picosecond-resolved x-ray diffraction on free-electron lasers, and many new experiments on high-energy lasers. Our goals, using high-energy lasers, have been to push the limits of high pressure accessible to measurement and to bridge the gap between static- and dynamic-compression experiments by exploring off-Hugoniot states. I will review laser techniques for both shock- and ramp-compression experiments, and discuss a variety of diagnostics. I will present recent results including: impedance-matching Hugoniot experiments, absolute-Hugoniot implosive-shock radiography, coupled radiometry and velocimetry, ramp-compression EOS, and in-situ x-ray diffraction and absorption spectroscopy into the TPa regime. As the National Ignition Facility (NIF) transitions to a laser user facility for basic and applied science, we are transferring many of these techniques. The unprecedented quality and variety of diagnostics available, coupled with exquisite pulse-shaping predictability and control make the NIF a premier facility for extreme-compression experiments.

  17. Laser Driven, Extreme Compression Science

    NASA Astrophysics Data System (ADS)

    Eggert, Jon

    2013-06-01

    Extreme-compression science is blessed by a number of new techniques and facilities that are shattering previous experimental limitations: static pressures above 600 GPa, equation of state (EOS) experiments on pulsed-power machines, picosecond-resolved x-ray diffraction on free-electron lasers, and many new experiments on high-energy lasers. Our goals, using high-energy lasers, have been to push the limits of high pressure accessible to measurement and to bridge the gap between static- and dynamic-compression experiments by exploring off-Hugoniot states. I will review laser techniques for both shock- and ramp-compression experiments, and discuss a variety of diagnostics. I will present recent results including: impedance-matching Hugoniot experiments, absolute-Hugoniot implosive-shock radiography, coupled radiometry and velocimetry, ramp-compression EOS, and in-situ x-ray diffraction and absorption spectroscopy into the TPa regime. As the National Ignition Facility (NIF) transitions to a laser user facility for basic and applied science, we are transferring many of these techniques. The unprecedented quality and variety of diagnostics available, coupled with exquisite pulse-shaping predictability and control make the NIF a premier facility for extreme-compression experiments.

  18. Lossless compression of projection data from photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2016-03-01

    With many attractive attributes, photon counting detectors with many energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck problem. The higher resolution of these detectors and the need for faster acquisition additionally contribute to this issue. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, by utilizing the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms including a head phantom, compression ratios of 2.3:1-2.4:1 were achieved. The proposed predictor using zero, three, and four gradient contexts was compared to JPEG-LS and the ideal predictor (noiseless projection data). Among our proposed predictors, the three-gradient context is preferred, with a compression ratio from Golomb coding 7% higher than JPEG-LS and only 3% lower than the ideal predictor. In terms of encoder efficiency, the Golomb code with the proposed three-gradient contexts achieves higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization using quantization boundaries that limit the ratio of quantization error variance to quantum noise variance. Applying our proposed predictor with the three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 but inserted a 2.1% standard deviation of error relative to that of quantum noise in reconstructed images. From the initial
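Causal prediction with quantized-gradient contexts can be sketched as below. The median-edge-detection predictor and the two-gradient context are illustrative JPEG-LS-style stand-ins (the paper's predictor additionally blends an estimate from the other energy bins, omitted here, and its gradient thresholds are not given):

```python
import numpy as np

def predict_and_context(img, thresholds=(2, 8)):
    """Predict each pixel from its causal neighbors and assign a context
    id from quantized local gradients; returns (prediction, context,
    residual). Thresholds are illustrative, not the paper's values."""
    h, w = img.shape
    pred = np.zeros((h, w), dtype=float)
    ctx = np.zeros((h, w), dtype=int)

    def q(g):  # quantize a gradient into one of five signed bins
        s = 1 if g > 0 else (-1 if g < 0 else 0)
        mag = abs(g)
        if mag <= thresholds[0]:
            return 0
        return s * (1 if mag <= thresholds[1] else 2)

    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                 # west
            b = img[y - 1, x] if y > 0 else 0                 # north
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # northwest
            # Median edge detector: picks a or b at an edge, planar fit otherwise.
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
            ctx[y, x] = q(b - c) * 5 + q(c - a)  # two-gradient context id
    residual = img.astype(float) - pred
    return pred, ctx, residual
```

The small residuals (near zero in smooth regions) are what the Golomb coder then compresses, with a separate parameter adapted per context.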

  19. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
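The core idea, approximating a block of samples by a truncated Chebyshev series, can be sketched as follows. Block size and coefficient count here are illustrative choices, not the patent's parameters:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def chebyshev_compress(samples, n_coeff):
    """Least-squares fit of a degree-(n_coeff - 1) Chebyshev series to one
    block of samples; storing only the coefficients gives a block
    compression ratio of len(samples) / n_coeff. Lossy: the residual
    depends on how smooth the data are."""
    x = np.linspace(-1.0, 1.0, len(samples))
    return cheb.chebfit(x, samples, n_coeff - 1)

def chebyshev_decompress(coeffs, n_samples):
    """Evaluate the stored series back onto the original sample grid."""
    x = np.linspace(-1.0, 1.0, n_samples)
    return cheb.chebval(x, coeffs)
```

For smooth telemetry-like data, Chebyshev coefficients decay rapidly, so a 64-sample block reduced to 8 coefficients (8:1) can reconstruct well within instrument noise.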

  20. Premixed autoignition in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  1. Compressive Sensing with Optical Chaos

    NASA Astrophysics Data System (ADS)

    Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D. S.

    2016-12-01

    Compressive sensing (CS) is a technique to sample a sparse signal below the Nyquist-Shannon limit while still enabling its reconstruction. As such, CS permits an extremely parsimonious way to store and transmit large and important classes of signals and images that would be far more data intensive should they be sampled following the prescription of the Nyquist-Shannon theorem. CS has found applications as diverse as seismology and biomedical imaging. In this work, we use actual optical signals generated from temporal intensity chaos from external-cavity semiconductor lasers (ECSL) to construct the sensing matrix that is employed to compress a sparse signal. Because the chaotic time series produced have their relevant dynamics on the 100 ps timescale, our results open the way to ultrahigh-speed compression of sparse signals.
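The sensing-matrix idea can be sketched numerically. Here a logistic map stands in for the laser's intensity chaos (the actual ECSL dynamics are not modeled), and orthogonal matching pursuit is used as a standard greedy reconstruction; the paper does not prescribe this particular solver:

```python
import numpy as np

def logistic_chaos(n, x0=0.37, r=3.99):
    """Deterministic chaotic sequence (logistic map), a stand-in for the
    optical intensity chaos of an external-cavity semiconductor laser."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual = y.copy()
    support = []
    sol = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x = np.zeros(A.shape[1])
    x[support] = sol
    return x

# Sensing matrix built from one long chaotic waveform (m < n measurements).
n, m, k = 128, 64, 3
wave = logistic_chaos(m * n)
A = wave.reshape(m, n) - wave.mean()
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns

x_true = np.zeros(n)
x_true[[5, 40, 90]] = [1.5, -2.0, 0.8]         # 3-sparse signal
y = A @ x_true                                 # m compressed measurements
x_hat = omp(A, y, k)
```

The point of the paper is that the rows of `A` can be generated physically, at 100 ps timescales, rather than computed, so the compression step runs at optical speeds.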

  2. Compressibility of zinc sulfide nanoparticles

    SciTech Connect

    Gilbert, B.; Zhang, H.; Chen, B.; Banfield, J. F.; Kunz, M.; Huang, F.

    2006-09-15

    We describe a high-pressure x-ray diffraction (XRD) study of the compressibility of several samples of ZnS nanoparticles. The nanoparticles were synthesized with a range of sizes and surface chemical treatments in order to identify the factors that determine nanoparticle compressibility. Refinement of the XRD data revealed that all ZnS nanoparticles in the nominally cubic (sphalerite) phase exhibited a previously unobserved structural distortion under ambient conditions that, in addition, depended on pressure. Our results show that the compressibility of ZnS nanoparticles increases substantially as the particle size decreases, and we propose an interpretation based upon the available mechanisms of structural compliance in nanoscale vs bulk materials.

  3. Compressive behavior of fine sand.

    SciTech Connect

    Martin, Bradley E.; Kabir, Md. E.; Song, Bo; Chen, Wayne

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  4. Millimeter-wave compressive holography.

    PubMed

    Cull, Christy Fernandez; Wikner, David A; Mait, Joseph N; Mattheiss, Michael; Brady, David J

    2010-07-01

    We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single pixel incoherent receiver. Two approaches for compressive measurement are undertaken: nonlinear inversion of a 2D Gabor hologram for 3D object estimation and nonlinear inversion of a randomly subsampled Gabor hologram for 3D object estimation. The object estimation algorithm minimizes a convex quadratic problem using total variation (TV) regularization for 3D object estimation. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.

  5. Compressive Sensing with Optical Chaos

    PubMed Central

    Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D. S.

    2016-01-01

    Compressive sensing (CS) is a technique to sample a sparse signal below the Nyquist-Shannon limit while still enabling its reconstruction. As such, CS permits an extremely parsimonious way to store and transmit large and important classes of signals and images that would be far more data intensive should they be sampled following the prescription of the Nyquist-Shannon theorem. CS has found applications as diverse as seismology and biomedical imaging. In this work, we use actual optical signals generated from temporal intensity chaos from external-cavity semiconductor lasers (ECSL) to construct the sensing matrix that is employed to compress a sparse signal. Because the chaotic time series produced have their relevant dynamics on the 100 ps timescale, our results open the way to ultrahigh-speed compression of sparse signals. PMID:27910863

  6. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    PubMed

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  7. Stress relaxation in vanadium under shock and shockless dynamic compression

    SciTech Connect

    Kanel, G. I.; Razorenov, S. V.; Garkushin, G. V.; Savinykh, A. S.; Zaretsky, E. B.

    2015-07-28

    The evolution of elastic-plastic waves has been recorded in three series of plate impact experiments with annealed vanadium samples under conditions of shockless and combined ramp and shock dynamic compression. The shaping of incident wave profiles was realized using intermediate base plates made of different silicate glasses through which the compression waves entered the samples. Measurements of the free surface velocity histories revealed an apparent growth of the Hugoniot elastic limit with decreasing average rate of compression. The growth is explained by “freezing” of the elastic precursor decay in the area of interaction of the incident and reflected waves. The data obtained show that the current values of the Hugoniot elastic limit and the plastic strain rate are associated with the rate of elastic precursor decay rather than with the local rate of compression. The study has also revealed the contribution of dislocation multiplication in the elastic waves. It has been shown that, independently of the compression history, the material arrives at the minimum point between the elastic and plastic waves with the same density of mobile dislocations.

  8. Compressive residual strength of graphite/epoxy laminates after impact

    NASA Technical Reports Server (NTRS)

    Guy, Teresa A.; Lagace, Paul A.

    1992-01-01

    The issue of damage tolerance after impact, in terms of the compressive residual strength, was experimentally examined in graphite/epoxy laminates using Hercules AS4/3501-6 in a (±45/0)_2S configuration. Three different impactor masses were used at various velocities and the resultant damage measured via a number of nondestructive and destructive techniques. Specimens were then tested to failure under uniaxial compression. The results clearly show that a minimum compressive residual strength exists which is below the open hole strength for a hole of the same diameter as the impactor. Increases in velocity beyond the point of minimum strength change the damage produced and cause a resultant increase in the compressive residual strength, which asymptotes to the open hole strength value. Furthermore, the results show that this minimum compressive residual strength value is independent of the impactor mass used and is only dependent upon the damage present in the impacted specimen, which is the same for the three impactor mass cases. A full 3-D representation of the damage is obtained through the various techniques. Only this 3-D representation can properly characterize the damage state that causes the resultant residual strength. Assessment of the state-of-the-art in predictive analysis capabilities shows a need to further develop techniques based on the 3-D damage state that exists. In addition, the need for damage 'metrics' is clearly indicated.

  9. Distributed sensor data compression algorithm

    NASA Astrophysics Data System (ADS)

    Ambrose, Barry; Lin, Freddie

    2006-04-01

    Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
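The side-information idea, sending Y ambiguously and letting the decoder resolve the ambiguity with X, can be shown with a toy coset (binning) scheme. This illustrates the principle only; the Effros-Zhao code uses a designed partition tree rather than the fixed modular bins assumed here:

```python
def encode_with_side_info(y, num_bins=4):
    """Transmit only the coset (bin) index of Y: log2(num_bins) bits
    instead of the full value."""
    return y % num_bins

def decode_with_side_info(bin_index, x, num_bins=4):
    """Resolve the ambiguity using the correlated reading X: pick the
    member of Y's coset closest to x. Correct whenever the sensors
    agree to within num_bins / 2."""
    base = x - (x % num_bins) + bin_index
    candidates = (base - num_bins, base, base + num_bins)
    return min(candidates, key=lambda v: abs(v - x))
```

With 4 bins, the Y sensor sends 2 bits per integer reading regardless of its magnitude, which is where the 20-30% transport savings over independent compression comes from when readings are tightly correlated.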

  10. Flux Compression Magnetic Nozzle

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Schafer, Charles (Technical Monitor)

    2001-01-01

    In pulsed fusion propulsion schemes in which the fusion energy creates a radially expanding plasma, a magnetic nozzle is required to redirect the radially diverging flow of the expanding fusion plasma into a rearward axial flow, thereby producing a forward axial impulse to the vehicle. In a highly electrically conducting plasma, the presence of a magnetic field B in the plasma creates a pressure B^2/(2μ) in the plasma, the magnetic pressure. A gradient in the magnetic pressure can be used to decelerate the plasma traveling in the direction of increasing magnetic field, or to accelerate a plasma from rest in the direction of decreasing magnetic pressure. In principle, ignoring dissipative processes, it is possible to design magnetic configurations to produce an 'elastic' deflection of a plasma beam. In particular, it is conceivable that, by an appropriate arrangement of a set of coils, a good approximation to a parabolic 'magnetic mirror' may be formed, such that a beam of charged particles emanating from the focal point of the parabolic mirror would be reflected by the mirror to travel axially away from the mirror. The degree to which this may be accomplished depends on the degree of control one has over the flux surface of the magnetic field, which changes as a result of its interaction with a moving plasma.
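The magnetic pressure formula quoted above is straightforward to evaluate (taking μ as the vacuum permeability μ0):

```python
import math

MU_0 = 4.0 * math.pi * 1e-7   # vacuum permeability, H/m

def magnetic_pressure(b_tesla):
    """Magnetic pressure p = B^2 / (2 * mu_0), in pascals."""
    return b_tesla ** 2 / (2.0 * MU_0)
```

For scale, a 1 T field corresponds to roughly 4 x 10^5 Pa (about four atmospheres), and the pressure grows as B^2, so a 10 T field exerts a hundred times that.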

  11. Compressed sensing for phase retrieval.

    PubMed

    Newton, Marcus C

    2012-05-01

    To date there are several iterative techniques that enjoy moderate success when reconstructing phase information, where only intensity measurements are made. There remain, however, a number of cases in which conventional approaches are unsuccessful. In the last decade, the theory of compressed sensing has emerged and provides a route to solving convex optimisation problems exactly via ℓ1-norm minimization. Here the application of compressed sensing to phase retrieval in a nonconvex setting is reported. An algorithm is presented that applies reweighted ℓ1-norm minimization to yield accurate reconstruction where conventional methods fail.

  12. Gravitational compression of colloidal gels

    NASA Astrophysics Data System (ADS)

    Liétor-Santos, J. J.; Kim, C.; Lu, P. J.; Fernández-Nieves, A.; Weitz, D. A.

    2009-02-01

    We study the compression of depletion gels under the influence of a gravitational stress by monitoring the time evolution of the gel interface and the local volume fraction, φ, inside the gel. We find φ is not constant throughout the gel. Instead, there is a volume fraction gradient that develops and grows along the gel height as the compression process proceeds. Our results are correctly described by a non-linear poroelastic model that explicitly incorporates the φ-dependence of the gravitational, elastic and viscous stresses acting on the gel.

  13. [Vascular compression of the duodenum].

    PubMed

    Acosta, B; Guachalla, G; Martínez, C; Felce, S; Ledezma, G

    1991-01-01

    Acute vascular compression of the duodenum is a well-recognized clinical entity, characterized by recurrent vomiting, abdominal distension, weight loss, and postprandial distress. The compression is considered to be produced by the angle formed between the superior mesenteric vessels (or, at times, one of their first two branches) and the vertebrae and paravertebral muscles; the syndrome can be seen when the angle between the superior mesenteric vessels and the aorta is less than 18 degrees. Duodenojejunostomy is the best treatment, as it was in our patient.

  14. Structured illumination temporal compressive microscopy

    PubMed Central

    Yuan, Xin; Pang, Shuo

    2016-01-01

    We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A 2-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. PMID:27231586

  15. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering usable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes to assure the recovery of sterile potable water from urine and treated urinal flush water.

  16. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
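Delta and double delta coding exploit exactly the neighbor correlation described above: differencing correlated samples concentrates values near zero, which a subsequent entropy coder (omitted here) compresses well. A minimal sketch:

```python
def delta_encode(samples):
    """First differences: small values when adjacent picture elements
    are correlated. The first sample is kept as-is."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Invert delta_encode by running accumulation."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

def double_delta_encode(samples):
    """Differences of differences ('double delta'): near zero wherever
    the scan line varies linearly."""
    return delta_encode(delta_encode(samples))

def double_delta_decode(dd):
    return delta_decode(delta_decode(dd))
```

On a linearly varying run of pixels, double delta coding produces all zeros after the first two values, which is what makes a background skipping technique so effective on top of it.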

  17. Fast, efficient lossless data compression

    NASA Technical Reports Server (NTRS)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  18. Compressing the Inert Doublet Model

    SciTech Connect

    Blinov, Nikita; Kozaczuk, Jonathan; Morrissey, David E.; de la Puente, Alejandro

    2016-02-16

    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This symmetry also stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  19. [Utilization of compressed Chinese fir thinning wood].

    PubMed

    Chen, Ruiying; Wei, Ping; Liu, Jinghong

    2005-12-01

    With Chinese fir thinnings as the raw material, an optimized compression technique was established by measuring the physical-mechanical indices of the compressed wood, observing the variation of its microstructure, and using IR analysis. The technique was: compression ratio 50%-60%, thickness after compression 20 mm, moisture content before compression 50%, compressing time 20-30 minutes, and hot compressing temperature 180-200 degrees C. CH, an environmentally friendly cooking additive, had positive effects on softening the wood. During compressing, the cells of the fast-growing Chinese fir were only extruded and their cavities became smaller, while the cell walls were not destroyed. The thickness reversion ratio of the compressed wood was 2.68%, and its size stability and mechanical quality were as good as those of hardwood (Betula luminifera).

  20. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  1. Finite scale equations for compressible fluid flow

    SciTech Connect

    Margolin, Len G

    2008-01-01

    Finite-scale equations (FSE) describe the evolution of finite volumes of fluid over time. We discuss the FSE for a one-dimensional compressible fluid, whose every point is governed by the Navier-Stokes equations. The FSE contain new momentum and internal energy transport terms. These are similar to terms added in numerical simulations of high-speed flows (e.g., artificial viscosity) and of turbulent flows (e.g., subgrid-scale models). These similarities suggest that the FSE may provide new insight as a basis for computational fluid dynamics. Our analysis of the FS continuity equation leads to a physical interpretation of the new transport terms, and indicates the need to carefully distinguish between volume-averaged and mass-averaged velocities in numerical simulation. We make preliminary connections to other recent work reformulating the Navier-Stokes equations.

  2. Buckling of cylindrical panels under axial compression

    NASA Technical Reports Server (NTRS)

    Sobel, L. H.; Weller, T.; Agarwal, B. L.

    1976-01-01

    This paper investigates the effects of boundary conditions and panel width on the axially compressive buckling behavior of unstiffened, isotropic, circular cylindrical panels. Numerical results are presented for eight different sets of boundary conditions along the straight edges of the panels. For all sets of boundary conditions except one (SS1), the results show that the panel buckling loads monotonically approach the complete cylinder buckling load from above as the panel width is increased. Low buckling loads, sometimes less than half the complete cylinder buckling load, are found for simply supported panels with free in-plane edge displacements (SS1). It is observed that the prevention of circumferential edge displacement is the most important in-plane boundary condition from the point of view of increasing the buckling load; and that the prevention of edge rotation in the circumferential direction also significantly increases the buckling load.

  3. Reconstruction of carbon atoms around a point defect of a graphene: a hybrid quantum/classical molecular-dynamics simulation.

    PubMed

    Kowaki, Y; Harada, A; Shimojo, F; Hoshino, K

    2009-02-11

    We have investigated the rearrangement of carbon atoms around a point defect in graphene using a hybrid ab initio/classical molecular-dynamics (MD) simulation method, in which 36 carbon atoms surrounding a point defect are treated by the ab initio MD method and the other 475 carbon atoms, relatively far from the point defect, are treated by the classical MD method. We have confirmed the formation of a 5-1DB defect (a pentagon and a dangling bond) from the time dependence of atomic configurations and electron density distributions obtained by our simulation. We have found that the pentagon forms in two different positions around the point defect, and that the two positions appear alternately during the simulation, with a frequency that increases with increasing temperature.

  4. Wavelet and wavelet packet compression of electrocardiograms.

    PubMed

    Hilton, M L

    1997-05-01

    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
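The full EZW coder is intricate, but the energy-compaction principle these wavelet coders rely on can be sketched with one level of the orthonormal Haar transform; the function names and the simple coefficient thresholding below are illustrative, not the paper's algorithm.

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform: the averages carry
    most of the energy of a smooth signal, the differences carry little."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def threshold_details(x, frac=0.75):
    """Zero the smallest `frac` of the detail coefficients (a crude
    stand-in for zerotree significance coding), then reconstruct."""
    a, d = haar_fwd(x)
    cut = np.quantile(np.abs(d), frac)
    return haar_inv(a, np.where(np.abs(d) < cut, 0.0, d))
```

For a smooth signal almost all of the energy lands in the averages, which is why discarding most detail coefficients barely perturbs the reconstruction.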

  5. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  6. Analysis-preserving video microscopy compression via correlation and mathematical morphology.

    PubMed

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D; O'Brien, E Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M

    2015-12-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG 2000, and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes.

  7. Effects of Reduced Compression in Digital Breast Tomosynthesis on Pain, Anxiety, and Image Quality

    PubMed Central

    Abdullah Suhaimi, Siti Aishah; Mohamed, Afifah; Ahmad, Mahadir; Chelliah, Kanaga Kumari

    2015-01-01

    Background Most women are reluctant to undergo breast cancer screening due to the pain and anxiety they experience. Sectional three-dimensional (3-D) breast tomosynthesis was introduced to improve cancer detection, but breast compression is still used for the acquisition of images. This study was conducted to investigate the effects of reduced compression force on pain, anxiety and image quality in digital breast tomosynthesis (DBT). Methods A total of 130 women underwent screening mammography using convenience sampling with standard and reduced compression force at the breast clinic. A validated questionnaire of 20 items on the state anxiety level and a 4-point verbal rating scale on the pain level were administered after the mammography. Craniocaudal (CC) and mediolateral oblique (MLO) projections were performed with standard compression, but only the CC view was performed with reduced compression. Two independent radiologists evaluated the images using image criteria scores (ICS) and the Breast Imaging-Reporting and Data System (BI-RADS). Results Standard compression exhibited significantly increased scores for pain and anxiety levels compared with reduced compression (P < 0.001). Both radiologists scored the standard and reduced compression images as equal, with scores of 87.5% and 92.5% for ICS and BI-RADS scoring, respectively. Conclusions Reduced compression force in DBT reduces anxiety and pain levels without compromising image quality. PMID:28223884

  8. Culture: Copying, Compression, and Conventionality

    ERIC Educational Resources Information Center

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, 2008; Smith, Tamariz, & Kirby, 2013). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning…

  9. [Use of elastic compression stockings].

    PubMed

    Kallestrup, Lisbeth; Søgaard, Tine; Schjødt, Inge; Grove, Erik Lerkevang

    2014-08-04

    Post-thrombotic syndrome (PTS) is caused by venous insufficiency and is a frequent complication of deep venous thrombosis. Patients with PTS have reduced quality of life and an increased risk of recurrent deep venous thrombosis. Importantly, the risk of PTS is halved by the use of elastic compression stockings. This review outlines important practical aspects related to correct clinical use of these stockings.

  10. Device Assists Cardiac Chest Compression

    NASA Technical Reports Server (NTRS)

    Eichstadt, Frank T.

    1995-01-01

    Portable device facilitates effective and prolonged cardiac resuscitation by chest compression. Developed originally for use in absence of gravitation, also useful in terrestrial environments and situations (confined spaces, water rescue, medical transport) not conducive to standard manual cardiopulmonary resuscitation (CPR) techniques.

  11. COMPRESSED DEHYDRATED SUBSISTENCE, GREAT BRITAIN

    DTIC Science & Technology

    compress their dried foods. With the exception of the broad beans, unfamiliar to the U. S. diet, and the rutabagas , not common in the general U. S. diet, the items could be incorporated into U. S. rations with fair to good acceptability.

  12. Teaching Time-Space Compression

    ERIC Educational Resources Information Center

    Warf, Barney

    2011-01-01

    Time-space compression shows students that geographies are plastic, mutable and forever changing. This paper justifies the need to teach this topic, which is rarely found in undergraduate course syllabi. It addresses the impacts of transportation and communications technologies to explicate its dynamics. In summarizing various conceptual…

  13. COMPRESSIBLE FLOW, ENTRAINMENT, AND MEGAPLUME

    EPA Science Inventory

    It is generally believed that low Mach number, i.e., low-velocity, flow may be assumed to be incompressible flow. Under steady-state conditions, an exact equation of continuity may then be used to show that such flow is non-divergent. However, a rigorous, compressible fluid-dynam...

  14. Compressive passive millimeter wave imager

    DOEpatents

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.

  15. Maxwell's demon and data compression

    NASA Astrophysics Data System (ADS)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2011-12-01

    In an asymmetric Szilard engine model of Maxwell's demon, we show the equivalence between information theoretical and thermodynamic entropies when the demon erases information optimally. The work gain by the engine can be exactly canceled out by the work necessary to reset the demon's memory after optimal data compression in the manner of Shannon before the erasure.
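The "optimal data compression in the manner of Shannon" invoked here bounds the length of the demon's compressed record by the entropy of the measurement outcomes; a minimal sketch of the empirical per-symbol entropy (the function name is illustrative):

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(record):
    """Empirical Shannon entropy: the minimum average number of bits per
    symbol to which the record can be losslessly compressed."""
    counts = Counter(record)
    n = len(record)
    return -sum(c / n * log2(c / n) for c in counts.values())
```

A fair-coin record compresses to 1 bit per symbol, while a biased or constant record compresses below that, which is what makes the optimal-erasure bookkeeping in the engine come out exactly even.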

  16. Force balancing in mammographic compression

    SciTech Connect

    Branderhorst, W. Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-15

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  17. TuckerCompressMPI v. 1.0

    SciTech Connect

    Austin, Woody; Klinvex, Alicia; Ballard, Grey; Kolda, Tamara G.

    2016-09-21

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. This software provides a method for compressing large-scale multiway data.
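The distributed-memory implementation is the contribution here; the underlying Tucker compression itself can be sketched in serial form with the truncated higher-order SVD, a standard way to compute a Tucker decomposition (function names are illustrative):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_compress(T, ranks):
    """Truncated HOSVD: the leading left singular vectors of each
    unfolding give the factor matrices; projecting T onto them gives
    the small core tensor."""
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    core = T
    for mode, u in enumerate(U):
        core = mode_multiply(core, u.T, mode)
    return core, U

def hosvd_reconstruct(core, U):
    T = core
    for mode, u in enumerate(U):
        T = mode_multiply(T, u, mode)
    return T
```

The compressed representation is the core plus the thin factor matrices; its size relative to the full tensor shrinks rapidly as the grid dimensions grow, which is where the large compression ratios quoted above come from.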

  18. A Case of a Paracardial Osteophyte Causing Atrial Compression

    PubMed Central

    Papadopoulos, Christodoulos; Vassilikos, Vassilios

    2016-01-01

    Osteophytes are pointed or beaked osseous outgrowths at the margins of articular surfaces that are often associated with degenerative changes of articular cartilage. They are the most common aspect of osteoarthritis and they infrequently cause symptoms by compression of the adjacent anatomic structures, such as nerves, vessels, bronchi, and esophagus. We present here a rare case of a patient with a left atrial deformation by a large osteophyte. PMID:28119739

  19. Software documentation for compression-machine cavity control

    SciTech Connect

    Floersch, R.H.

    1981-04-01

    A new system design using closed-loop control of the hydraulic system of compression transfer presses used to make filled elastomer parts will result in improved accuracy and repeatability of speed and pressure control during the critical forming stages before part cure. The new design uses a microprocessor to supply set points and timing functions to the control system. Presented are the hardware and software architecture and objectives for the microprocessor portion of the control system.

  20. Infraspinatus muscle atrophy from suprascapular nerve compression.

    PubMed

    Cordova, Christopher B; Owens, Brett D

    2014-02-01

    Muscle weakness without pain may signal a nerve compression injury. Because these injuries should be identified and treated early to prevent permanent muscle weakness and atrophy, providers should consider suprascapular nerve compression in patients with shoulder muscle weakness.

  1. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    SciTech Connect

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best-performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than at slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  2. A sensor node lossless compression algorithm for non-slowly varying data based on DMD transform

    NASA Astrophysics Data System (ADS)

    Ren, Xuejun; Liu, Jianping

    2013-03-01

    Efficient utilization of energy is a core area of research in wireless sensor networks. Data compression methods that reduce the number of bits to be transmitted by the communication module will significantly reduce the energy requirement and increase the lifetime of the sensor node. Based on the lifting-scheme 2-point discrete cosine transform (DCT), this paper proposes a new reversible recursive algorithm named the Difference-Median-Difference (DMD) transform for lossless data compression in sensor nodes. The DMD transform can significantly reduce the spatio-temporal correlations among sensor data and can run smoothly in resource-limited sensor nodes. Through an entropy encoder, the results of the DMD transform can be compressed more compactly based on their statistical characteristics. Compared with typical lossless algorithms, the proposed algorithm achieved better compression ratios for non-slowly-varying data while requiring less computational effort.
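The abstract does not spell out the DMD recursion; the reversible integer lifting step that 2-point transforms of this kind are built on (sometimes called the S-transform) can be sketched as follows, with illustrative names:

```python
def forward_lift(x0, x1):
    """Reversible integer 2-point lifting step: the (average, difference)
    pair can be inverted exactly, so no information is lost."""
    d = x1 - x0            # detail (difference)
    a = x0 + (d >> 1)      # integer approximation (truncated average)
    return a, d

def inverse_lift(a, d):
    """Exact inverse of forward_lift."""
    x0 = a - (d >> 1)
    x1 = x0 + d
    return x0, x1

def transform_pairs(samples):
    """Apply the lifting step to consecutive sample pairs; the detail
    stream is small for correlated sensor data and entropy-codes compactly."""
    out = []
    for i in range(0, len(samples) - 1, 2):
        out.extend(forward_lift(samples[i], samples[i + 1]))
    return out
```

Because the step uses only integer additions and shifts, it fits the resource-limited sensor nodes the paper targets.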

  3. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images, which removes the effect of noise on the compression process, is presented. One type of multispectral image is considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than traditional compression-only techniques.

  4. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
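A minimal sketch of the referential idea, encoding a target as (offset, length) matches against the reference plus literal fallbacks; FRESCO itself uses indexed lookups and far more engineering, and the names here are illustrative:

```python
def ref_compress(target, reference, min_match=4):
    """Greedy referential encoder: emit ('match', offset, length) ops
    against the reference, falling back to literal characters."""
    ops, i = [], 0
    while i < len(target):
        best_len, best_off = 0, -1
        # naive O(n*m) scan; a real encoder indexes the reference
        for off in range(len(reference)):
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, off
        if best_len >= min_match:
            ops.append(("match", best_off, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

def ref_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == "match":
            out.append(reference[op[1]:op[1] + op[2]])
        else:
            out.append(op[1])
    return "".join(out)
```

For highly similar sequences almost the entire target collapses into a handful of long match ops, which is the source of the extreme ratios quoted above.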

  5. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
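The three IQ metrics named above can be sketched directly; for brevity the SSIM here uses a single global window rather than the usual sliding-window average of local values:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((x.astype(float) - y.astype(float)) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = rmse(x, y)
    return float("inf") if e == 0 else float(20 * np.log10(peak / e))

def ssim_global(x, y, peak=255.0):
    """SSIM computed over one global window -- a simplification of the
    standard sliding-window mean of local SSIM values."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

Evaluating these metrics over a sweep of compression parameters produces the IQ-versus-parameter curves that the regression models in the paper are fitted to.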

  6. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.

  7. An improved lossless group compression algorithm for seismic data in SEG-Y and MiniSEED file formats

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2017-03-01

    An improved lossless group compression algorithm is proposed for decreasing the size of SEG-Y files to relieve the enormous burden associated with the transmission and storage of large amounts of seismic exploration data. Because each data point is represented by 4 bytes in SEG-Y files, the file is broken down into 4 subgroups, and the Gini coefficient is employed to analyze the distribution of the overall data and of each of the 4 data subgroups within the range [0, 255]. The results show that each subgroup exhibits a characteristic frequency distribution suited to a distinct compression algorithm. Therefore, the data of each subgroup were compressed using the best-suited algorithm. After comparing the compression ratios obtained for each data subgroup using different algorithms, the Lempel-Ziv-Markov chain algorithm (LZMA) was selected for the compression of the first two subgroups and the Deflate algorithm for the latter two subgroups. The compression ratios and decompression times obtained with the improved algorithm were compared with those obtained with commonly employed compression algorithms for SEG-Y files of different sizes. The experimental results show that the improved algorithm provides a compression ratio of 75-80%, which is more effective than compression algorithms presently applied to SEG-Y files. In addition, the proposed algorithm was applied to the miniSEED format used in natural earthquake monitoring, and the results were compared with those obtained using the Steim2 compression algorithm; the comparison again shows that the proposed algorithm provides better data compression.
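The subgroup scheme can be sketched as splitting each 4-byte sample into byte planes and compressing each plane with the algorithm the authors found best suited: LZMA for the two high-order planes and Deflate (via zlib) for the two low-order planes. The helper names are illustrative:

```python
import lzma
import struct
import zlib

def compress_groups(samples):
    """Pack samples as 4-byte big-endian ints, split into four byte
    planes, and compress each plane: LZMA for the two high-order
    planes, Deflate (zlib) for the two low-order planes."""
    raw = b"".join(struct.pack(">i", s) for s in samples)
    planes = [raw[k::4] for k in range(4)]
    return [lzma.compress(planes[0]), lzma.compress(planes[1]),
            zlib.compress(planes[2], 9), zlib.compress(planes[3], 9)]

def decompress_groups(blobs, n):
    """Invert compress_groups for n samples (lossless round trip)."""
    planes = [lzma.decompress(blobs[0]), lzma.decompress(blobs[1]),
              zlib.decompress(blobs[2]), zlib.decompress(blobs[3])]
    raw = bytes(b for i in range(n)
                for b in (planes[0][i], planes[1][i], planes[2][i], planes[3][i]))
    return [struct.unpack(">i", raw[4 * i:4 * i + 4])[0] for i in range(n)]
```

Grouping pays off because the high-order planes of seismic samples are highly repetitive (well suited to LZMA's long-range modeling), while the low-order planes are noisier and better matched to the cheaper Deflate.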

  8. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized...

  9. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized...

  10. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Compressed gases. 147.60 Section 147.60 Shipping COAST... Other Special Requirements for Particular Materials § 147.60 Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized...

  11. Digital pulse compression with low range sidelobes

    NASA Astrophysics Data System (ADS)

    Larvor, J. P.

    A definition of pulse compression performance is introduced and the pulse compression filter synthesis is explained. The evaluation of the real performance of a pulse compression system is described, taking into account the contribution and imperfections of each analog device of the transmitting and receiving channels. A realization example is given.

  12. General-Purpose Compression for Efficient Retrieval.

    ERIC Educational Resources Information Center

    Cannane, Adam; Williams, Hugh E.

    2001-01-01

    Discusses compression of databases that reduces space requirements and retrieval times; considers compression of documents in text databases based on semistatic modeling with words; and proposes a scheme for general purpose compression that can be applied to all types of data stored in large collections. (Author/LRW)

  13. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  14. Progressive Transmission and Compression of Images

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1996-01-01

    We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.

  15. Point set registration: coherent point drift.

    PubMed

    Myronenko, Andriy; Song, Xubo

    2010-12-01

    Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
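    As a rough illustration of the rigid case described above — GMM centroids fitted to the data by EM, with a closed-form rotation in the M-step — here is a toy 2-D sketch. This is our simplification, not the published CPD algorithm (which additionally models outliers, updates the variance, and generalizes to arbitrary dimensions).

```python
import math

def rigid_em_align(X, Y, sigma2=0.1, iters=50):
    """Toy EM alignment of 2-D point sets in the spirit of CPD:
    points in Y act as GMM centroids fitted to the data X; each
    M-step solves a weighted rigid Procrustes problem."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = math.cos(theta), math.sin(theta)
        TY = [(c * ax - s * ay + tx, s * ax + c * ay + ty) for ax, ay in Y]
        # E-step: soft correspondences (responsibilities) per data point.
        P = []
        for px, py in X:
            w = [math.exp(-((px - qx) ** 2 + (py - qy) ** 2) / (2 * sigma2))
                 for qx, qy in TY]
            tot = sum(w) or 1e-300
            P.append([wi / tot for wi in w])
        # M-step: weighted centroids, then the optimal 2-D rotation.
        N, M = len(X), len(Y)
        W = sum(P[n][m] for n in range(N) for m in range(M))
        mxx = sum(P[n][m] * X[n][0] for n in range(N) for m in range(M)) / W
        mxy = sum(P[n][m] * X[n][1] for n in range(N) for m in range(M)) / W
        myx = sum(P[n][m] * Y[m][0] for n in range(N) for m in range(M)) / W
        myy = sum(P[n][m] * Y[m][1] for n in range(N) for m in range(M)) / W
        dot = cross = 0.0
        for n in range(N):
            for m in range(M):
                ax, ay = Y[m][0] - myx, Y[m][1] - myy
                bx, by = X[n][0] - mxx, X[n][1] - mxy
                dot += P[n][m] * (ax * bx + ay * by)
                cross += P[n][m] * (ax * by - ay * bx)
        theta = math.atan2(cross, dot)
        c, s = math.cos(theta), math.sin(theta)
        tx = mxx - (c * myx - s * myy)
        ty = mxy - (s * myx + c * myy)
    return theta, tx, ty

# Recover a known rotation/translation (hypothetical demo data).
Y = [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0), (2.0, 3.0)]
X = [(math.cos(0.3) * x - math.sin(0.3) * y + 1.0,
      math.sin(0.3) * x + math.cos(0.3) * y - 0.5) for x, y in Y]
theta, tx, ty = rigid_em_align(X, Y)
```

    With well-separated points the soft correspondences quickly sharpen and the estimate converges to the true rotation and translation.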

  16. The free compressible viscous vortex

    NASA Technical Reports Server (NTRS)

    Colonius, Tim; Lele, Sanjiva K.; Moin, Parviz

    1991-01-01

    The present study investigates the effects of compressibility on free (unsteady) viscous heat-conducting vortices. Analytical solutions are found in the limit of large but finite Reynolds number and small but finite Mach number. It is shown that the spreading of the vortex causes a radial flow. This flow is given by the solution of an ordinary differential equation, which gives the dependence of the radial velocity on the tangential velocity, density, and temperature profiles of the vortex. Estimates of the radial velocity found by solving this equation are found to be in good agreement with numerical solutions of the full equations. The equations for the viscous evolution are expanded in powers of Mach number to obtain detailed analytical solutions. It is shown that swirling axisymmetric compressible flows generate negative radial velocities far from the vortex core owing to viscous effects, regardless of the initial distributions of vorticity, density, and entropy.

  17. Compressive Instability Phenomena During Springback

    NASA Astrophysics Data System (ADS)

    Kim, J.-B.; Yoon, J. W.; Yang, D. Y.

    2007-05-01

    Springback in sheet metal products complicates die design because small strains produce large displacements. Springback displacement can become severe for sheet metal products with few geometric constraints. After the first stage of stamping the outer case of a washing machine, a large amount of springback is observed. The stamping depth of the outer case is small while the stamped area is very large compared to the depth, so the formed part has few geometric constraints. In addition, a compressive instability occurs during the elastic recovery, and this instability enlarges the elastic recovery and the dimensional error. In this paper, the compressive instability during elastic recovery is analyzed using bifurcation theory. The final deformed shape after springback is obtained by bifurcating the solution path from the primary to the secondary path. The deformed shapes obtained by finite element analysis are in good agreement with the experimental data. The bifurcation behavior and the springback displacement for different forming depths are investigated.

  18. Compressive Instability Phenomena During Springback

    SciTech Connect

    Kim, J.-B.; Yoon, J. W.; Yang, D. Y.

    2007-05-17

    Springback in sheet metal products complicates die design because small strains produce large displacements. Springback displacement can become severe for sheet metal products with few geometric constraints. After the first stage of stamping the outer case of a washing machine, a large amount of springback is observed. The stamping depth of the outer case is small while the stamped area is very large compared to the depth, so the formed part has few geometric constraints. In addition, a compressive instability occurs during the elastic recovery, and this instability enlarges the elastic recovery and the dimensional error. In this paper, the compressive instability during elastic recovery is analyzed using bifurcation theory. The final deformed shape after springback is obtained by bifurcating the solution path from the primary to the secondary path. The deformed shapes obtained by finite element analysis are in good agreement with the experimental data. The bifurcation behavior and the springback displacement for different forming depths are investigated.

  19. A Compressed Terahertz Imaging Method

    NASA Astrophysics Data System (ADS)

    Zhang, Man; Pan, Rui; Xiong, Wei; He, Ting; Shen, Jing-Ling

    2012-10-01

    A compressed terahertz imaging method using a terahertz time domain spectroscopy system (THz-TDSS) is suggested and demonstrated. In the method, a parallel THz wave with a beam diameter of 4 cm from a usual THz-TDSS is used, and a square-shaped 2D echelon is placed in front of an imaged object. We confirm both in simulation and in experiment that only one terahertz time domain spectrum is needed to image the object. The image information is obtained from the compressed THz signal by deconvolution signal processing, and therefore the whole imaging time is greatly reduced in comparison with some other pulsed THz imaging methods. The present method will hopefully be used in real-time imaging.

  20. Subpicosecond compression experiments at Los Alamos National Laboratory

    SciTech Connect

    Carlsten, B.E.; Russell, S.J.; Kinross-Wright, J.M.

    1995-09-01

    The authors report on recent experiments using a magnetic chicane compressor at 8 MeV. Electron bunches at both low (0.1 nC) and high (1 nC) charges were compressed from 20 ps to less than 1 ps (FWHM). A transverse deflecting rf cavity was used to measure the bunch length at low charge; the bunch length at high charge was inferred from an induced energy spread of the beam. The longitudinal centrifugal-space charge force is calculated using a point-to-point numerical simulation and is shown not to influence the energy-spread measurement.
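    The chicane compression mechanism can be summarized by the standard linear-optics relation (textbook accelerator physics, not taken from this report; sign conventions vary between references):

```latex
\Delta z = R_{56}\,\delta, \qquad
\sigma_{z,f} \approx \left| 1 + h R_{56} \right| \sigma_{z,i},
```

    where $\delta$ is the relative energy deviation and $h = d\delta/dz$ is the correlated energy chirp imposed before the chicane; full compression corresponds to $h R_{56} \to -1$.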

  1. Compressing TV-image data

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
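    A hedged sketch of the activity-driven bit allocation idea: the particular estimator (sum of absolute first differences) and the proportional allocation rule below are our assumptions for illustration; the Tech Brief does not specify either.

```python
def activity(segment):
    """Assumed activity estimator: sum of absolute first differences."""
    return sum(abs(b - a) for a, b in zip(segment, segment[1:]))

def allocate_bits(line, seg_len, total_bits):
    """Split a line of samples into segments and divide the per-line
    bit budget in proportion to each segment's activity; quiet
    segments get few bits and can tolerate truncation."""
    segs = [line[i:i + seg_len] for i in range(0, len(line), seg_len)]
    acts = [activity(s) for s in segs]
    tot = sum(acts) or 1
    return [round(total_bits * a / tot) for a in acts]

# Hypothetical line: a flat run followed by a busy ramp.
line = [0, 0, 0, 0, 10, 20, 30, 40]
bits = allocate_bits(line, seg_len=4, total_bits=16)
```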

  2. Data compression techniques and applications

    NASA Astrophysics Data System (ADS)

    Benelli, G.; Cappellini, V.; Lotti, F.

    1980-02-01

    The paper reviews several data compression methods for signal and image digital processing and transmission, including both established and more recent techniques. Attention is also given to methods of prediction-interpolation, differential pulse code modulation, delta modulation and transformations. The processing of two-dimensional data is also considered, and the results of the application of these techniques to space telemetry and biomedical digital signal processing and telemetry systems are presented.

  3. The compressibility of nanocrystalline Pt

    NASA Astrophysics Data System (ADS)

    Mikheykin, A. S.; Dmitriev, V. P.; Chagovets, S. V.; Kuriganova, A. B.; Smirnova, N. V.; Leontyev, I. N.

    2012-10-01

    High-pressure behavior of carbon supported Pt nanoparticles (Pt/C) with an average particle size of 10.6 nm was investigated by in situ high-pressure synchrotron radiation x-ray diffraction up to 14 GPa at ambient temperature. Our results show that the compressibility of Pt/C nanoparticles decreases substantially as the particle size decreases. An interpretation based upon the available mechanisms of structural compliance in nanoscale vs bulk materials was proposed.
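    For context, the compressibility being compared is (in the standard definition, symbols ours) the inverse of the isothermal bulk modulus:

```latex
K_T = -V \left( \frac{\partial P}{\partial V} \right)_T, \qquad
\beta_T = \frac{1}{K_T},
```

    so a decrease in compressibility $\beta_T$ with decreasing particle size corresponds to a stiffer lattice (larger $K_T$) in the nanoparticles.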

  4. Turbulence modeling for compressible flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1977-01-01

    Material prepared for a course on Applications and Fundamentals of Turbulence given at the University of Tennessee Space Institute, January 10 and 11, 1977, is presented. A complete concept of turbulence modeling is described, and examples of progress for its use in computational aerodynamics are given. Modeling concepts, experiments, and computations using the concepts are reviewed in a manner that provides an up-to-date statement on the status of this problem for compressible flows.

  5. Antiproton compression and radial measurements

    SciTech Connect

    Andresen, G. B.; Bowe, P. D.; Hangst, J. S.; Bertsche, W.; Butler, E.; Charlton, M.; Humphries, A. J.; Jenkins, M. J.; Joergensen, L. V.; Madsen, N.; Werf, D. P. van der; Bray, C. C.; Chapman, S.; Fajans, J.; Povilus, A.; Wurtele, J. S.; Cesar, C. L.; Lambo, R.; Silveira, D. M.; Fujiwara, M. C.

    2008-08-08

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  6. Compressed air energy storage system

    DOEpatents

    Ahrens, F.W.; Kartsounes, G.T.

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  7. Compressed air energy storage system

    DOEpatents

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  8. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2003-01-01

    Various artificial compressibility methods for calculating three-dimensional, steady and unsteady, laminar and turbulent, incompressible Navier-Stokes equations are compared in this work. Each method is described in detail along with appropriate physical and numerical boundary conditions. Analysis of well-posedness and numerical solutions to test problems for each method are provided. A comparison based on convergence behavior, accuracy, stability and robustness is used to establish the relative positive and negative characteristics of each method.
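    In its simplest (Chorin) form, an artificial compressibility method replaces the incompressibility constraint with a pseudo-time pressure equation; a hedged sketch of that baseline formulation (the paper compares several variants):

```latex
\frac{1}{\beta}\frac{\partial p}{\partial \tau}
  + \frac{\partial u_j}{\partial x_j} = 0, \qquad
\frac{\partial u_i}{\partial \tau}
  + \frac{\partial (u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i,
```

    where $\beta$ is the artificial compressibility parameter and $\tau$ is pseudo-time; as $\partial p/\partial \tau \to 0$ the divergence-free condition is recovered.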

  9. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil.

    Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database.

    Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
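    For context on why general-purpose tools leave room for improvement here: plain 2-bit packing of an ACGT alphabet already beats 8-bit ASCII storage by a factor of four, a baseline any specialized DNA compressor aims to surpass. A minimal sketch (ours, not the coil edit-tree coder; real sequences also contain ambiguity codes such as N, which this ignores):

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack an ACGT string into bytes, 4 bases per byte (2 bits each)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk, byte = seq[i:i + 4], 0
        for j, base in enumerate(chunk):
            byte |= CODE[base] << (2 * j)
        out.append(byte)
    return bytes(out)

# 12 bases occupy 3 packed bytes instead of 12 ASCII bytes.
packed = pack("ACGTACGTACGT")
```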

  10. Compressibility effects on turbulent mixing

    NASA Astrophysics Data System (ADS)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence with a focus on the fundamental mechanisms that are responsible for such effects using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6 and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, like shear layers, mixing is reduced as Mach number increases. However, data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  11. Compressibility Effects in Aeronautical Engineering

    NASA Technical Reports Server (NTRS)

    Stack, John

    1941-01-01

    Compressible-flow research, while a relatively new field in aeronautics, is very old, dating back almost to the development of the first firearm. Over the last hundred years, researches have been conducted in the ballistics field, but these results have been of practically no use in aeronautical engineering because the phenomena that have been studied have been the more or less steady supersonic condition of flow. Some work that has been done in connection with steam turbines, particularly nozzle studies, has been of value. In general, however, understanding of compressible-flow phenomena has been very incomplete and permitted no real basis for the solution of aeronautical engineering problems in which the flow is likely to be unsteady because regions of both subsonic and supersonic speeds may occur. In the early phases of the development of the airplane, speeds were so low that the effects of compressibility could be justifiably ignored. During the last war and immediately after, however, propellers exhibited losses in efficiency as the tip speeds approached the speed of sound, and the first experiments of an aeronautical nature were therefore conducted with propellers. Results of these experiments indicated serious losses of efficiency, but aeronautical engineers were not seriously concerned at the time because it was generally possible to design propellers with quite low tip speeds. With the development of new engines having increased power and rotational speeds, however, the problems became of increasing importance.

  12. Efficacy of compression of different capacitance beds in the amelioration of orthostatic hypotension

    NASA Technical Reports Server (NTRS)

    Denq, J. C.; Opfer-Gehrking, T. L.; Giuliani, M.; Felten, J.; Convertino, V. A.; Low, P. A.

    1997-01-01

    Orthostatic hypotension (OH) is the most disabling and serious manifestation of adrenergic failure, occurring in the autonomic neuropathies, pure autonomic failure (PAF) and multiple system atrophy (MSA). No specific treatment is currently available for most etiologies of OH. A reduction in venous capacity, secondary to some physical counter maneuvers (e.g., squatting or leg crossing), or the use of compressive garments, can ameliorate OH. However, there is little information on the differential efficacy, or the mechanisms of improvement, engendered by compression of specific capacitance beds. We therefore evaluated the efficacy of compression of specific compartments (calves, thighs, low abdomen, calves and thighs, and all compartments combined), using a modified antigravity suit, on the end-points of orthostatic blood pressure, and symptoms of orthostatic intolerance. Fourteen patients (PAF, n = 9; MSA, n = 3; diabetic autonomic neuropathy, n = 2; five males and nine females) with clinical OH were studied. The mean age was 62 years (range 31-78). The mean +/- SEM orthostatic systolic blood pressure when all compartments were compressed was 115.9 +/- 7.4 mmHg, significantly improved (p < 0.001) over the head-up tilt value without compression of 89.6 +/- 7.0 mmHg. The abdomen was the only single compartment whose compression significantly reduced OH (p < 0.005). There was a significant increase of peripheral resistance index (PRI) with compression of abdomen (p < 0.001) or all compartments (p < 0.001); end-diastolic index and cardiac index did not change. We conclude that denervation increases vascular capacity, and that venous compression improves OH by reducing this capacity and increasing PRI. Compression of all compartments is the most efficacious, followed by abdominal compression, whereas leg compression alone was less effective, presumably reflecting the large capacity of the abdomen relative to the legs.

  13. Psychophysical rating of image compression techniques

    NASA Technical Reports Server (NTRS)

    Stein, Charles S.; Hitchner, Lewis E.; Watson, Andrew B.

    1989-01-01

    Image compression schemes abound with little work which compares their bit-rate performance based on subjective fidelity measures. Statistical measures of image fidelity, such as squared error measures, do not necessarily correspond to subjective measures of image fidelity. Most previous comparisons of compression techniques have been based on these statistical measures. A psychophysical method has been used to estimate, for a number of compression techniques, a threshold bit-rate yielding a criterion level of performance in discriminating original and compressed images. The compression techniques studied include block truncation, Laplacian pyramid, block discrete cosine transform, with and without a human visual system scaling, and cortex transform coders.

  14. The Critical Point Facility (CPF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Critical Point Facility (CPF) is an ESA multiuser facility designed for microgravity research onboard Spacelab. It has been conceived and built to offer investigators opportunities to conduct research on critical point phenomena in microgravity. This facility provides the high precision and stability temperature standards required in this field of research. It has been primarily designed for the purpose of optical investigations of transparent fluids. During a Spacelab mission, the CPF automatically processes several thermostats sequentially, each thermostat corresponding to an experiment. The CPF is now integrated in Spacelab at Kennedy Space Center, in preparation for the International Microgravity Lab. mission. The CPF was designed to submit transparent fluids to an adequate, user defined thermal scenario, and to monitor their behavior by using thermal and optical means. Because they are strongly affected by gravity, a good understanding of critical phenomena in fluids can only be gained in low gravity conditions. Fluids at the critical point become compressed under their own weight. The role played by gravity in the formation of interfaces between distinct phases is not clearly understood.

  15. Compression Limit of Two-Dimensional Water Constrained in Graphene Nanocapillaries.

    PubMed

    Zhu, YinBo; Wang, FengChao; Bai, Jaeil; Zeng, Xiao Cheng; Wu, HengAn

    2015-12-22

    Evaluation of the tensile/compression limit of a solid under conditions of tension or compression is often performed to provide mechanical properties that are critical for structure design and assessment. Algara-Siller et al. recently demonstrated that when water is constrained between two sheets of graphene, it becomes a two-dimensional (2D) liquid and then is turned into an intriguing monolayer solid with a square pattern under high lateral pressure [Nature, 2015, 519, 443-445]. From a mechanics point of view, this liquid-to-solid transformation characterizes the compression limit (or metastability limit) of the 2D monolayer water. Here, we perform a simulation study of the compression limit of 2D monolayer, bilayer, and trilayer water constrained in graphene nanocapillaries. At 300 K, a myriad of 2D ice polymorphs (both crystalline-like and amorphous) are formed from the liquid water at different widths of the nanocapillaries, ranging from 6.0 to 11.6 Å. For monolayer water, the compression limit is typically a few hundred MPa, while for the bilayer and trilayer water, the compression limit is 1.5 GPa or higher, reflecting the ultrahigh van der Waals pressure within the graphene nanocapillaries. The compression-limit (phase) diagram is obtained at the nanocapillary width versus pressure (h-P) plane, based on the comprehensive molecular dynamics simulations at numerous thermodynamic states as well as on the Clapeyron equation. Interestingly, the compression-limit curves exhibit multiple local minima.

  16. Peak compression factor of proteins.

    PubMed

    Gritti, Fabrice; Guiochon, Georges

    2009-08-14

    An experimental protocol is proposed in order to measure with accuracy and precision the band compression factor G(12)(2) of a protein in gradient RPLC. Extra-column contributions to bandwidth and the dependency of both the retention factor and the reduced height equivalent to a theoretical plate (HETP) on the mobile phase composition were taken into account. The band compression factor of a small protein (insulin, MW kDa) was measured on a 2.1 mm x 50 mm column packed with 1.7 microm C(4)-bonded bridged ethylsiloxane BEH-silica particles, for 1 microL samples of dilute insulin solution (<0.05 g/L). A linear gradient profile of acetonitrile (25-28% acetonitrile in water containing 0.1% trifluoroacetic acid) was applied during three different gradient times (5, 12.5, and 20 min). The mobile phase flow rate was set at 0.20 mL/min in order to avoid heat friction effects (maximum column inlet pressure 180 bar). The band compression factor of insulin is defined as the ratio of the experimental space band variance measured under gradient conditions to the reference space band variance, which would be observed if no thermodynamic compression would take place during gradient elution. It was 0.56, 0.71, and 0.76 with gradient times of 5, 12.5, and 20 min, respectively. These factors are 20-30% smaller than the theoretical band compression factors (0.79, 0.89, and 0.93) calculated from an equation derived from the well-known Poppe equation, later extended to any retention models and columns whose HETP depends on the mobile phase composition. This difference is explained in part by the omission in the model of the effect of the pressure gradient on the local retention factor of insulin during gradient elution. A much better agreement is obtained for insulin when this effect is taken into account.
For lower molecular weight compounds, the pressure gradient has little effect but the finite retention of acetonitrile causes a distortion of the gradient shape during the migration of
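    In the notation of the abstract (rendered here in symbols of our choosing), the band compression factor is the variance ratio

```latex
G_{12}^{2} = \frac{\sigma_{z,\mathrm{gradient}}^{2}}{\sigma_{z,\mathrm{ref}}^{2}},
```

    where $\sigma_{z,\mathrm{gradient}}^{2}$ is the spatial band variance measured under gradient conditions and $\sigma_{z,\mathrm{ref}}^{2}$ is the reference variance without thermodynamic compression; values below one indicate net band compression during gradient elution.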

  17. Practicality of magnetic compression for plasma density control

    SciTech Connect

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-16

    Here, plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic-like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.
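    The transit-time criterion above can be made concrete with the standard Alfven speed formula (textbook plasma physics; the numerical inputs below are illustrative, not taken from the paper):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]
M_P = 1.6726e-27          # proton mass [kg]

def alfven_transit_time(B, n_i, L, m_i=M_P):
    """Transit time L / v_A of a compressional Alfven wave across a
    slab of width L, with v_A = B / sqrt(mu0 * n_i * m_i)."""
    rho = n_i * m_i                      # mass density [kg/m^3]
    v_a = B / math.sqrt(MU0 * rho)       # Alfven speed [m/s]
    return L / v_a

# Illustrative numbers: 0.1 T field, 1e19 m^-3 ions, 1 cm slab.
tau = alfven_transit_time(B=0.1, n_i=1e19, L=0.01)
```

    Compression drives faster than this timescale cannot be communicated across the slab quasi-statically, which is the regime where the shock-pair behavior reported above appears.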

  18. The compression of a heavy floating elastic film.

    PubMed

    Jambon-Puillet, Etienne; Vella, Dominic; Protière, Suzie

    2016-11-23

    We study the effect of film density on the uniaxial compression of thin elastic films at a liquid-fluid interface. Using a combination of experiments and theory, we show that dense films first wrinkle and then fold as the compression is increased, similarly to what has been reported when the film density is neglected. However, we highlight the changes in the shape of the fold induced by the film's own weight and extend the model of Diamant and Witten [Phys. Rev. Lett., 2011, 107, 164302] to understand these changes. In particular, we suggest that it is the weight of the film that breaks the up-down symmetry apparent from previous models, but elusive experimentally. We then compress the film beyond the point of self-contact and observe a new behaviour dependent on the film density: the single fold that forms after wrinkling transitions into a closed loop after self-contact, encapsulating a cylindrical droplet of the upper fluid. The encapsulated drop either causes the loop to bend upward or to sink deeper as the compression is increased, depending on the relative buoyancy of the drop-film combination. We propose a model to qualitatively explain this behaviour. Finally, we discuss the relevance of the different buckling modes predicted in previous theoretical studies and highlight the important role of surface tension in the shape of the fold that is observed from the side-an aspect that is usually neglected in theoretical analyses.

  19. Practicality of magnetic compression for plasma density control

    DOE PAGES

    Gueroult, Renaud; Fisch, Nathaniel J.

    2016-03-16

    Here, plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators [P. F. Schmit and N. J. Fisch, Phys. Rev. Lett. 109, 255003 (2012)]. Using particle in cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic-like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features still might be advantageous in particle accelerators.

  20. Feature preserving compression of high resolution SAR images

    NASA Astrophysics Data System (ADS)

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing

    2006-10-01

    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in original images, especially at smooth areas and edges with low contrast. This is known as the "smoothing effect". It makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that reduces the "smoothing effect" based on adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the block. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that image quality is improved significantly at compression ratios up to 16:1, and more weak information is preserved.

  1. Chapter 22: Compressed Air Evaluation Protocol

    SciTech Connect

    Benton, N.

    2014-11-01

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  2. Influence of Tension-Compression Asymmetry on the Mechanical Behavior of AZ31B Magnesium Alloy Sheets in Bending

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Beeh, Elmar; Friedrich, Horst E.

    2016-03-01

    Magnesium alloys are promising materials for lightweight design in the automotive industry due to their high strength-to-mass ratio. This study investigates the influence of tension-compression asymmetry on the radius of curvature and energy absorption capacity of AZ31B-O magnesium alloy sheets in bending. The mechanical properties were characterized using tension, compression, and three-point bending tests. The material exhibits significant tension-compression asymmetry in terms of strength and strain hardening rate due to extension twinning in compression. The compressive yield strength is much lower than the tensile yield strength, while the strain hardening rate is much higher in compression. Furthermore, tension-compression asymmetry in terms of the r value (Lankford value) was also observed: the r value in tension is much higher than that in compression. The bending results indicate that the AZ31B-O sheet can outperform steel and aluminum sheets in terms of specific energy absorption in bending, mainly due to its low density. In addition, the AZ31B-O sheet deformed with a larger radius of curvature than the steel and aluminum sheets, which benefits energy absorption capacity. Finally, finite element simulation of three-point bending was performed using LS-DYNA, and the results confirmed that the larger radius of curvature of a magnesium specimen is mainly attributable to the high strain hardening rate in compression.

  3. Envera Variable Compression Ratio Engine

    SciTech Connect

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio, and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near-term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbocharger or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines, the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbocharger or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high-load demand periods, there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low

  4. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they occupy more storage space and consume more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied to networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is shown; it is central to realizing image compression because of how widely the technique is used. Second, a deeper understanding of the DCT is developed through the use of Matlab, the process of DCT-based image compression, and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, with an analysis of the quality of the compressed picture. The DCT is certainly not the only algorithm for image compression, and more algorithms will emerge that compress images with high quality. The technology of image compression will be widely used in networks and communications in the future.
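The pipeline the abstract describes (blockwise DCT, quantization, reconstruction) can be sketched in a few lines. This Python version is illustrative only, assuming square blocks and a single uniform quantization step rather than a full Huffman-coded chain:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: row k holds the k-th cosine basis vector.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

def compress_block(block, q_step=16.0):
    # Forward 2-D DCT, uniform scalar quantization, inverse 2-D DCT.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / q_step) * q_step
    return C.T @ quantized @ C, quantized
```

A smooth block concentrates its energy in a few low-frequency coefficients, so most quantized entries are zero, which is what an entropy coder then exploits.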

  5. Krylov methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.

  6. Vapor Compression Distillation Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hutchens, Cindy F.

    2002-01-01

    One of the major requirements associated with operating the International Space Station is the transportation (space shuttle and Russian Progress spacecraft launches) necessary to re-supply station crews with food and water. The Vapor Compression Distillation (VCD) Flight Experiment, managed by NASA's Marshall Space Flight Center in Huntsville, Ala., is a full-scale demonstration of technology being developed to recycle crewmember urine and wastewater aboard the International Space Station and thereby reduce the amount of water that must be re-supplied. Based on results of the VCD Flight Experiment, an operational urine processor will be installed in Node 3 of the space station in 2005.

  7. Compressibility Characteristics of Compacted Snow

    DTIC Science & Technology

    1976-06-01

    Compressibility characteristics of compacted snow. Gunars Abele and Anthony J. Gow, CRREL Report 76-21, June 1976. Corps of Engineers, U.S. Army Cold Regions Research and Engineering Laboratory, Hanover, New Hampshire. Approved for public release.

  8. Compressed Sensing Meets Wave Chaology

    NASA Astrophysics Data System (ADS)

    Pinto, Innocenzo M.; Addesso, Paolo; Principe, Maria

    2015-03-01

    The Wigner distribution is an important tool in the study of high-frequency wave-packet dynamics in ray-chaotic enclosures. Smoothing the Wigner distribution helps improving its readability, by suppressing nonlinear artifacts, but spoils its resolution. Adding a sparsity constraint to smoothing, in the spirit of the compressed coding paradigm, restores resolution while still avoiding artifacts. The result is particularly valuable in the perspective of complexity gauging via Renyi-Wehrl entropy measures. Representative numerical experiments are presented to substantiate such clues.

  9. Engineering design point for a 1MW fusion neutron source

    NASA Astrophysics Data System (ADS)

    Sieck, Paul; Melnik, Paul; Woodruff, Simon; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald

    2016-10-01

    Compact fusion neutron sources currently serve important roles in medical isotope production, and could be used for waste transmutation if sufficient fluence can be attained. The engineering design point for a compact neutron source with a target rate of 10^17 n/s, based on the adiabatic compression of a spheromak, is presented. The compression coils and passive structure are designed to maintain stability during compression. The power supplies consist of four separate MJ-class banks; PSpice simulations and power requirement calculations will be shown. We outline the diagnostic set that will be required for an experimental campaign to address issues relating to both formation efficiency and energy confinement scaling during compression. Work supported in part by DARPA Grant N66001-14-1-4044 and the IAEA CRP on compact fusion neutron sources.

  10. Fingerprint Template Compression by Solving a Minimum Label k-Node Subtree Problem

    NASA Astrophysics Data System (ADS)

    Raidl, Günther R.; Chwatal, Andreas

    2007-09-01

    We present a new approach for strongly compressing a relatively small amount of poorly structured data, as is required when embedding fingerprint template information in images of ID-cards by means of watermarking techniques. The approach is based on the construction of a directed tree spanning a selected part of the data points and a codebook of template arcs used for a compact encoding of relative point positions. The selection of data points, the tree structure, and the codebook are simultaneously optimized by a new exact branch-and-cut approach or, alternatively, a faster greedy randomized adaptive search procedure (GRASP) to maximize compression. Experiments indicate that the new method can encode the required information in less space than several standard compression algorithms.

  11. Biomechanics of turtle shells: how whole shells fail in compression.

    PubMed

    Magwene, Paul M; Socha, John J

    2013-02-01

    Turtle shells are a form of armor that provides varying degrees of protection against predation. Although this function of the shell as armor is widely appreciated, the mechanical limits of protection and the modes of failure when subjected to breaking stresses have not been well explored. We studied the mechanical properties of whole shells and of isolated bony tissues and sutures in four species of turtles (Trachemys scripta, Malaclemys terrapin, Chrysemys picta, and Terrapene carolina) using a combination of structural and mechanical tests. Structural properties were evaluated by subjecting whole shells to compressive and point loads in order to quantify maximum load, work to failure, and relative shell deformations. The mechanical properties of bone and sutures from the plastral region of the shell were evaluated using three-point bending experiments. Analysis of whole shell structural properties suggests that small shells undergo relatively greater deformations before failure than do large shells and similar amounts of energy are required to induce failure under both point and compressive loads. Location of failures occurred far more often at sulci than at sutures (representing the margins of the epidermal scutes and the underlying bones, respectively), suggesting that the small grooves in the bone created by the sulci introduce zones of weakness in the shell. Values for bending strength, ultimate bending strain, Young's modulus, and energy absorption, calculated from the three-point bending data, indicate that sutures are relatively weaker than the surrounding bone, but are able to absorb similar amounts of energy due to higher ultimate strain values.

  12. Observations of enhanced OTR signals from a compressed electron beam

    SciTech Connect

    Lumpkin, A.H.; Sereno, N.S.; Borland, M.; Li, Y.; Nemeth, K.; Pasky, S.; /Argonne

    2008-05-01

    The Advanced Photon Source (APS) injector complex includes an option for photocathode (PC) gun beam injection into the 450-MeV S-band linac. At the 150-MeV point, a 4-dipole chicane was used to compress the micropulse bunch length from a few ps to sub 0.5 ps (FWHM). Noticeable enhancements of the optical transition radiation (OTR) signal sampled after the APS chicane were then observed as has been reported in LCLS injector commissioning. A FIR CTR detector and interferometer were used to monitor the bunch compression process and correlate the appearance of localized spikes of OTR signal (5 to 10 times brighter than adjacent areas) within the beam image footprint. We have done spectral dependency measurements at 375 MeV with a series of band pass filters centered in 50-nm increments from 400 to 700 nm and observed a broadband enhancement in these spikes. Discussions of the possible mechanisms will be presented.

  13. Multiview video and depth compression for free-view navigation

    NASA Astrophysics Data System (ADS)

    Higuchi, Yuta; Tehrani, Mehrdad Panahpour; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    2012-03-01

    In this paper, we discuss a multiview video and depth coding system for multiview video applications such as 3DTV and Free Viewpoint Television (FTV) [1]. We target an appropriate multiview and depth compression method, and we investigate the effect on free-view synthesis quality of changing the transmission rates between the multiview and depth sequences. In the simulations, we employ MVC in parallel to compress the multiview video and depth sequences at different bitrates, and compare the virtual view sequences generated from the decoded data with the original video sequences taken from the same viewpoint. Our experimental results show that the bitrate of the multiview depth stream has less effect on view synthesis quality than that of the multiview video stream.

  14. Hybrid thermal link-wise artificial compressibility method

    NASA Astrophysics Data System (ADS)

    Obrecht, Christian; Kuznik, Frédéric

    2015-10-01

    Thermal flow prediction is a subject of interest from both scientific and engineering points of view. Our motivation is to develop an accurate, easy-to-implement, and highly scalable method for convective flow simulation. To this end, we present an extension to the link-wise artificial compressibility method (LW-ACM) for thermal simulation of weakly compressible flows. The novel hybrid formulation uses second-order finite difference operators of the energy equation based on the same stencils as the LW-ACM. For validation purposes, the differentially heated cubic cavity was simulated. The simulations remained stable for Rayleigh numbers up to Ra = 10^8. The Nusselt numbers at isothermal walls and the dynamic quantities are in good agreement with reference values from the literature. Our results show that the hybrid thermal LW-ACM is an effective and easy-to-use method for simulating convective flows.

  15. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves also can be supported by moving fluids as well as quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.

  16. Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging

    SciTech Connect

    Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.

    2010-01-01

    Conventional mammography cannot distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI imaging of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because, when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image quality.

  17. Protein compressibility, dynamics, and pressure.

    PubMed Central

    Kharakoz, D P

    2000-01-01

    The relationship between the elastic and dynamic properties of native globular proteins is considered on the basis of a wide set of reported experimental data. The formation of a small cavity, capable of accommodating water, in the protein interior is associated with the elastic deformation, whose contribution to the free energy considerably exceeds the heat motion energy. Mechanically, the protein molecule is a highly nonlinear system. This means that its compressibility sharply decreases upon compression. The mechanical nonlinearity results in the following consequences related to the intramolecular dynamics of proteins: 1) The sign of the electrostriction effect in the protein matrix is opposite that observed in liquids-this is an additional indication that protein behaves like a solid particle. 2) The diffusion of an ion from the solvent to the interior of a protein should depend on pressure nonmonotonically: at low pressure diffusion is suppressed, while at high pressure it is enhanced. Such behavior is expected to display itself in any dynamic process depending on ion diffusion. Qualitative and quantitative expectations ensuing from the mechanical properties are concordant with the available experimental data on hydrogen exchange in native proteins at ambient and high pressure. PMID:10866977

  18. Compression creep of filamentary composites

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Tuttle, M. E.

    1988-01-01

    Axial and transverse strain fields induced in composite laminates subjected to compressive creep loading were compared for several types of laminate layups. Unidirectional graphite/epoxy as well as multi-directional graphite/epoxy and graphite/PEEK layups were studied. Specimens with and without holes were tested. The specimens were subjected to compressive creep loading for a 10-hour period. In-plane displacements were measured using moire interferometry. A computer based data reduction scheme was developed which reduces the whole-field displacement fields obtained using moire to whole-field strain contour maps. Only slight viscoelastic response was observed in matrix-dominated laminates, except for one test in which catastrophic specimen failure occurred after a 16-hour period. In this case the specimen response was a complex combination of both viscoelastic and fracture mechanisms. No viscoelastic effects were observed for fiber-dominated laminates over the 10-hour creep time used. The experimental results for specimens with holes were compared with results obtained using a finite-element analysis. The comparison between experiment and theory was generally good. Overall strain distributions were very well predicted. The finite element analysis typically predicted slightly higher strain values at the edge of the hole, and slightly lower strain values at positions removed from the hole, than were observed experimentally. It is hypothesized that these discrepancies are due to nonlinear material behavior at the hole edge, which were not accounted for during the finite-element analysis.

  19. Hemifacial spasm and neurovascular compression.

    PubMed

    Lu, Alex Y; Yeung, Jacky T; Gerrard, Jason L; Michaelides, Elias M; Sekula, Raymond F; Bulsara, Ketan R

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve.

  20. Laser Compression of Nanocrystalline Metals

    NASA Astrophysics Data System (ADS)

    Meyers, Marc

    2009-06-01

    Laser compression carried out at the Omega and Janus lasers yields new information on the deformation mechanisms of nanocrystalline Ni. Although conventional deformation does not produce hardening, the extreme regime imparted by laser compression generates an increase in hardness, attributed to the residual dislocations observed in the structure by TEM. An analytical model is applied to predict the critical pressures for the cell-to-stacking-fault transition in single-crystalline nickel and the onset of twinning in nanocrystalline nickel. The slip-twinning transition pressure is shifted from 20 GPa, for polycrystalline Ni, to 80 GPa, for Ni with a grain size of 10 nm. Contributions to the net strain from the mechanisms of plastic deformation (partials, perfect dislocations, twinning, and grain-boundary shear) were quantified in the nanocrystalline samples through MD calculations. The effect of release, a phenomenon often neglected in MD simulations, on dislocation behavior was established. A large fraction of the dislocations generated at the front are annihilated. In collaboration with Hussam Jarmakani, University of California, San Diego; Eduardo Bringa, U. Nacional de Cuyo; Bruce Remington, Lawrence Livermore National Laboratory; V. Nhon, University of Illinois; P. Earhart and Morris Wang, Lawrence Livermore National Laboratory.

  1. Study on Huber fractal image compression.

    PubMed

    Jeng, Jyh-Horng; Tseng, Chun-Chieh; Hsieh, Jer-Guang

    2009-05-01

    In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of the fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the noise present in the corrupted image. This leads to a new concept of robust fractal image compression, and the proposed HFIC is one of our attempts toward its design. The main disadvantage of HFIC is its high computational cost. To overcome this drawback, the particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image, and that the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.
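The robust affine fit that replaces ordinary least squares in such a scheme can be sketched with iteratively reweighted least squares using Huber weights. This is a hypothetical minimal version in Python; the paper's PSO-accelerated encoder and the fractal domain-range matching are not shown:

```python
import numpy as np

def huber_fit(x, y, delta=1.0, iters=50):
    # Robust affine fit y ~ s*x + o via iteratively reweighted least squares.
    # Huber weights: 1 for small residuals, delta/|r| for large ones, so
    # outliers are progressively downweighted instead of dominating the fit.
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    s = o = 0.0
    for _ in range(iters):
        W = np.diag(w)
        s, o = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - (s * x + o)
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
    return s, o
```

With one gross outlier in the data, the Huber fit stays close to the underlying line while an ordinary least-squares fit is pulled away, which is exactly the property that makes the encoder insensitive to image noise.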

  2. Compressed sensing for bioelectric signals: a review.

    PubMed

    Craven, Darren; McGinley, Brian; Kilmartin, Liam; Glavin, Martin; Jones, Edward

    2015-03-01

    This paper provides a comprehensive review of compressed sensing or compressive sampling (CS) in bioelectric signal compression applications. The aim is to provide a detailed analysis of the current trends in CS, focusing on the advantages and disadvantages in compressing different biosignals and its suitability for deployment in embedded hardware. Performance metrics such as percent root-mean-squared difference (PRD), signal-to-noise ratio (SNR), and power consumption are used to objectively quantify the capabilities of CS. Furthermore, CS is compared to state-of-the-art compression algorithms in compressing electrocardiogram (ECG) and electroencephalography (EEG) as examples of typical biosignals. The main technical challenges associated with CS are discussed along with the predicted future trends.
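The PRD and SNR metrics used in such comparisons are simple to compute; a sketch with the definitions as commonly used in the biosignal compression literature:

```python
import math

def prd(original, reconstructed):
    # Percent root-mean-squared difference between a signal and its
    # reconstruction: 100 * sqrt(sum((x - xr)^2) / sum(x^2)).
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def snr_db(original, reconstructed):
    # Signal-to-noise ratio of the reconstruction error, in decibels.
    num = sum(o ** 2 for o in original)
    den = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    return 10.0 * math.log10(num / den)
```

The two are directly related: SNR in dB equals 20·log10(100/PRD), so reporting either one determines the other.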

  3. Industrial Compressed Air System Energy Efficiency Guidebook.

    SciTech Connect

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  4. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool for solving tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.
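The idea of re-quantizing residuals at an adaptive floating-point accuracy can be illustrated by truncating mantissa bits of a float64, which leaves a bounded relative error while making the bit pattern far more compressible. This is a simplified sketch, not the authors' scheme:

```python
import struct

def truncate_mantissa(value, keep_bits):
    # Zero the low (52 - keep_bits) mantissa bits of an IEEE-754 float64.
    # The relative error is below 2**-keep_bits, and the long runs of
    # trailing zero bits compress far better with a generic entropy coder.
    drop = 52 - keep_bits
    bits = struct.unpack('<Q', struct.pack('<d', value))[0]
    mask = ~((1 << drop) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits & mask))[0]
```

Choosing `keep_bits` per grid or per iteration is a crude analogue of the adaptive accuracy control described in the abstract.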

  5. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
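The progressive-transmission idea (send a coarse version first, then refinements on request) can be sketched as successive residual quantization. This is illustrative only; the paper uses subband coding rather than this simple scalar scheme:

```python
def progressive_encode(signal, passes=4, step=1.0):
    # Each pass quantizes the remaining residual with a step half as coarse
    # as the previous one, producing refinement layers the decoder can stop
    # consuming at any point.
    residual = list(signal)
    layers = []
    for _ in range(passes):
        q = [round(r / step) * step for r in residual]
        layers.append(q)
        residual = [r - v for r, v in zip(residual, q)]
        step /= 2.0
    return layers

def progressive_decode(layers):
    # Sum however many refinement layers have been received so far.
    out = [0.0] * len(layers[0])
    for layer in layers:
        out = [a + b for a, b in zip(out, layer)]
    return out
```

After k layers with initial step 1.0, the per-sample error is at most 1/2^k, so a seismologist could stop after a coarse pass and request refinements only where the waveform looks interesting.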

  6. Eccentric crank variable compression ratio mechanism

    SciTech Connect

    Lawrence, Keith Edward; Moser, William Elliott; Roozenboom, Stephan Donald; Knox, Kevin Jay

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  7. Compressed data for the movie industry

    NASA Astrophysics Data System (ADS)

    Tice, Bradley S.

    2013-12-01

    The paper will present a compression algorithm that will allow for both random and non-random sequential binary strings of data to be compressed for storage and transmission of media information. The compression system has direct applications to the storage and transmission of digital media such as movies, television, audio signals and other visual and auditory signals needed for engineering practicalities in such industries.

  8. Pulse Compression Made Easy With VSIPL++

    DTIC Science & Technology

    2007-11-02

    Slide excerpts: the VSIPL++ (C++) API sits above the VSIPL C API, the VSI/Pro internal C++ engine, and the VSI/Pro C/ASM kernels. Object-oriented strategies include deferred evaluation. Critical benchmarks target synthetic aperture radar pulse compression in state-of-the-art radar systems. Pulse compression, the VSIPL way (pseudocode): create vectors; create a forward FFT object; create an inverse FFT object.

  9. Wavelet transform approach to video compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-04-01

    In this research, we propose a video compression scheme that uses the boundary-control vectors to represent the motion field and the embedded zerotree wavelet (EZW) to compress the displacement frame difference. When compared to the DCT-based MPEG, the proposed new scheme achieves a better compression performance in terms of the MSE (mean square error) value and visual perception for the same given bit rate.

  10. New Theory and Algorithms for Compressive Sensing

    DTIC Science & Technology

    2009-03-06

    ...measurement device has limited computational resources (as in a sensor network). Fortunately, over the past two years a new theory of compressive sensing... Cited works include "...neural circuits," Neural Computation, vol. 20, pp. 2526-2563, and S. Sarvotham, D. Baron, and R. Baraniuk, "Measurements vs. bits: Compressed sensing meets..." ...measurements that correspond to the problem structure, rather than bandwidth. Second, we improved on previous work in distributed compressive...

  11. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
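The quantization step that trades bit rate against perceived quality can be illustrated with the standard (non-adapted) JPEG luminance matrix; the invention's contribution, adapting that matrix per image via visual masking and error pooling, is not reproduced in this sketch:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

# The standard JPEG luminance quantization matrix; larger entries mean
# coarser quantization (fewer bits) for the corresponding frequency.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

C = dct_matrix()
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16.0 - 128.0  # smooth ramp

coeffs = C @ block @ C.T              # forward 8x8 DCT
q = np.round(coeffs / Q)              # quantization: where the bits are spent
recon = C.T @ (q * Q) @ C             # dequantize + inverse DCT
```

For a smooth block most quantized coefficients are zero, so the reconstruction error stays small; the patented method chooses the matrix entries so that this error is perceptually minimal for the given bit rate.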

  12. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
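A toy software model of the claimed row layout, with tag blocks indexing non-uniformly sized compressed blocks and zlib standing in for the hardware compression logic; all names are illustrative, not from the patent:

```python
import zlib
from dataclasses import dataclass

@dataclass
class TagBlock:
    block_id: int      # which data block this tag represents
    offset: int        # byte offset of the compressed block within the row
    length: int        # compressed length (non-uniform across blocks)

# One cache row: compressed data blocks of non-uniform size + a tag per block.
blocks = [b"A" * 64, bytes(range(64)), b"seismic" * 9 + b"x"]
row_data = bytearray()
tags = []
for i, blk in enumerate(blocks):
    comp = zlib.compress(blk)                      # "compression logic" on store
    tags.append(TagBlock(i, len(row_data), len(comp)))
    row_data += comp

def read_block(row: bytes, tag: TagBlock) -> bytes:
    """"Decompression logic" on the access path: the tag locates the block."""
    return zlib.decompress(row[tag.offset:tag.offset + tag.length])
```

Because compressed sizes differ per block, the tags are what make random access into the row possible, which mirrors the tag-block role in the claim.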

  13. Image compression requirements and standards in PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1995-05-01

    Cost effective telemedicine and storage create a need for medical image compression. Compression saves communication bandwidth and reduces the size of the stored images. After clinicians become acquainted with the quality of the images using some of the newer algorithms, they accept the idea of lossy compression. The older algorithms, JPEG and MPEG in particular, are generally not adequate for high quality compression of medical images. The requirements for compression for medical images center on diagnostic quality images after the restoration of the images. The compression artifacts should not interfere with the viewing of the images for diagnosis. New requirements for compression arise from the fact that the images will likely be viewed on a computer workstation, where the images may be manipulated in ways that would bring out the artifacts. A medical imaging compression standard must be applicable across a large variety of image types from CT and MR to CR and ultrasound. To have one or a very few compression algorithms that are effective across a broad range of image types is desirable. Related series of images as for CT, MR, or cardiology require inter-image processing as well as intra-image processing for effective compression. Two preferred decompositions of the medical images are lapped orthogonal transforms and wavelet transforms. These transforms decompose the images in frequency in two different ways. The lapped orthogonal transforms groups the data according to the area where the data originated, while the wavelet transforms group the data by the frequency band of the image. The compression realized depends on the similarity of close transform coefficients. Huffman coding or the coding of the RICE algorithm are a beginning for the encoding. To be really effective the coding must have an extension for the areas where there is little information, the low entropy extension. In these areas there are less than one bit per pixel and multiple pixels must be

  14. Method and apparatus for compressing gas

    SciTech Connect

    Allam, R.J.

    1984-07-24

    The fuel required to provide the energy for compressing a gas can be reduced by compressing the gas substantially adiabatically through a pressure ratio of at least 2.5:1 in a compressor, cooling the hot compressed gas by heat exchange with water at superatmospheric pressure, further heating the water to produce superheated steam and using the superheated steam to drive the compressor. The total amount of fuel consumed can be considerably less than that used for compressing gas conventionally (i.e. substantially isothermally).
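The recoverable heat comes from the temperature rise of adiabatic compression. For an ideal gas at the patent's minimum 2.5:1 pressure ratio the outlet temperature follows directly; the inlet conditions below are illustrative, not from the patent:

```python
# Isentropic relation for an ideal gas: T2 = T1 * r**((gamma - 1) / gamma)
gamma = 1.4                              # ratio of specific heats for air
cp = 1005.0                              # specific heat, J/(kg K), air
T1 = 300.0                               # inlet temperature, K
r = 2.5                                  # pressure ratio

T2 = T1 * r ** ((gamma - 1.0) / gamma)   # outlet temperature, ~390 K
q_recoverable = cp * (T2 - T1)           # J/kg of heat available to raise steam
```

The roughly 90 K rise is the heat that the pressurized-water loop captures and returns as superheated steam to drive the compressor.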

  15. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  16. Comparing biological networks via graph compression

    PubMed Central

    2010-01-01

    Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks. PMID:20840727
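The compression-ratio similarity measure can be approximated with a generic compressor. This zlib sketch follows the same principle (shared structure makes the concatenation compress better) but is not the paper's CompressEdge/CompressVertices algorithm, whose edge-contraction step is what guarantees unique results:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed size in bytes."""
    return len(zlib.compress(s, 9))

def compression_similarity(edges_a, edges_b) -> float:
    """Compress each edge list and their concatenation; lower = more similar."""
    a = "\n".join(sorted(edges_a)).encode()
    b = "\n".join(sorted(edges_b)).encode()
    return c(a + b) / (c(a) + c(b))

net1 = [f"g{i} g{i+1}" for i in range(200)]               # a path-like network
net2 = net1[:150] + [f"h{i} h{i+1}" for i in range(50)]   # mostly shared edges
net3 = [f"x{i} y{i * 7 % 97}" for i in range(200)]        # unrelated network

s_similar = compression_similarity(net1, net2)
s_different = compression_similarity(net1, net3)
```

The overlapping networks concatenate into something the compressor can exploit, so `s_similar` comes out below `s_different`, which is the intuition behind measuring network similarity by compression ratio.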

  17. Pulsed spheromak reactor with adiabatic compression

    SciTech Connect

    Fowler, T K

    1999-03-29

    Extrapolating from the Pulsed Spheromak reactor and the LINUS concept, we consider ignition achieved by injecting a conducting liquid into the flux conserver to compress a low temperature spheromak created by gun injection and ohmic heating. The required energy to achieve ignition and high gain by compression is comparable to that required for ohmic ignition and the timescale is similar so that the mechanical power to ignite by compression is comparable to the electrical power to ignite ohmically. Potential advantages and problems are discussed. Like the High Beta scenario achieved by rapid fueling of an ohmically ignited plasma, compression must occur on timescales faster than Taylor relaxation.

  18. Compression of thick laminated composite beams with initial impact-like damage

    NASA Technical Reports Server (NTRS)

    Breivik, N. L.; Guerdal, Z.; Griffin, O. H., Jr.

    1992-01-01

    While the study of compression after impact of laminated composites has been under consideration for many years, the complexity of the damage initiated by low velocity impact has not lent itself to simple predictive models for compression strength. The damage modes due to non-penetrating, low velocity impact by large diameter objects can be simulated using quasi-static three-point bending. The resulting damage modes are less coupled and more easily characterized than actual impact damage modes. This study includes the compression testing of specimens with well documented initial damage states obtained from three-point bend testing. Compression strengths and failure modes were obtained for quasi-isotropic stacking sequences from 0.24 to 1.1 inches thick with both grouped and interspersed ply stacking. Initial damage prior to compression testing was divided into four classifications based on the type, extent, and location of the damage. These classifications are multiple through-thickness delaminations, isolated delamination, damage near the surface, and matrix cracks. Specimens from each classification were compared to specimens tested without initial damage in order to determine the effects of the initial damage on the final compression strength and failure modes. A finite element analysis was used to aid in the understanding and explanation of the experimental results.

  19. SPS antenna pointing control

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1980-01-01

    The pointing control of a microwave antenna of the Satellite Power System was investigated, emphasizing: (1) the SPS antenna pointing error sensing method; (2) a rigid body pointing control design; and (3) approaches for modeling the flexible body characteristics of the solar collector. Accuracy requirements for the antenna pointing control consist of a mechanical pointing control accuracy of three arc-minutes and an electronic phased array pointing accuracy of three arc-seconds. Results, based on the factors considered in the current analysis, show that the three arc-minute overall pointing control accuracy can be achieved in practice.

  20. Magnetic Flux Compression in Plasmas

    NASA Astrophysics Data System (ADS)

    Velikovich, A. L.

    2012-10-01

    Magnetic flux compression (MFC) as a method for producing ultra-high pulsed magnetic fields originated in the 1950s with Sakharov et al. at Arzamas in the USSR (now VNIIEF, Russia) and Fowler et al. at Los Alamos in the US. The highest magnetic field produced by an explosively driven MFC generator, 28 MG, was reported by Boyko et al. of VNIIEF. The idea of using MFC to increase the magnetic field in a magnetically confined plasma to 3-10 MG, relaxing the strict requirements on the plasma density and Lawson time, gave rise to the research area known as MTF in the US and MAGO in Russia. To make a difference in ICF, a magnetic field of ~100 MG should be generated via MFC by a plasma liner as part of the capsule compression scenario on a laser or pulsed power facility. This approach was first suggested in the mid-1980s by Liberman and Velikovich in the USSR and by Felber in the US. It was not obvious from the start that it could work at all, given that so many mechanisms exist for anomalously fast penetration of magnetic field through plasma. And yet many experiments stimulated by this proposal since 1986, mostly using pulsed-power drivers, demonstrated reasonably good flux compression up to ~42 MG, although diagnostics of magnetic fields of such magnitude in HED plasmas is still problematic. New interest in MFC in plasmas emerged with the advancement of new drivers, diagnostic methods, and simulation tools. Experiments on MFC in a deuterium plasma filling a cylindrical plastic liner imploded by OMEGA laser beams, led by Knauer, Betti et al. at LLE, produced peak fields of 36 MG. The novel MagLIF approach to low-cost, high-efficiency ICF pursued by Herrmann, Slutz, Vesey et al. at Sandia involves pulsed-power-driven MFC to a peak field of ~130 MG in a DT plasma. A review of the progress, current status, and future prospects of MFC in plasmas is presented.

  1. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two, as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight while maximizing its contribution to fighter safety.
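The error statistics of interest, the magnitude and distribution of compressed-minus-original elevations, can be computed as below. Coarse quantization stands in for the actual binary-tree tip-tilt encoding, and the terrain is synthetic; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for a DTED tile: a random-walk terrain, in meters.
original = np.cumsum(rng.standard_normal((64, 64)), axis=0) * 10.0

step = 5.0                                     # quantization step, meters
compressed = np.round(original / step) * step  # stand-in lossy encoding

err = compressed - original
stats = {
    "max_abs_m": float(np.max(np.abs(err))),       # worst-case elevation error
    "mean_m": float(np.mean(err)),                 # bias
    "rms_m": float(np.sqrt(np.mean(err**2))),      # typical magnitude
    "p99_abs_m": float(np.percentile(np.abs(err), 99)),
}
```

Tabulating these statistics per tile, and against factors such as topography and sampling density, is what lets the compression be tuned so recovery maneuvers neither fire spuriously nor too late.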

  2. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly and prevailing throughout various technological fields, and such technology should respond to the need for improved quality in e-learning systems. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene: recognizing a lecturer and a lecture stick by pattern recognition techniques, the system deletes the low-importance figure of the lecturer and displays only the end point of the lecture stick. This produces highly compressed lecture video files suitable for Internet distribution. We compare this technique with other simple methods, such as lower frame-rate video files and ordinary MPEG files. The experimental results show that the proposed compression processing system is much more effective than the others.

  3. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient, and compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. In 1993 the FBI therefore chose a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
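The abstract's storage and transmission figures can be checked with simple arithmetic, taking a card as 10 megabytes:

```python
# 200 million cards at ~10 MB each, and one card over a 9600-baud line.
card_bytes = 10 * 1024 * 1024             # ~10 MB per digitized card
archive_bytes = 200_000_000 * card_bytes
archive_tb = archive_bytes / 1024**4      # roughly 2000 terabytes, as stated

seconds = card_bytes * 8 / 9600           # 9600 bits per second
hours = seconds / 3600                    # about 2.4 h, i.e. roughly 3 hours

wsq_ratio = 20                            # WSQ archival-quality target
compressed_card_mb = 10 / wsq_ratio       # 0.5 MB per card at 20:1
```

At 20:1 the same line carries a card in a few minutes instead of hours, which is why a near-lossless 2:1 scheme was never going to be enough.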

  4. Merging of the Dirac points in electronic artificial graphene

    NASA Astrophysics Data System (ADS)

    Feilhauer, J.; Apel, W.; Schweitzer, L.

    2015-12-01

    Theory predicts that graphene under uniaxial compressive strain in an armchair direction should undergo a topological phase transition from a semimetal into an insulator. Due to the change of the hopping integrals under compression, both Dirac points shift away from the corners of the Brillouin zone towards each other. For sufficiently large strain, the Dirac points merge and an energy gap appears. However, such a topological phase transition has not yet been observed in normal graphene (due to its large stiffness) neither in any other electronic system. We show numerically and analytically that such a merging of the Dirac points can be observed in electronic artificial graphene created from a two-dimensional electron gas by application of a triangular lattice of repulsive antidots. Here, the effect of strain is modeled by tuning the distance between the repulsive potentials along the armchair direction. Our results show that the merging of the Dirac points should be observable in a recent experiment with molecular graphene.
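The merging described is conventionally modeled with an anisotropic honeycomb tight-binding Hamiltonian; the notation below is the textbook form of that model, not taken from the paper itself:

```latex
% Nearest-neighbour hopping t on the two zigzag bonds (vectors
% \boldsymbol{\delta}_1, \boldsymbol{\delta}_2) and t' on the armchair bond
% (\boldsymbol{\delta}_3), with t' growing under armchair compression:
E(\mathbf{k}) \;=\; \pm\,\bigl|\, t\, e^{i\mathbf{k}\cdot\boldsymbol{\delta}_1}
  \;+\; t\, e^{i\mathbf{k}\cdot\boldsymbol{\delta}_2}
  \;+\; t'\, e^{i\mathbf{k}\cdot\boldsymbol{\delta}_3} \,\bigr|
% The Dirac points are the zeros of E(\mathbf{k}); as t'/t increases they
% move toward each other and merge at the critical anisotropy t' = 2t,
% beyond which |E| > 0 everywhere and a gap opens.
```

In the artificial-graphene realization, tuning the antidot spacing along the armchair direction plays the role of tuning $t'/t$ toward the critical value.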

  5. Saliency-aware video compression.

    PubMed

    Hadizadeh, Hadi; Bajić, Ivan V

    2014-01-01

    In region-of-interest (ROI)-based video coding, ROI parts of the frame are encoded with higher quality than non-ROI parts. At low bit rates, such encoding may produce attention-grabbing coding artifacts, which may draw the viewer's attention away from the ROI, thereby degrading visual quality. In this paper, we present a saliency-aware video compression method for ROI-based video coding. The proposed method aims at reducing salient coding artifacts in non-ROI parts of the frame in order to keep the viewer's attention on the ROI. Further, the method allows saliency to increase in high-quality parts of the frame, and to decrease in non-ROI parts. Experimental results indicate that the proposed method improves the visual quality of encoded video relative to conventional rate-distortion-optimized video coding, as well as to two state-of-the-art perceptual video coding methods.

  6. High energy femtosecond pulse compression

    NASA Astrophysics Data System (ADS)

    Lassonde, Philippe; Mironov, Sergey; Fourmaux, Sylvain; Payeur, Stéphane; Khazanov, Efim; Sergeev, Alexander; Kieffer, Jean-Claude; Mourou, Gerard

    2016-07-01

    An original method for retrieving the Kerr nonlinear index was proposed and implemented for TF12 heavy flint glass. A defocusing lens made of this highly nonlinear glass was then used to generate an almost constant spectral broadening across a Gaussian beam profile. The lens was designed with spherical curvatures chosen to match the laser beam profile, such that the product of thickness and intensity is constant. This solid-state optic, in combination with chirped mirrors, was used to decrease the pulse duration at the output of a terawatt-class femtosecond laser. We demonstrated compression of a 33 fs pulse to 16 fs with 170 mJ of energy.

  7. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-07

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second.
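Recovering a scene from under-sampled incoherent projections can be sketched with orthogonal matching pursuit on random ±1 patterns. This is a generic compressed-sensing recovery on a toy sparse scene, not the authors' reconstruction code:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the residual...
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # ...then re-fit all chosen columns by least squares.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x = np.zeros(A.shape[1])
    x[support] = sol
    return x

rng = np.random.default_rng(7)
n, m = 256, 80                        # scene pixels, under-sampled measurements
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random incoherent patterns

x_true = np.zeros(n)                  # a sparse "scene": two bright returns
x_true[17], x_true[201] = 5.0, 4.0

y = A @ x_true                        # m << n linear projections (one detector)
x_hat = omp(A, y, k=2)
```

The single-pixel idea is exactly this measurement model: each entry of `y` is one photodetector reading under one projection pattern, and the sparse scene is recovered from far fewer readings than pixels.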

  8. The inviscid compressible Goertler problem

    NASA Technical Reports Server (NTRS)

    Dando, Andrew; Seddougui, Sharon O.

    1991-01-01

    The growth rate is studied of Goertler vortices in a compressible flow in the inviscid limit of large Goertler number. Numerical solutions are obtained for O(1) wavenumbers. The further limits of large Mach number and of large wavenumber with O(1) Mach number are considered. It is shown that two different types of disturbance modes can appear in this problem. The first is a wall layer mode, so named as it has its eigenfunctions trapped in a thin layer at the wall; the second has its eigenfunctions trapped in a thin layer away from the wall and is termed a trapped layer mode for large wavenumbers and an adjustment layer mode for large Mach numbers, since then this mode has its eigenfunctions concentrated in the temperature adjustment layer. The near crossing of the modes which occurs in each of the limits mentioned is investigated.

  9. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1996-05-01

    Liquid hydrazine (N2H4) is a propellant used for aerospace propulsion and power systems. Because the propellant modules can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted to investigate the shock detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. © 1996 American Institute of Physics.

  10. Shock compression of liquid hydrazine

    SciTech Connect

    Garcia, B.O.; Chavez, D.J.

    1995-01-01

    Liquid hydrazine (N2H4) is a propellant used by the Air Force and NASA for aerospace propulsion and power systems. Because the propellant modules that contain the hydrazine can be subject to debris impacts during their use, the shock states that can occur in the hydrazine need to be characterized to safely predict its response. Several shock compression experiments have been conducted in an attempt to investigate the detonability of liquid hydrazine; however, the experiments' results disagree. Therefore, in this study, we reproduced each experiment numerically to evaluate in detail the shock wave profiles generated in the liquid hydrazine. This paper presents the results of each numerical simulation and compares the results to those obtained in experiment. We also present the methodology of our approach, which includes chemical kinetic experiments, chemical equilibrium calculations, and characterization of the equation of state of liquid hydrazine.

  11. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, Richard W.; Hrubesh, Lawrence W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m³ (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization.

  12. Compression molding of aerogel microspheres

    DOEpatents

    Pekala, R.W.; Hrubesh, L.W.

    1998-03-24

    An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50-800 kg/m³ (0.05-0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

  13. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56 > 0), and 3) compensating for aberration by using nonlinear magnets in the compressor beam line.

  14. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  15. Compressed natural gas (CNG) measurement

    SciTech Connect

    Husain, Z.D.; Goodson, F.D.

    1995-12-01

    The increased level of environmental awareness has raised concerns about pollution. One area of high attention is the internal combustion engine. The internal combustion engine in and of itself is not a major pollution threat; however, the vast number of motor vehicles in use release large quantities of pollutants. Recent technological advances in ignition and engine controls, coupled with unleaded fuels and catalytic converters, have reduced vehicular emissions significantly. Alternate fuels have the potential to produce even greater reductions in emissions. The Natural Gas Vehicle (NGV) has been a significant alternative toward the goal of cleaner combustion. Of the many alternative fuels under investigation, compressed natural gas (CNG) has demonstrated the lowest levels of emission. The only vehicle certified by the State of California as an Ultra Low Emission Vehicle (ULEV) was powered by CNG. The California emissions tests of the ULEV-CNG vehicle revealed the following concentrations: non-methane hydrocarbons, 0.005 grams/mile; carbon monoxide, 0.300 grams/mile; nitrogen oxides, 0.040 grams/mile. Unfortunately, CNG vehicles will not gain significant popularity until compressed natural gas is readily available in convenient locations in urban areas and in proximity to the Interstate highway system. Approximately 150,000 gasoline filling stations exist in the United States, while the number of CNG stations is about 1000, and many of those are limited to fleet service only. Discussion in this paper concentrates on CNG flow measurement for fuel dispensers. Since regulatory changes and market demands affect flow metering and dispenser station design, those aspects are discussed. The CNG industry faces a number of challenges.

  16. Survey of data compression techniques

    SciTech Connect

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  18. Laser observations of the moon: Normal points for 1973

    NASA Technical Reports Server (NTRS)

    Mulholland, J. D.; Shelus, P. J.; Silverberg, E. C.

    1975-01-01

    McDonald Observatory lunar laser ranging observations for 1973 are presented in the form of compressed normal points and amendments for the 1969-1972 data set are given. Observations of the reflector mounted on the Soviet roving vehicle Lunakhod 2 have also been included.

  20. 13. Detail, upper chord connection point on upstream side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Detail, upper chord connection point on upstream side of truss, showing connection of upper chord, laced vertical compression member, knee-braced strut, counters, and laterals. - Red Bank Creek Bridge, Spanning Red Bank Creek at Rawson Road, Red Bluff, Tehama County, CA

  1. CCA performance of a new source list/EZW hybrid compression algorithm

    NASA Astrophysics Data System (ADS)

    Huber, A. Kris; Budge, Scott E.; Moon, Todd K.; Bingham, Gail E.

    2001-11-01

    A new data compression algorithm for encoding astronomical source lists is presented. Two experiments in combined compression and analysis (CCA) are described, the first using simulated imagery based upon a tractable source list model, and the second using images from SPIRIT III, a spaceborne infrared sensor. A CCA system consisting of the source list compressor followed by a zerotree-wavelet residual encoder is compared to alternatives based on three other astronomical image compression algorithms. CCA performance is expressed in terms of image distortion along with relevant measures of point source detection and estimation quality. Some variations of performance with compression bit rate and point source flux are characterized. While most of the compression algorithms reduce high-frequency quantum noise at certain bit rates, conclusive evidence is not found that such denoising brings an improvement in point source detection or estimation performance of the CCA systems. The proposed algorithm is a top performer in every measure of CCA performance; the computational complexity is relatively high, however.

  2. Method of making a non-lead hollow point bullet

    DOEpatents

    Vaughn, Norman L.; Lowden, Richard A.

    2003-10-07

    The method of making a non-lead hollow point bullet has the steps of a) compressing an unsintered powdered metal composite core into a jacket, b) punching a hollow cavity tip portion into the core, c) seating an insert, the insert having a hollow point tip and a tail protrusion, on top of the core such that the tail protrusion couples with the hollow cavity tip portion, and d) swaging the open tip of the jacket.

  3. Recoil Experiments Using a Compressed Air Cannon

    ERIC Educational Resources Information Center

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  4. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  5. Hardware compression using common portions of data

    DOEpatents

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
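    The sampling-and-common-portion scheme from the abstract can be sketched in software (the patent describes a hardware mechanism; the function names and the prefix-based notion of "common portion" here are illustrative assumptions):

```python
# Software analogue of the patent's idea: sample some chunks, extract a
# shared portion (here: a common byte prefix), store only remainders.

def common_prefix(chunks):
    """Longest byte prefix shared by all sampled chunks."""
    shortest = min(chunks, key=len)
    for i, byte in enumerate(shortest):
        if any(c[i] != byte for c in chunks):
            return shortest[:i]
    return bytes(shortest)

def compress(chunks, sample_every=2):
    sample = chunks[::sample_every]            # sample some of the chunks
    common = common_prefix(sample)
    n = len(common)
    flags = [c.startswith(common) for c in chunks]
    remainders = [c[n:] if f else c for f, c in zip(flags, chunks)]
    return common, flags, remainders

def decompress(common, flags, remainders):
    return [common + r if f else r for f, r in zip(flags, remainders)]

chunks = [b"HDR:" + bytes([i]) * 4 for i in range(4)]
common, flags, remainders = compress(chunks)
assert decompress(common, flags, remainders) == chunks
```

    Chunks that do not share the sampled portion are flagged and stored whole, so the scheme degrades gracefully on heterogeneous data.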

  6. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than when simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
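    Much of the gain over whole-file gzip comes from compressing each column's bytes as one homogeneous stream instead of row-interleaving unlike columns. A minimal numpy/zlib illustration of that effect on synthetic data (a sketch of the principle, not the fpack implementation):

```python
# Compare gzip-style compression of a row-interleaved binary table against
# compressing each column separately, as the tiled-table convention does.
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# One highly repetitive column and one incompressible column.
table = np.zeros(n, dtype=[("name", "S4"), ("noise", "u4")])
table["name"] = b"STAR"
table["noise"] = rng.integers(0, 2**32, n, dtype=np.uint32)

whole = len(zlib.compress(table.tobytes()))            # interleaved rows
by_column = sum(len(zlib.compress(np.ascontiguousarray(table[f]).tobytes()))
                for f in table.dtype.names)            # one stream per column
assert by_column < whole
```

    The repetitive column collapses almost entirely once it is no longer interleaved with the incompressible one, which is the effect the paper measures on real mission tables.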

  7. Sudden Viscous Dissipation of Compressing Turbulence

    SciTech Connect

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-03-11

    Here we report that compression of turbulent plasma can amplify the turbulent kinetic energy, if the compression is fast compared to the viscous dissipation time of the turbulent eddies. A sudden viscous dissipation mechanism is demonstrated, whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, suggesting a new paradigm for fast ignition inertial fusion.

  8. Image Compression: Making Multimedia Publishing a Reality.

    ERIC Educational Resources Information Center

    Anson, Louisa

    1993-01-01

    Describes the new Fractal Transform technology, a method of compressing digital images to represent images as seen by the mind's eye. The International Organization for Standardization (ISO) standards for compressed image formats are discussed in relationship to Fractal Transform, and it is compared with Discrete Cosine Transform. Thirteen figures…

  9. Adaptive Encoding for Numerical Data Compression.

    ERIC Educational Resources Information Center

    Yokoo, Hidetoshi

    1994-01-01

    Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective to any word length. Its application to the lossless compression of gray-scale images…

  10. Aligned genomic data compression via improved modeling.

    PubMed

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2014-12-01

    With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. In accord, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low-coverage regime and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the Pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoney MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013.] does not apply for high coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications.
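    The benefit of "effective modeling" can be shown in miniature: on data with strong sequential correlation, an order-1 (context) model yields a lower coding bound than a context-free order-0 model. A toy sketch with a synthetic, read-like symbol stream (illustrative only; not the paper's model):

```python
# Entropy bound per symbol under an order-0 model vs. an order-1 context
# model H(Y|X) = H(X,Y) - H(X), on a "sticky" four-letter chain.
import math
import random
from collections import Counter

def h(counts):
    """Shannon entropy in bits/symbol of an empirical distribution."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
alphabet = "ACGT"
seq, cur = [], "A"
for _ in range(50_000):          # each symbol usually repeats the previous one
    if random.random() >= 0.9:
        cur = random.choice(alphabet)
    seq.append(cur)

h0 = h(Counter(seq))                              # order-0: no model
h1 = h(Counter(zip(seq, seq[1:]))) - h(Counter(seq[:-1]))  # order-1 bound
assert h1 < h0   # the context model exposes the structure order-0 cannot see
```

    An arithmetic coder driven by the order-1 model would approach h1 bits/symbol, which is the sense in which better modeling directly buys compression ratio.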

  11. "In Situ" Generation of Compressed Inverted Files.

    ERIC Educational Resources Information Center

    Moffat, Alistair; Bell, Timothy A. H.

    1995-01-01

    Discussion of index construction for large text collections highlights a new indexing algorithm designed to create large compressed inverted indexes "in situ." Topics include a computational model, inversion, index compression, merging, experimental test results, effect on retrieval performance, memory restrictions, and dynamic…

  12. Code Compression Schemes for Embedded Processors

    NASA Astrophysics Data System (ADS)

    Horti, Deepa; Jamge, S. B.

    2010-11-01

    Code density is a major requirement in embedded system design since it not only reduces the need for the scarce resource memory but also implicitly improves further important design parameters like power consumption and performance. Within this paper we introduce a novel and efficient approach that belongs to both the statistical and the dictionary-based families of compression schemes.

  13. Spatial Compressive Sensing for Strain Data Reconstruction from Sparse Sensors

    DTIC Science & Technology

    2014-10-01

    the novel theory of compressive sensing and principles of continuum mechanics. Compressive sensing, also known as compressed sensing, refers to the...asserts that certain signals or images can be recovered from what was previously believed to be a highly incomplete measurement. Compressed sensing...matrix completion problem is quite similar to compressive sensing, as a similar heuristic approach, convex relaxation, is used to recover

  14. Magnetic Bunch Compression for a Compact Compton Source

    SciTech Connect

    Gamage, B.; Satogata, Todd J.

    2013-12-01

    A compact electron accelerator suitable for Compton source applications is in design at the Center for Accelerator Science at Old Dominion University and Jefferson Lab. Here we discuss two options for transverse magnetic bunch compression and final focus, each involving a 4-dipole chicane with M56 tunable over a range of 1.5-2.0 m and independent tuning of the final focus to an interaction point β* = 5 mm. One design has no net bending, while the other has net bending of 90 degrees and is suitable for compact corner placement.
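    In a linear model of such a chicane, a particle with relative momentum deviation δ slides longitudinally by M56·δ, so an energy chirp δ = -h·z cancels the incoming bunch length when M56 = 1/h (signs follow one common convention; the chirp value below is an illustrative assumption, chosen so M56 lands in the quoted 1.5-2.0 m range):

```python
# Toy linear bunch compression: z_f = z_i + M56 * delta, with a matched
# linear energy chirp delta = -h * z_i supplied upstream of the chicane.
import numpy as np

rng = np.random.default_rng(1)
z_i = rng.normal(0.0, 1e-3, 100_000)   # 1 mm RMS initial bunch length
h = 0.5                                 # chirp strength [1/m], illustrative
delta = -h * z_i                        # correlated momentum deviation
M56 = 1.0 / h                           # matched chicane: here 2.0 m
z_f = z_i + M56 * delta                 # cancels exactly in this linear model
```

    Real chicanes compress only down to the uncorrelated energy-spread floor and second-order (T566) terms; the fully compressed result here is an artifact of the idealized linear, perfectly chirped model.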

  15. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  16. Frequency Compression of Wideband Signals Using a Distributed Sampling Technique,

    DTIC Science & Technology

    1981-11-01

    length and finally a calculation of transmission line losses and their effects on the operation of the circuit. 4.3.2 Determination of W for a 50 Ω line...and εr = 4.7, fT = 1.8 × 10³ GHz. Hence, moding effects are negligible in the prototype circuit. 4.3.4 Determination of Sampling Interval (T_S) and Other...conducted to determine the sensitivity, 1 dB compression point, dynamic range, gain, and frequency response of the experimental circuit. Fig. 5.15a illustrates
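    The 1 dB compression point measured in this report is the drive level at which an amplifier's gain has fallen 1 dB below its small-signal value. A toy memoryless tanh limiter makes the definition concrete (the limiter model and the bisection search are illustrative assumptions, not the report's circuit):

```python
# Find the 1 dB compression point of a soft limiter v_out = tanh(v_in),
# whose small-signal gain is 0 dB.
import math

def gain_db(v):
    """Gain of the tanh limiter at drive level v, relative to small signal."""
    return 20 * math.log10(math.tanh(v) / v)

# Bisect for the drive level where the gain has compressed by exactly 1 dB.
lo, hi = 1e-6, 3.0
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if gain_db(mid) > -1.0:
        lo = mid            # less than 1 dB compressed: push the drive higher
    else:
        hi = mid
p1db_drive = lo
print(round(p1db_drive, 3))
```

    A bench measurement does the same search by sweeping input power and watching the measured gain fall away from its small-signal value.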

  17. Evidence for Fisher renormalization in the compressible phi4 model.

    PubMed

    Tröster, A

    2008-04-11

    We present novel Fourier Monte Carlo simulations of a compressible phi4-model on a simple-cubic lattice with linear-quadratic coupling of order parameter and strain, focusing on the detection of fluctuation-induced first-order transitions and deviations from standard critical behavior. The former is indeed observed in the constant stress ensemble and for auxetic systems at constant strain, while for regular isotropic systems at constant strain, we find strong evidence for Fisher-renormalized critical behavior and are led to predict the existence of a tricritical point.

  18. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing in composite structures. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflection spectrum is calculated using an improved transfer matrix algorithm. The K-means singular value decomposition sparse dictionary is trained. In the test, the spectrum can be obtained with a limited number of sample points, and the high-resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, this method performs better. The match rate between the recovered spectrum and the original spectrum is over 95%.
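    The reconstruction step (recovering a spectrum that is sparse in some basis from a few samples) can be sketched with a generic orthogonal matching pursuit in place of the paper's K-SVD dictionary; the dimensions, amplitudes, and random measurement matrix below are illustrative assumptions:

```python
# Recover a k-sparse vector from m < n random linear samples by OMP.
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: find k-sparse x with y ≈ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 128, 4                  # 256-point spectrum, 128 samples
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(2.0, 3.0, k) * rng.choice([-1.0, 1.0], k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sampling matrix
y = A @ x_true
x_hat = omp(A, y, k)
```

    With a basis in which the signal is genuinely sparse, the same machinery recovers the full-resolution spectrum from far fewer samples than a direct scan, which is the paper's central claim.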

  19. Insertion Profiles of 4 Headless Compression Screws

    PubMed Central

    Hart, Adam; Harvey, Edward J.; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A.

    2013-01-01

    Purpose In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. Methods The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Results The peak compression occurs at an insertion depth of −3.1 mm, −2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws, respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of −2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. Conclusions All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of −2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Clinical relevance Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws

  20. Quantum data compression of a qubit ensemble.

    PubMed

    Rozema, Lee A; Mahler, Dylan H; Hayat, Alex; Turner, Peter S; Steinberg, Aephraim M

    2014-10-17

    Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compressing three photonic qubits into two. This protocol sheds light on the subtle differences between quantum and classical information. Furthermore, since data compression stores all of the available information about the quantum state in fewer physical qubits, it could allow for a vast reduction in the amount of quantum memory required to store a quantum ensemble, making even today's limited quantum memories far more powerful than previously recognized.

  1. Postprocessing of Compressed Images via Sequential Denoising

    NASA Astrophysics Data System (ADS)

    Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja

    2016-07-01

    In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, which suggests solving general inverse problems via the Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC.

  2. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.
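    Progressive bit-plane coding is what lets a single stream serve both lossless and lossy use: planes are sent most-significant first, and truncating the stream bounds the reconstruction error by the planes dropped. A toy sign-magnitude coder illustrates this (a sketch of the principle, not the CCSDS codec):

```python
# Send integer coefficients as bit-planes, MSB plane first; decoding a
# truncated list of planes gives a coarser but bounded-error reconstruction.
import numpy as np

NBITS = 8

def encode(coeffs):
    signs = np.sign(coeffs)
    planes = [((np.abs(coeffs) >> b) & 1) for b in range(NBITS - 1, -1, -1)]
    return signs, planes

def decode(signs, planes):
    mag = np.zeros(signs.shape, dtype=np.int64)
    for i, plane in enumerate(planes):
        mag |= plane.astype(np.int64) << (NBITS - 1 - i)
    return signs * mag

c = np.array([-45, 7, 120, -3, 0, 88])
signs, planes = encode(c)
assert np.array_equal(decode(signs, planes), c)       # all planes: lossless
coarse = decode(signs, planes[:4])                    # truncated stream
assert np.max(np.abs(coarse - c)) < 2 ** (NBITS - 4)  # error from dropped planes
```

    Applied to wavelet coefficients, the same truncation point directly controls either the data volume or the fidelity, which is the user control the recommendation describes.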

  3. Stress analysis of shear/compression test

    SciTech Connect

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-06-01

    Stress analysis has been made of glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental setup were analyzed, parallel and series, in which the specimen is compressed by tilted jigs that enable the combined stresses to be applied. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two setups. In the parallel system the shear strength first increased with compressive stress and then decreased. In the series system, by contrast, the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses will be discussed.

  4. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  5. Image compression algorithm using wavelet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory

    2016-09-01

    Within the multi-resolution analysis, the study of the image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied image has been obtained. It is shown that the compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7 - 4.2, depending on the type of images. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
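    A one-level 2-D Haar transform with hard thresholding shows the mechanism being studied: the transform is exactly invertible, and discarding small detail coefficients is what buys the compression. A minimal numpy sketch (the threshold and test image are illustrative):

```python
# One-level 2-D Haar transform, thresholding of detail bands, and inverse.
import numpy as np

def haar2d(x):
    a = (x[0::2, :] + x[1::2, :]) / 2.0     # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0     # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test "image"
bands = haar2d(img)
# Compression step: zero the small detail coefficients, keep the rest.
kept = [np.where(np.abs(b) > 0.6, b, 0.0) for b in bands]
rec = ihaar2d(*kept)
err = np.max(np.abs(rec - img))                      # bounded by the threshold
```

    On smooth images most detail coefficients are small, so nearly all the bit budget can go to the low-pass band; repeating the transform on the LL band gives the multi-resolution hierarchy the abstract refers to.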

  6. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
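    The first application's normal-shock solution reduces to the standard perfect-gas relations; a sketch of the kind of calculation such a calculator performs (γ = 1.4 assumed):

```python
# Normal-shock relations for a calorically perfect gas: downstream Mach
# number and static pressure ratio as functions of the upstream Mach number.
import math

GAMMA = 1.4

def normal_shock(M1, g=GAMMA):
    """Return (M2, p2/p1) across a normal shock with upstream Mach M1 > 1."""
    M2 = math.sqrt((1 + 0.5 * (g - 1) * M1**2) / (g * M1**2 - 0.5 * (g - 1)))
    p_ratio = 1 + 2 * g / (g + 1) * (M1**2 - 1)
    return M2, p_ratio

M2, pr = normal_shock(2.0)
print(round(M2, 4), round(pr, 4))  # 0.5774 4.5
```

    The oblique-shock and ramp cases in the second and third applications chain the same relations through the component of Mach number normal to the shock.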

  7. Compressed bitmap indices for efficient query processing

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2001-09-30

    Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by reducing compression effectiveness and improving operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster and use only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
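    The speed claim rests on operating directly on the compressed bitmaps. A run-length sketch of the idea, much simpler than the paper's word-aligned schemes: sparse bitmaps compress to short run lists, and a logical AND merges the run lists without expanding them:

```python
# Run-length encode two bitmaps and AND them by merging runs directly.
from itertools import groupby

def rle(bits):
    return [(b, len(list(g))) for b, g in groupby(bits)]

def rle_and(r1, r2):
    """AND two equal-length run-length encoded bitmaps without expansion."""
    out, i, j = [], 0, 0
    (b1, n1), (b2, n2) = r1[0], r2[0]
    while True:
        n = min(n1, n2)
        b = b1 & b2
        if out and out[-1][0] == b:
            out[-1] = (b, out[-1][1] + n)     # extend the current run
        else:
            out.append((b, n))
        n1, n2 = n1 - n, n2 - n
        if n1 == 0:
            i += 1
            if i == len(r1):
                break
            b1, n1 = r1[i]
        if n2 == 0:
            j += 1
            if j == len(r2):
                break
            b2, n2 = r2[j]
    return out

def expand(runs):
    return [b for b, n in runs for _ in range(n)]

x = [0] * 500 + [1] * 20 + [0] * 480
y = [0] * 490 + [1] * 40 + [0] * 470
assert expand(rle_and(rle(x), rle(y))) == [a & b for a, b in zip(x, y)]
```

    Word-aligned codes like the ones studied in the paper keep the runs in machine-word chunks so the merge runs at bitwise-AND speed; that alignment is exactly the "reduced compression effectiveness" traded for operation speed.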

  8. Volumetric Video Compression for Interactive Playback★

    PubMed Central

    Sohn, Bong-Soo; Bajaj, Chandrajit; Siddavanahalli, Vinay

    2009-01-01

    We develop a volumetric video system which supports interactive browsing of compressed time-varying volumetric features (significant isosurfaces and interval volumes). Since the size of even one volumetric frame in a time-varying 3D data set is very large, transmission and on-line reconstruction are the main bottlenecks for interactive remote visualization of time-varying volume and surface data. We describe a compression scheme for encoding time-varying volumetric features in a unified way, which allows for on-line reconstruction and rendering. To increase the run-time decompression speed and compression ratio, we decompose the volume into small blocks and encode only the significant blocks that contribute to the isosurfaces and interval volumes. The results show that our compression scheme achieves high compression ratio with fast reconstruction, which is effective for client-side rendering of time-varying volumetric features. PMID:20072724

  9. Application of joint orthogonal bases in compressive sensing ghost image

    NASA Astrophysics Data System (ADS)

    Fan, Xiang; Chen, Yi; Cheng, Zheng-dong; Liang, Zheng-yu; Zhu, Bin

    2016-11-01

    Sparse decomposition is one of the core issues of compressive sensing ghost imaging. At this stage, traditional bases such as the discrete Fourier transform and the discrete cosine transform still suffer from poor sparsity and low reconstruction accuracy. To solve these problems, the joint orthogonal bases transform is proposed to optimize ghost imaging. First, we introduce the principle of compressive sensing ghost imaging and point out that sparsity determines the minimum sample data required for imaging. Then, we analyze the development and principle of joint orthogonal bases in detail and find that they can reach the same identification effect as other methods with fewer nonzero coefficients; the joint orthogonal bases transform is therefore able to provide the sparsest representation. Finally, an experimental setup is built to verify the simulation results. The experimental results indicate that, for the same sample data, the PSNR obtained with joint orthogonal bases is much higher than with traditional methods. Therefore, the joint orthogonal bases transform can realize better imaging quality with less sample data, satisfying the system requirements of convenience and speed in ghost imaging.

  10. Avalanches in compressed porous SiO(2)-based materials.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Baró, Jordi; Illa, Xavier; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-08-01

    The failure dynamics in SiO(2)-based porous materials under compression, namely the synthetic glass Gelsil and three natural sandstones, has been studied for slowly increasing compressive uniaxial stress with rates between 0.2 and 2.8 kPa/s. The measured collapse dynamics is similar to that of Vycor, another synthetic porous SiO(2) glass similar to Gelsil but with a different porous mesostructure. Compression occurs by jerks of strain release and a major collapse at the failure point. The acoustic emission and shrinking of the samples during jerks are measured and analyzed. The energies of acoustic emission events, their durations, and the waiting times between events show that the failure process follows avalanche criticality with power law statistics over ca. 4 decades, with a power law exponent ε ≃ 1.4 for the energy distribution. This exponent is consistent with the mean-field value for the collapse of granular media. Besides the absence of length, energy, and time scales, we demonstrate the existence of aftershock correlations during the failure process.
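    An exponent like ε ≃ 1.4 is typically estimated by maximum likelihood on the event energies; for a continuous power law above a cutoff the estimator is ε̂ = 1 + n / Σ ln(E_i / E_min). A sketch on synthetic avalanche energies (standard inverse-transform sampling, not the paper's data):

```python
# Draw energies from P(E) ~ E^-eps for E >= e_min, then recover the
# exponent with the continuous maximum-likelihood estimator.
import numpy as np

rng = np.random.default_rng(0)
eps_true, e_min, n = 1.4, 1.0, 200_000
u = rng.uniform(size=n)
energies = e_min * (1 - u) ** (-1 / (eps_true - 1))  # inverse-transform sample

eps_hat = 1 + n / np.sum(np.log(energies / e_min))   # MLE for the exponent
```

    On real acoustic-emission data the main work is choosing E_min and checking that the power law actually holds over the claimed decades, rather than evaluating the closed-form estimator itself.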

  11. Effects of turbulence compressibility and unsteadiness in compression corner flow

    NASA Technical Reports Server (NTRS)

    Brankovic, A.; Zeman, O.

    1994-01-01

    The structure of the separated flow region over a 20 degree compression corner at a free-stream Mach number of 2.84 is investigated computationally using a Reynolds averaged Navier Stokes (R.A.N.S.) solver and κ-ε model. At this Mach number and ramp angle, a steady-state recirculation region of order δ0 is observed, with onset of a 'plateau' in the wall pressure distribution near the corner. At lower ramp angles, separation is negligible, while at an angle of 24 degrees, separation regions of length 2δ0 are expected. Of interest here is the response of the mathematical model to inclusion of the pressure dilatation term for turbulent kinetic energy. Compared with the experimental data of Smits and Muck (1987), steady-state computations show improvement when the pressure dilatation term is included. Unsteady computations, using both unforced and then forced inlet conditions, did not predict the oscillatory motion of the separation bubble as observed in laboratory experiments. An analysis of the separation bubble oscillation and the turbulent boundary layer (T.B.L.) frequencies for this flow suggests that the bubble oscillations are of nearly the same order as the turbulent frequencies, and therefore difficult for the model to separate and resolve.

  12. An integrated circuit floating point accumulator

    NASA Technical Reports Server (NTRS)

    Goldsmith, T. C.

    1977-01-01

    Goddard Space Flight Center has developed a large scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission, using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.
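    The 19-bit-to-8-bit case can be modeled as a tiny floating-point format, for example a 4-bit exponent and a 4-bit mantissa; that split is an assumption made here for illustration, since the abstract does not give the on-chip format:

```python
# Hypothetical model of floating-point counter compression: keep the top
# 4 significant bits of a 19-bit count plus a 4-bit shift (exponent).

def fp_encode(count):
    """Compress a 19-bit count into one byte: (exponent << 4) | mantissa."""
    assert 0 <= count < 1 << 19
    exp = max(count.bit_length() - 4, 0)   # exponent fits 4 bits (max 15)
    return (exp << 4) | (count >> exp)     # mantissa: top 4 significant bits

def fp_decode(byte):
    exp, mantissa = byte >> 4, byte & 0xF
    return mantissa << exp

for c in [0, 13, 255, 4095, (1 << 19) - 1]:
    lossy = fp_decode(fp_encode(c))
    # Truncation error is bounded by the dropped low-order bits.
    assert lossy <= c < lossy + (1 << max(c.bit_length() - 4, 0))
```

    The relative error stays below about 6 percent at every count magnitude, which is the usual rationale for pseudo-logarithmic compression of particle-counter telemetry.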

  13. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality contr

  14. A novel DDS using nonlinear ROM addressing with improved compression ratio and quantization noise.

    PubMed

    Chimakurthy, Lakshmi S Jyothi; Ghosh, Malinky; Dai, Fa Foster; Jaeger, Richard C

    2006-02-01

    This paper presents a novel direct digital frequency synthesis (DDFS) ROM compression technique based on two properties of a sine function: (a) a piecewise-linear technique to approximate the sinusoid, and (b) the variation in the slope of the sinusoid at different phase angles. In the proposed DDFS architecture, the ROM stores only a few of the sinusoidal values, and the interpolation points between the successive stored values are calculated using linear and nonlinear addressing schemes. The nonlinear addressing scheme adaptively varies the number of interpolation points as the slope of the sinusoid changes, leading to a greatly reduced ROM size. The proposed architecture achieves a high compression ratio with a spurious response comparable to that of recent ROM compression techniques. To validate the proposed DDS architecture, the linear, nonlinear, and conventional DDS ROM architectures were implemented in a Xilinx Spartan II FPGA and their spurious performances were compared.
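
    A minimal sketch of the piecewise-linear half of the idea: store a small quarter-wave sine table (the DDFS ROM) and linearly interpolate between stored samples. The table size and phase grid below are arbitrary illustration choices; the paper's nonlinear, slope-adaptive addressing is not reproduced here.

```python
import math

def make_table(n):
    # ROM: n + 1 stored samples of a quarter sine wave
    return [math.sin(math.pi / 2 * i / n) for i in range(n + 1)]

def sin_interp(table, phase):
    """Approximate sin(phase), phase in [0, pi/2], by linear
    interpolation between adjacent stored ROM samples."""
    n = len(table) - 1
    t = phase / (math.pi / 2) * n
    i = min(int(t), n - 1)
    frac = t - i
    return table[i] + frac * (table[i + 1] - table[i])
```

Even with only 17 stored words, linear interpolation keeps the worst-case amplitude error near 1e-3, which is why interpolation lets the ROM shrink so drastically compared with a direct lookup table.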

  15. Comparing Point Clouds

    DTIC Science & Technology

    2004-04-01

    Point clouds are one of the most primitive and fundamental surface representations. A popular source of point clouds are three-dimensional shape acquisition devices such as laser range scanners. Another important field where point clouds are found is in the representation of high-dimensional data. A framework for comparing manifolds given by point clouds is presented in this paper; the underlying theory is based on Gromov-Hausdorff distances.

  16. Point-to-Point Multicast Communications Protocol

    NASA Technical Reports Server (NTRS)

    Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.

    1987-01-01

    This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.

  17. Trabecular bone microdamage and microstructural stresses under uniaxial compression.

    PubMed

    Nagaraja, Srinidhi; Couse, Tracey L; Guldberg, Robert E

    2005-04-01

    The balance between local remodeling and accumulation of trabecular bone microdamage is believed to play an important role in the maintenance of skeletal integrity. However, the local mechanical parameters associated with microdamage initiation are not well understood. Using histological damage labeling, micro-CT imaging, and image-based finite element analysis, regions of trabecular bone microdamage were detected and registered to estimated microstructural von Mises effective stresses and strains, maximum principal stresses and strains, and strain energy density (SED). Bovine tibial trabecular bone cores underwent a stepwise uniaxial compression routine in which specimens were micro-CT imaged following each compression step. The results indicate that the mode of trabecular failure observed by micro-CT imaging agreed well with the polarity and distribution of stresses within an individual trabecula. Analysis of on-axis subsections within specimens provided significant positive relationships between microdamage and each estimated tissue stress, strain and SED parameter. In a more localized analysis, individual microdamaged and undamaged trabeculae were extracted from specimens loaded within the elastic region and to the apparent yield point. As expected, damaged trabeculae in both groups possessed significantly higher local stresses and strains than undamaged trabeculae. The results also indicated that microdamage initiation occurred prior to apparent yield at local principal stresses in the range of 88-121 MPa for compression and 35-43 MPa for tension and local principal strains of 0.46-0.63% in compression and 0.18-0.24% in tension. These data provide an important step towards understanding factors contributing to microdamage initiation and establishing local failure criteria for normal and diseased trabecular bone.

  18. MAXAD distortion minimization for wavelet compression of remote sensing data

    NASA Astrophysics Data System (ADS)

    Alecu, Alin; Munteanu, Adrian; Schelkens, Peter; Cornelis, Jan P.; Dewitte, Steven

    2001-12-01

    In the context of compressing high-resolution multi-spectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits under lossy compression. Though existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimization of the global PSNR error is concerned, they fail to guarantee a maximum local error. We therefore introduce a new image compression scheme that guarantees a MAXAD distortion, defined as the maximum absolute difference between original and reconstructed pixel values. In terms of the Lagrangian optimization problem, this translates into minimizing the rate subject to the MAXAD distortion. Our approach thus uses the l-infinity distortion measure, applied to the lifting-scheme implementation of the 9-7 floating-point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the case of the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filter synthesized via lifting. The experimental assessment of our codec shows that the technique provides excellent results in applications such as remote sensing, in which reconstruction of image calibration characteristics within a tolerable local error (MAXAD) is of crucial importance, compared to merely obtaining an acceptable global error (PSNR) as with existing quantizer design techniques.
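
    The core MAXAD guarantee can be illustrated with the simplest possible case, a uniform scalar quantizer in the pixel domain: with step 2·MAXAD+1 and midpoint reconstruction, the absolute error never exceeds MAXAD. This is only the elementary bound; the paper derives rate-optimal quantizers per wavelet subband.

```python
def quantize(x, maxad):
    """Map an integer pixel to a bin index; bin width 2*maxad + 1."""
    return x // (2 * maxad + 1)

def dequantize(q, maxad):
    """Midpoint reconstruction: error is bounded by maxad exactly."""
    return q * (2 * maxad + 1) + maxad
```

Every pixel in a bin lies within `maxad` of the bin midpoint, so the l-infinity error bound holds for any input, positive or negative.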

  19. Weakly relativistic and ponderomotive effects on self-focusing and self-compression of laser pulses in near critical plasmas

    SciTech Connect

    Bokaei, B.; Niknam, A. R.

    2014-10-15

    The spatiotemporal dynamics of high-power laser pulses in near-critical plasmas are studied, taking into account the effects of relativistic and ponderomotive nonlinearities. First, within a one-dimensional analysis, the effects of initial parameters such as laser intensity, plasma density, and plasma electron temperature on the self-compression mechanism are discussed. The results illustrate that the ponderomotive nonlinearity obstructs relativistic self-compression above a certain intensity value. Moreover, the results indicate the existence of a turning-point temperature at which the compression process is strongest. Next, the three-dimensional propagation of the laser pulse is investigated by coupling the self-focusing equation with the self-compression one. It is shown that, in contrast to the case in which only the relativistic nonlinearity is considered, in the presence of the ponderomotive nonlinearity the self-compression mechanism obstructs self-focusing and leads to an increase of the laser spot size.

  20. 46 CFR 194.20-17 - Compressed gases.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Compressed gases. (a) Nonflammable compressed gases (excluding oxygen) may be securely stowed in the... chemical storeroom. (b) Flammable compressed gases and oxygen shall be stowed in accordance with 49...

  1. Static Compression of Tetramethylammonium Borohydride

    SciTech Connect

    Dalton, Douglas Allen; Somayazulu, M.; Goncharov, Alexander F.; Hemley, Russell J.

    2011-11-15

    Raman spectroscopy and synchrotron X-ray diffraction are used to examine the high-pressure behavior of tetramethylammonium borohydride (TMAB) to 40 GPa at room temperature. The measurements reveal weak pressure-induced structural transitions around 5 and 20 GPa. Rietveld analysis and Le Bail fits of the powder diffraction data based on known structures of tetramethylammonium salts indicate that the transitions are mediated by orientational ordering of the BH₄⁻ tetrahedra followed by tilting of the (CH₃)₄N⁺ groups. X-ray diffraction patterns obtained during pressure release suggest reversibility with a degree of hysteresis. Changes in the Raman spectrum confirm that these transitions are not accompanied by bonding changes between the two ionic species. At ambient conditions, TMAB does not possess dihydrogen bonding, and Raman data confirm that this feature is not activated upon compression. The pressure-volume equation of state obtained from the diffraction data gives a bulk modulus [K₀ = 5.9(6) GPa, K′₀ = 9.6(4)] slightly lower than that observed for ammonia borane. Raman spectra obtained over the entire pressure range (spanning over 40% densification) indicate that the intramolecular vibrational modes are largely coupled.
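
    Assuming a third-order Birch-Murnaghan form (the abstract does not name the equation-of-state form actually fitted), the reported K0 = 5.9 GPa and K0' = 9.6 can be turned into a pressure-volume curve as a sketch:

```python
def birch_murnaghan(v_over_v0, k0, k0p):
    """Third-order Birch-Murnaghan EOS: pressure (same units as k0)
    at a given compression ratio V/V0."""
    x = v_over_v0 ** (-1.0 / 3.0)          # (V0/V)^(1/3)
    return 1.5 * k0 * (x ** 7 - x ** 5) * (1.0 + 0.75 * (k0p - 4.0) * (x ** 2 - 1.0))
```

The curve passes through zero pressure at V = V0 and rises monotonically with compression, as an equation of state must; the soft K0 with a large K0' gives the rapid stiffening typical of molecular solids.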

  2. Compressive sensing for nuclear security.

    SciTech Connect

    Gestner, Brian Joseph

    2013-12-01

    Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Provided these designs prove feasible to realize in custom low-power analog integrated circuits, this research will enable the incorporation of SNM detection into wireless sensor networks.
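
    A common PSD statistic (not necessarily the one used in this work) is the tail-to-total charge ratio: neutron pulses in organic scintillators carry a larger fraction of their light in the slow tail than gamma-ray pulses do. The pulse shapes below are synthetic, purely for illustration.

```python
import math

def psd_ratio(pulse, tail_start):
    """Tail-to-total charge ratio over a digitized pulse."""
    total = sum(pulse)
    return sum(pulse[tail_start:]) / total if total else 0.0

# Synthetic pulses (illustrative decay constants, not measured data):
gamma = [math.exp(-t / 10) for t in range(100)]
neutron = [0.7 * math.exp(-t / 10) + 0.3 * math.exp(-t / 60) for t in range(100)]
```

Thresholding this single scalar separates the two pulse classes, which is why a smart digitizer only needs to preserve enough of the waveform to estimate it.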

  3. The compression pathway of quartz

    SciTech Connect

    Thompson, Richard M.; Downs, Robert T.; Dera, Przemyslaw

    2011-11-07

    The structure of quartz over the temperature domain (298 K, 1078 K) and pressure domain (0 GPa, 20.25 GPa) is compared to the following three hypothetical quartz crystals: (1) Ideal α-quartz with perfectly regular tetrahedra and the same volume and Si-O-Si angle as its observed equivalent (ideal β-quartz has its Si-O-Si angle fixed at 155.6°). (2) Model α-quartz with the same Si-O-Si angle and cell parameters as its observed equivalent, derived from ideal by altering the axial ratio. (3) BCC quartz with a perfectly body-centered cubic arrangement of oxygen anions and the same volume as its observed equivalent. Comparison of experimental data recorded in the literature for quartz with these hypothetical crystal structures shows that quartz becomes more ideal as temperature increases, more BCC as pressure increases, and that model quartz is a very good representation of observed quartz under all conditions. This is consistent with the hypothesis that quartz compresses through Si-O-Si angle-bending, which is resisted by anion-anion repulsion, resulting in increasing distortion of the c/a axial ratio from ideal as temperature decreases and/or pressure increases.

  4. Shock compression profiles in ceramics

    SciTech Connect

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al₂O₃, AlN, B₄C, SiC, Si₃N₄, TiB₂, WC and ZrO₂. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation of high-strength, brittle solids.

  5. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    ERIC Educational Resources Information Center

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  6. Stabilization of Rayleigh-Taylor instability in the presence of viscosity and compressibility: A critical analysis

    NASA Astrophysics Data System (ADS)

    Mitra, A.; Roychoudhury, R.; Khan, M.

    2016-02-01

    The stabilization of the Rayleigh-Taylor instability growth rate due to the combined effect of viscosity and compressibility has been studied. A detailed explanation of the observed results has been made from theoretical point of view. The numerical results have been compared qualitatively with those of Plesset and Whipple [Phys. Fluids 17, 1 (1974)] and Bernstein and Book [Phys. Fluids 26, 453 (1983)].

  7. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
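
    The adaptive-filtering idea behind the FL predictor can be sketched with a one-tap sign-sign LMS predictor: predict each sample from its predecessor with a weight that is nudged by the sign of the prediction error, and encode only the residuals. The real CCSDS 123 predictor works across multiple spectral bands in integer arithmetic, so this is only an illustration of the principle.

```python
import math

def adaptive_residuals(samples, mu=0.005):
    """One-tap sign-sign LMS predictor: on smooth data the residuals
    are far smaller (hence cheaper to entropy-code) than the samples."""
    w, prev, res = 1.0, 0.0, []
    for v in samples:
        e = v - w * prev
        res.append(e)
        if prev:
            # nudge the weight in the direction that shrinks the error
            w += mu * (1.0 if e > 0 else -1.0 if e < 0 else 0.0) \
                    * (1.0 if prev > 0 else -1.0)
        prev = v
    return res
```

On a slowly varying signal the residual energy collapses, which is exactly the decorrelation step that makes the subsequent lossless entropy coding effective.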

  8. Fast compression implementation for hyperspectral sensor

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Yoshida, Jun; Ishida, Juro; Takada, Jun; Senda, Yuzo; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Ohgi, Nagamitsu

    2010-11-01

    Fast, small-footprint lossless image compressors have been developed for hyperspectral sensors on Earth-observation satellites. Since more than one hundred channels are required for hyperspectral sensors on optical observation satellites, a fast compression algorithm with a small-footprint implementation is essential to reduce encoder size and weight, yielding a lightweight, compact sensor system. The compression method must have low complexity in order to reduce the size, weight, power consumption, and fabrication cost of the sensor signal-processing unit. High coding efficiency and compression speed enlarge the capacity of the signal-compression channels, allowing sensor signal channels to be multiplexed into a reduced number of onboard compression channels. The employed method is based on FELICS, a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding, we applied two-dimensional interpolation prediction and adaptive Golomb-Rice coding, which enable a small footprint. The method supports progressive decompression using resolution scaling, while still delivering superior performance as measured by speed and complexity. The small-footprint circuitry is embedded into the hyperspectral sensor data formatter; as a consequence, a lossless compression function was added without additional size or weight.
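
    A minimal Golomb-Rice coder, to illustrate the entropy-coding stage. The parameterization assumed here (unary-coded quotient, a '0' separator, then a k-bit remainder, k >= 1) is one common convention; the paper's adaptive selection of k is not shown.

```python
def rice_encode(n, k):
    """Golomb-Rice code for a non-negative integer n (k >= 1)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    """Invert rice_encode: count the unary prefix, read k remainder bits."""
    q = bits.index('0')
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)
```

Small residuals map to short codewords, so the coder approaches the entropy of geometrically distributed prediction residuals while needing only shifts and masks in hardware.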

  9. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  10. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level-of-detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range-scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.
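
    The O(n) distance transform mentioned above can be illustrated in one dimension with the classic two-pass algorithm. This sketch computes an unsigned city-block distance to the nearest feature voxel; the paper's signed, three-dimensional version is more involved but rests on the same idea.

```python
def distance_transform_1d(binary):
    """Two-pass O(n) city-block distance to the nearest 1 in a 0/1 list."""
    n = len(binary)
    INF = n  # larger than any achievable distance
    d = [0 if b else INF for b in binary]
    for i in range(1, n):              # forward pass: nearest 1 to the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):     # backward pass: nearest 1 to the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d
```

Each voxel is visited exactly twice regardless of content, which is what makes the transform linear in the number of samples.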

  11. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
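
    Combinatorial (enumerative) coding can be sketched as ranking a fixed-weight binary word among all words of the same length and weight: the rank fits in ceil(log2 C(n, k)) bits, the entropy limit when the count of ones is known. C4's actual coder is more elaborate; this shows only the core idea.

```python
from math import comb

def enum_rank(bits):
    """Lexicographic rank of a binary sequence among all sequences
    with the same length and the same number of ones."""
    rank, ones = 0, sum(bits)
    n = len(bits)
    for i, b in enumerate(bits):
        if b:
            # every word with a 0 in this position (and the remaining
            # ones placed later) precedes this word lexicographically
            rank += comb(n - i - 1, ones)
            ones -= 1
    return rank
```

The ranking is computed with additions and binomial lookups only, which is why enumerative coding can match arithmetic coding's efficiency at closer to Huffman-coding speed.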

  12. Compression of Space for Low Visibility Probes.

    PubMed

    Born, Sabine; Krüger, Hannah M; Zimmermann, Eckart; Cavanagh, Patrick

    2016-01-01

    Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross et al., 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann et al., 2014a). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli.

  14. The Compressed Baryonic Matter Experiment at FAIR

    NASA Astrophysics Data System (ADS)

    Heuser, J. M.

    2011-04-01

    The Compressed Baryonic Matter (CBM) experiment is being planned at the international research centre FAIR, under construction next to the GSI laboratory in Darmstadt, Germany. Its physics programme addresses the QCD phase diagram in the region of highest net baryon densities. Of particular interest are the expected first-order phase transition from partonic to hadronic matter, ending in a critical point, and modifications of hadron properties in the dense medium as a signal of chiral symmetry restoration. Laid out as a fixed-target experiment at the synchrotrons SIS-100/SIS-300, providing magnetic bending power of 100 and 300 Tm, the CBM detector will record both proton-nucleus and nucleus-nucleus collisions at beam energies up to 45A GeV. Hadronic, leptonic and photonic observables have to be measured with large acceptance. The nuclear interaction rates will reach up to 10 MHz to measure extremely rare probes like charm near threshold. Two versions of the experiment are being studied, optimized for either electron-hadron or muon identification, combined with silicon detector based charged-particle tracking and micro-vertex detection. The research programme will start at SIS-100 with ion beams between 2 and 11A GeV, and protons up to energies of 29 GeV, using the HADES detector and an initial configuration of the CBM experiment. The CBM physics requires the development of novel detector systems, trigger and data acquisition concepts as well as innovative real-time reconstruction techniques. Progress with feasibility studies of the experiment and the development of its detector systems is discussed.

  15. Low Compression Tennis Balls and Skill Development

    PubMed Central

    Hammond, John; Smith, Christina

    2006-01-01

    Coaching aims to improve player performance, and coaches have a number of coaching methods and strategies they use to enhance this process. If new methods and ideas can be shown to improve player performance, they will change coaching practices and processes. This study investigated the effects of using low compression balls (LCBs) during coaching sessions with beginning tennis players. In order to assess the effectiveness of LCBs on skill learning, the study employed a quasi-experimental design supported by qualitative and descriptive data. Beginner tennis players took part in coaching sessions, one group using the LCBs while the other group used standard tennis balls. Both groups were administered a skills test at the beginning of a series of coaching sessions and again at the end. A statistical analysis of the difference between pre- and post-test results was carried out to determine the effect of LCBs on skill learning. Additional qualitative data were obtained through interviews, video capture, and performance analysis of typical coaching sessions for each group. The skill test results indicated no difference in skill learning between beginners using the LCBs and those using the standard balls. Coaches reported that the LCBs appeared to have a positive effect on technique development, including aspects of technique related to improving the power of the shot. Additional benefits were that rallies lasted longer and there were more opportunities for positive reinforcement. To provide a more conclusive answer on the effects of LCBs on skill learning and technique development, recommendations for future research were established, including a more controlled experimental environment and larger sample sizes across a longer period of time. Key Points LCBs may aid skill learning in tennis. Qualitative indicators were positive. Statistical evidence was not conclusive. Further studies of larger groups are recommended. PMID:24357952

  16. Analysis of kink band formation under compression

    NASA Technical Reports Server (NTRS)

    Hahn, H. Thomas

    1987-01-01

    The kink band formation in unidirectional composites under compression is analyzed in the present paper. The kinematics of kink band formation is described in terms of a deformation tensor. Equilibrium conditions are then applied to relate the compression load to the deformation of the fibers. Since the in situ shear behavior of the matrix resin is not known, an analysis-experiment correlation is used to find the shear failure strain in the kink band. The present analysis thus elucidates the mechanisms of compression failure and identifies its controlling parameters.

  17. Optimization of radar pulse compression processing

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kim, Woonkyung M.; Lee, Myung-Su

    1997-06-01

    We propose an optimal radar pulse compression technique and evaluate its performance in the presence of Doppler shift. The traditional pulse compression using Barker code increases the signal strength by transmitting a Barker coded long pulse. The received signal is then processed by an appropriate correlation processing. This Barker code radar pulse compression enhances the detection sensitivity while maintaining the range resolution of a single chip of the Barker coded long pulse. But unfortunately, the technique suffers from the addition of range sidelobes which sometimes will mask weak targets in the vicinity of larger targets. Our proposed optimal algorithm completely eliminates the sidelobes at the cost of additional processing.
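
    The Barker-code trade-off described above is easy to reproduce: matched filtering compresses the 13-chip pulse into a mainlobe of height 13 flanked by range sidelobes of unit magnitude, the sidelobes the paper's optimal processing aims to eliminate.

```python
# Barker-13: the classic length-13 biphase pulse-compression code
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(c):
    """Aperiodic autocorrelation: the matched-filter (correlation
    processing) output for a noiseless, Doppler-free echo."""
    n = len(c)
    return [sum(c[i] * c[i + k] for i in range(n - k)) for k in range(n)]
```

The 13:1 mainlobe-to-sidelobe ratio is what lets a long, low-peak-power pulse retain single-chip range resolution, while the residual unit sidelobes are what can mask weak neighboring targets.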

  18. Compression of Complex-Valued SAR Imagery

    SciTech Connect

    Eichel P.; Ives, R.W.

    1999-03-03

    Synthetic Aperture Radars are coherent imaging systems that produce complex-valued images of the ground. Because modern systems can generate large amounts of data, there is substantial interest in applying image compression techniques to these products. In this paper, we examine the properties of complex-valued SAR images relevant to the task of data compression. We advocate the use of transform-based compression methods but employ radically different quantization strategies than those commonly used for incoherent optical images. The theory, methodology, and examples are presented.

  19. Calculation methods for compressible turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1976-01-01

    Calculation procedures for non-reacting compressible two- and three-dimensional turbulent boundary layers were reviewed. Integral, transformation, and correlation methods, as well as finite difference solutions of the complete boundary layer equations, were summarized. Alternative numerical solution procedures were examined, and both mean field and mean turbulence field closure models were considered. Physics and related calculation problems peculiar to compressible turbulent boundary layers are described. A catalog of available solution procedures of the finite difference, finite element, and method of weighted residuals genre is included. The influence of compressibility, low Reynolds number, wall blowing, and pressure gradient upon mean field closure constants is reported.

  20. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
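The DCT-plus-quantization-matrix pipeline described above can be sketched in a few lines of NumPy. A flat quantization matrix is used here purely for illustration; the patented method derives a perceptual matrix from luminance and contrast masking, which is not reproduced:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (rows are basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def quantize_block(block, qmatrix):
    # Forward 2-D DCT, then divide each coefficient by its quantization
    # step and round: coarser steps discard more (less visible) detail.
    d = dct_matrix(block.shape[0])
    return np.round((d @ block @ d.T) / qmatrix)

def dequantize_block(quantized, qmatrix):
    # Rescale the integer coefficients and invert the DCT.
    d = dct_matrix(quantized.shape[0])
    return d.T @ (quantized * qmatrix) @ d

rng = np.random.default_rng(0)
block = rng.uniform(0.0, 255.0, size=(8, 8))
qmatrix = np.full((8, 8), 16.0)   # flat matrix; a perceptual matrix varies per coefficient
recon = dequantize_block(quantize_block(block, qmatrix), qmatrix)
max_err = np.abs(recon - block).max()
```

Larger entries in `qmatrix` give a lower bit rate at the cost of larger (ideally invisible) reconstruction error.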

  1. Evolution Of Nonlinear Waves in Compressing Plasma

    SciTech Connect

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  2. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  3. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly, transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally, several approaches which could satisfy some of the compression requirements are presented, and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  4. Modulation compression for short wavelength harmonic generation

    SciTech Connect

    Qiang, J.

    2010-01-11

    A laser modulator is used to seed free-electron lasers. In this paper, we propose a scheme to compress the initial laser modulation in the longitudinal phase space by using two opposite-sign bunch compressors and two opposite-sign energy chirpers. This scheme could potentially reduce the initial modulation wavelength by a factor of C and increase the energy modulation amplitude by a factor of C, where C is the compression factor of the first bunch compressor. Such a compressed energy modulation can be directly used to generate short-wavelength current modulation with a large bunching factor.

  5. Compressible homogeneous shear: Simulation and modeling

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.

    1992-01-01

    Compressibility effects were studied on turbulence by direct numerical simulation of homogeneous shear flow. A primary observation is that the growth of the turbulent kinetic energy decreases with increasing turbulent Mach number. The sinks provided by compressible dissipation and the pressure dilatation, along with reduced Reynolds shear stress, are shown to contribute to the reduced growth of kinetic energy. Models are proposed for these dilatational terms and verified by direct comparison with the simulations. The differences between the incompressible and compressible fields are brought out by the examination of spectra, statistical moments, and structure of the rate of strain tensor.

  6. Compressed Gas Safety for Experimental Fusion Facilities

    SciTech Connect

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  7. Properties of compressible elastica from relativistic analogy.

    PubMed

    Oshri, Oz; Diamant, Haim

    2016-01-21

    Kirchhoff's kinetic analogy relates the deformation of an incompressible elastic rod to the classical dynamics of rigid body rotation. We extend the analogy to compressible filaments and find that the extension is similar to the introduction of relativistic effects into the dynamical system. The extended analogy reveals a surprising symmetry in the deformations of compressible elastica. In addition, we use known results for the buckling of compressible elastica to derive the explicit solution for the motion of a relativistic nonlinear pendulum. We discuss cases where the extended Kirchhoff analogy may be useful for the study of other soft matter systems.

  9. An efficient compression scheme for bitmap indices

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices.
In addition, we also verified that the average query response time
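The word-aligned run-length idea behind WAH can be illustrated with a simplified Python sketch: 31 payload bits per 32-bit word, mixed groups stored as literal words, and runs of identical all-0 or all-1 groups collapsed into fill words. The symbolic word representation here is for clarity only; the real scheme packs these into machine words.

```python
def wah_encode(bits):
    # bits: a list of 0/1 values whose length is a multiple of 31.
    groups = [tuple(bits[i:i + 31]) for i in range(0, len(bits), 31)]
    words = []
    for g in groups:
        if g == (0,) * 31 or g == (1,) * 31:
            fill = g[0]
            # Extend the previous fill word if it carries the same bit value.
            if words and words[-1][0] == "fill" and words[-1][1] == fill:
                words[-1] = ("fill", fill, words[-1][2] + 1)
            else:
                words.append(("fill", fill, 1))
        else:
            words.append(("literal", g))
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w[0] == "fill":
            bits.extend([w[1]] * (31 * w[2]))
        else:
            bits.extend(w[1])
    return bits

# Five all-zero groups, one mixed group, three all-one groups:
# nine 31-bit groups collapse to three words (fill, literal, fill).
bitmap = [0] * (31 * 5) + [1, 0] * 15 + [1] + [1] * (31 * 3)
encoded = wah_encode(bitmap)
```

Because every word is either a literal or an aligned run, bitwise AND/OR of two encoded bitmaps can proceed word-at-a-time without unpacking, which is the source of WAH's CPU friendliness.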

  10. Compressive response of Kevlar/epoxy composites

    SciTech Connect

    Yeh, J.R.; Teply, J.L.

    1988-03-01

    A mathematical model is developed from the principle of minimum potential energy to determine the longitudinal compressive response of unidirectional fiber composites. A theoretical study based on this model is conducted to assess the influence of local fiber misalignment and the nonlinear shear deformation of the matrix. Numerical results are compared with experiments to verify this study; it appears that the predicted compressive response coincides well with experimental results. It is also shown that the compressive strength of Kevlar/epoxy is dominated by local shear failure. 12 references.

  11. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  12. Image compression based on GPU encoding

    NASA Astrophysics Data System (ADS)

    Bai, Zhaofeng; Qiu, Yuehong

    2015-07-01

    With the rapid development of digital technology, the amount of data has increased greatly in both static images and dynamic video. Reducing this redundant data in order to store or transmit information more efficiently is an important problem, so research on image compression becomes more and more important. Using the GPU to achieve higher compression ratios has advantages for interactive remote visualization; compared to the CPU, the GPU can be a good way to accelerate image compression. Currently, NVIDIA GPUs have evolved into their eighth generation and increasingly dominate the high-powered general-purpose computing field. This paper explains how images are encoded on the GPU. Some experimental results are also presented.

  13. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
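The compress/expand cycle in the patent can be approximated with an ordinary FFT standing in for the log Gabor transform (an assumption made purely to keep the sketch short): keep only components whose log magnitude exceeds a threshold, transmit (index, log-magnitude, phase) triples, and invert on the receiving side.

```python
import numpy as np

# A test signal with two strong spectral lines on exact FFT bins.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)
log_mag = np.log(np.abs(spectrum) + 1e-12)
phase = np.angle(spectrum)

threshold = 0.0                       # keep components with magnitude > 1
keep = np.flatnonzero(log_mag > threshold)
compressed = [(i, log_mag[i], phase[i]) for i in keep]

# Expansion: rebuild the spectrum from the transmitted values only.
rebuilt = np.zeros_like(spectrum)
for i, lm, ph in compressed:
    rebuilt[i] = np.exp(lm) * np.exp(1j * ph)
recovered = np.fft.irfft(rebuilt, n=signal.size)
```

Only two of the 257 spectral samples survive the threshold here, yet the signal is recovered essentially exactly; for broadband signals the threshold trades fidelity against the number of transmitted values.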

  14. Dynamic-Range Compression For Infrared Imagery

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen; Liu, Hua-Kuang

    1989-01-01

    Photorefractive crystals covering detectors prevent saturation. To make full use of information in image, desirable to compress dynamic range of input intensity to within region of approximately linear response of detector. Dynamic-range compression exhibited by measurements of attenuation in photorefractive GaAs. Effective dynamic-range-compressor plate, film, or coating reduces apparent contrast of scene imaged on detector plane to within dynamic range of detectors; original image contrast or intensity data recovered subsequently in electronic image processing because range-compression function and inverse known.

  15. Dependability Improvement for PPM Compressed Data by Using Compression Pattern Matching

    NASA Astrophysics Data System (ADS)

    Kitakami, Masato; Okura, Toshihiro

    Data compression is popularly applied to computer systems and communication systems in order to reduce storage size and communication time, respectively. Since large data sets are used frequently, string matching over such data takes a long time; if the data are compressed, the time gets much longer because decompression is necessary. Long string matching time makes computer virus scans slower and seriously affects the security of the data. For this reason, CPM (Compression Pattern Matching) methods for several compression methods have been proposed. This paper proposes a CPM method for PPM which achieves fast virus scanning and improves the dependability of the compressed data, where PPM is based on a Markov model, uses context information, and achieves a better compression ratio than the BW transform and Ziv-Lempel coding. The proposed method encodes the context information, which is generated in the compression process, and appends the encoded data at the beginning of the compressed data as a header; string matching then uses only this header information. Computer simulation shows that the compression-ratio penalty is less than 5 percent if the order of the PPM is less than 5 and the source file size is more than 1M bytes, where order is the maximum length of the context used in PPM compression. String matching time is independent of the source file size and is very short, less than 0.3 microseconds on the PC used for the simulation.

  16. Properties of Compressive Strength and Heating Value of Compressed Semi-Carbonized Sugi thinning

    NASA Astrophysics Data System (ADS)

    Sawai, Toru; Kajimoto, Takeshi; Akasaka, Motofumi; Kaji, Masuo; Ida, Tamio; Fuchihata, Manabu; Honjyo, Takako; Sano, Hiroshi

    Sugi thinnings of small diameter that are not suitable for lumber can be regarded as an important domestic energy resource. To utilize Sugi thinnings as an alternative fuel to coal coke, the compressive strength and heating value of compressed semi-carbonized wood fuel are investigated. To enhance the heating value, "semi-carbonization", that is, pyrolysis in the temperature range between 200 and 400 degrees, is conducted. From the variation of the heating value and energy yield of char with pyrolysis temperature, semi-carbonization pyrolysis is found to be an upgrading technology that converts woody biomass into a high-energy-density fuel at high energy yield. To increase the compressive strength, the "Cold Isostatic Pressing" method is adopted. The compressive strength of the compressed wood fuel decreases with pyrolysis temperature, while the heating value increases; a drastic decrease in compressive strength is observed at a temperature of 250 degrees. Increasing the hydrostatic compression pressure improves the compressive strength over the entire range of semi-carbonization pyrolysis. An alternative fuel with high heating value and high compressive strength can be produced by semi-carbonization processing at a temperature of 280 degrees for wood fuel compressed at a hydrostatic pressure of 200 MPa.

  17. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications to the design of space science data systems, science applications, and data compression techniques. The sequential nature or data dependence between each of the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  18. Subpicosecond Compression Experiments at Los Alamos National Laboratory

    SciTech Connect

    Carlsten, B.E.; Feldman, D.W.; Kinross-Wright, J.M.; Milder, M.L.; Russell, S.J.; Plato, J.G.; Sherwood, B.A.; Weber, M.E.; Cooper, R.G.; Sturges, R.E.

    1996-04-01

    We report on recent experiments using a magnetic chicane compressor at 8 MeV. Electron bunches at both low (0.1 nC) and high (1 nC) charges were compressed from 10-15 ps to less than 1 ps (FWHM). A transverse deflecting rf cavity was used to measure the bunch length at low charge; the bunch length at high charge was inferred from the induced energy spread of the beam. The longitudinal centrifugal space-charge force [Phys. Rev. E 51, 1453 (1995)] is calculated using a point-to-point numerical simulation and is shown not to influence the energy-spread measurement. © 1996 American Institute of Physics.

  19. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
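The core compression step can be sketched serially in NumPy with the truncated higher-order SVD, a simple single-node relative of the paper's distributed algorithm: compute leading left singular vectors of each mode unfolding, then project to obtain the small core tensor.

```python
import numpy as np

def unfold(t, mode):
    # Matricize: put the chosen mode first, flatten the rest.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, u, mode):
    # Multiply tensor t by matrix u along the given mode.
    return np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hosvd(t, ranks):
    # Truncated HOSVD: leading left singular vectors per mode, then the core.
    factors = [np.linalg.svd(unfold(t, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = t
    for m, u in enumerate(factors):
        core = mode_multiply(core, u.T, m)
    return core, factors

def reconstruct(core, factors):
    out = core
    for m, u in enumerate(factors):
        out = mode_multiply(out, u, m)
    return out

# A rank-(1,1,1) tensor compresses losslessly to a single-element core
# plus three factor vectors: 1 + 4 + 5 + 6 = 16 numbers instead of 120.
rng = np.random.default_rng(3)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
tensor = np.einsum('i,j,k->ijk', a, b, c)
core, factors = hosvd(tensor, (1, 1, 1))
approx = reconstruct(core, factors)
stored = core.size + sum(f.size for f in factors)
```

Real simulation data is not exactly low-rank, so the truncation ranks control the trade-off between compression ratio and reconstruction error.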

  20. A nonlocal constitutive model for trabecular bone softening in compression.

    PubMed

    Charlebois, Mathieu; Jirásek, Milan; Zysset, Philippe K

    2010-10-01

    Using the three-dimensional morphological data provided by computed tomography, finite element (FE) models can be generated and used to compute the stiffness and strength of whole bones. Three-dimensional constitutive laws capturing the main features of bone mechanical behavior can be developed and implemented into FE software to enable simulations on complex bone structures. For this purpose, a constitutive law is proposed, which captures the compressive behavior of trabecular bone as a porous material with accumulation of irreversible strain and loss of stiffness beyond its yield point and softening beyond its ultimate point. To account for these features, a constitutive law based on damage coupled with hardening anisotropic elastoplasticity is formulated using density and fabric-based tensors. To prevent mesh dependence of the solution, a nonlocal averaging technique is adopted. The law has been implemented into a FE software and some simple simulations are first presented to illustrate its behavior. Finally, examples dealing with compression of vertebral bodies clearly show the impact of softening on the localization of the inelastic process.

  1. Predicting failure: acoustic emission of berlinite under compression.

    PubMed

    Nataf, Guillaume F; Castillo-Villa, Pedro O; Sellappan, Pathikumar; Kriven, Waltraud M; Vives, Eduard; Planes, Antoni; Salje, Ekhard K H

    2014-07-09

    Acoustic emission has been measured and statistical characteristics analyzed during the stress-induced collapse of porous berlinite, AlPO4, containing up to 50 vol% porosity. Stress collapse occurs in a series of individual events (avalanches), and each avalanche leads to a jerk in sample compression with corresponding acoustic emission (AE) signals. The distribution of AE avalanche energies can be approximately described by a power law p(E)dE = E^(-ε)dE (ε ~ 1.8) over a large stress interval. We observed several collapse mechanisms whereby less porous minerals show the superposition of independent jerks, which were not related to the major collapse at the failure stress. In highly porous berlinite (40% and 50%) an increase of energy emission occurred near the failure point. In contrast, the less porous samples did not show such an increase in energy emission. Instead, in the near vicinity of the main failure point they showed a reduction in the energy exponent to ~ 1.4, which is consistent with the value reported for compressed porous systems displaying critical behavior. This suggests that a critical avalanche regime with a lack of precursor events occurs. In this case, all preceding large events were 'false alarms' and unrelated to the main failure event. Our results identify a method to use pico-seismicity detection of foreshocks to warn of mine collapse before the main failure (the collapse) occurs, which can be applied to highly porous materials only.
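The exponent of a power-law energy distribution such as p(E) ∝ E^(-ε) is commonly estimated by maximum likelihood. A short sketch on synthetic avalanche energies (illustrative only, not the authors' AE data or analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)
eps_true, e_min, n = 1.8, 1.0, 100_000

# Draw energies from p(E) = (eps-1) * e_min**(eps-1) * E**(-eps), E >= e_min,
# by inverse-transform sampling of the Pareto CDF.
u = rng.random(n)
energies = e_min * (1.0 - u) ** (-1.0 / (eps_true - 1.0))

# Hill / maximum-likelihood estimator of the exponent.
eps_hat = 1.0 + n / np.log(energies / e_min).sum()
```

With 10^5 events the estimate lands within a few thousandths of the true ε = 1.8; in practice the choice of the lower cutoff `e_min` (here assumed known) dominates the uncertainty.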

  2. Optical distortions by compressible turbulence

    NASA Astrophysics Data System (ADS)

    Mani, Ali

    Optical distortions induced by refractive index fluctuations in turbulent flows are a serious concern in airborne communication and imaging systems. This project focuses on aero-optical flows in which compressible turbulence is the dominant source of optical distortions. These flows include boundary layers, free shear layers, cavity flows, and wakes typically associated with flight conditions. The present study consists of two theoretical analyses and an extensive numerical investigation of optical distortions by separated shear layers and turbulent wakes. We present an analysis of far-field optical statistics in a general aero-optical framework. Based on this analysis, measures of far-field distortion, such as tilt, spread, and loss of focus-depth, are linked to key flow statistics. By employing these measures, we quantify distortion effects through a set of norms that have provable scaling properties with key optical parameters. The second analysis presents a theoretical estimate of the range of optically important flow scales in an arbitrary aero-optical flowfield. We show that in the limit of high Reynolds numbers, the smallest optically important scale does not depend on the Kolmogorov scale. For a given geometry this length scale depends only on the flow Mach number, freestream refractive index, and the optical wavelength. The provided formula can be used to estimate grid resolution requirements for numerical simulations of aero-optical phenomena. A rough estimate indicates that resolution requirements for accurate prediction of aero-optics is not much higher than typical LES requirements. As a model problem, compressible turbulent flow over a circular cylinder is considered to study the fundamental physics of aero-optical effects. Large-eddy simulation with a high-resolution numerical scheme is employed to compute variations of the refractive index field in the separated shear layers and turbulent wakes in a range of flow Mach numbers (0.2-0.85) and

  3. A Material Point Method for Complex Fluids

    NASA Astrophysics Data System (ADS)

    Ram, Daniel

    We present a novel Material Point Method for simulating complex materials. The method achieves plasticity effects via the temporal evolution of the left elastic Cauchy-Green strain. We recast the upper-convected derivative of the strain in the Oldroyd-B constitutive model as a plastic flow and are able to simulate elastic and viscoelastic effects. Our model provides a volume-preserving rate-based description of plasticity that does not require singular value decompositions. Our semi-implicit discretization allows for high-resolution simulations. We also present novel discretizations of the temporal update of the left elastic Cauchy-Green strain for several constitutive models that preserve symmetry and positive-definiteness of the strain for use in the Material Point Method. A novel modification to a constitutive model is also presented that models material softening under plastic compression.

  4. Musical beauty and information compression: Complex to the ear but simple to the mind?

    PubMed Central

    2011-01-01

    Background: The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory. Presentation of the Hypothesis: The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right. Testing the Hypothesis: For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility. Implications of the Hypothesis: Artistic masterpieces and deep Scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. 
Musical geniuses are skilled in composing music that appears highly complex to

  5. SUPG Finite Element Simulations of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Kirk, Benjamin S.

    2006-01-01

    Streamline-Upwind Petrov-Galerkin (SUPG) finite element simulations of compressible flows are presented. The topics include: 1) Introduction; 2) SUPG Galerkin Finite Element Methods; 3) Applications; and 4) Bibliography.

  6. Compression behavior of unidirectional fibrous composite

    NASA Technical Reports Server (NTRS)

    Sinclair, J. H.; Chamis, C. C.

    1982-01-01

    The longitudinal compression behavior of unidirectional fiber composites is investigated using a modified Celanese test method with thick and thin test specimens. The test data obtained are interpreted using the stress/strain curves from back-to-back strain gages, examination of fracture surfaces by scanning electron microscope, and predictive equations for distinct failure modes including fiber compression failure, Euler buckling, delamination, and flexure. The results show that the longitudinal compression fracture is induced by a combination of delamination, flexure, and fiber tier breaks. No distinct fracture surface characteristics can be associated with unique failure modes. An equation is described which can be used to extract the longitudinal compression strength knowing the longitudinal tensile and flexural strengths of the same composite system.

  7. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.

  8. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
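The patent's central idea, buffering the flag bits in RAM and appending them to the output after all pointers and literals, can be sketched in Python. The token layout (12-bit offset, 4-bit length) and header format below are illustrative choices, not the patent's:

```python
def compress(data, window=1024, min_len=3, max_len=18):
    body = bytearray()
    flags = []                      # flag-bit buffer kept in RAM until the end
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # naive longest-match search
            l = 0
            while l < max_len and i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= min_len:
            flags.append(1)         # pointer token: 12-bit offset, 4-bit length
            body += bytes([best_off >> 4, ((best_off & 0xF) << 4) | (best_len - min_len)])
            i += best_len
        else:
            flags.append(0)         # literal token: one raw byte
            body.append(data[i])
            i += 1
    padded = flags + [0] * (-len(flags) % 8)
    flag_bytes = bytearray()
    for k in range(0, len(padded), 8):           # pack flags MSB-first
        byte = 0
        for bit in padded[k:k + 8]:
            byte = (byte << 1) | bit
        flag_bytes.append(byte)
    header = len(flags).to_bytes(4, "little") + len(body).to_bytes(4, "little")
    return bytes(header) + bytes(body) + bytes(flag_bytes)

def decompress(blob, min_len=3):
    ntok = int.from_bytes(blob[0:4], "little")
    blen = int.from_bytes(blob[4:8], "little")
    body, flag_bytes = blob[8:8 + blen], blob[8 + blen:]
    flags = [(byte >> s) & 1 for byte in flag_bytes for s in range(7, -1, -1)]
    out = bytearray()
    p = 0
    for t in range(ntok):
        if flags[t]:                # pointer: copy from the already-decoded output
            off = (body[p] << 4) | (body[p + 1] >> 4)
            length = (body[p + 1] & 0xF) + min_len
            for _ in range(length):
                out.append(out[-off])
            p += 2
        else:                       # literal: whole-byte read, no bit shifting
            out.append(body[p])
            p += 1
    return bytes(out)
```

Because literals and pointers sit byte-aligned in the body, the decoder manipulates individual bits only while unpacking the trailing flag bytes, which is the source of the claimed decompression speedup.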

  9. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.

  10. Seneca Compressed Air Energy Storage (CAES) Project

    SciTech Connect

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  11. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm achieves a compression ratio of nearly 87%, which exceeds published results, including JBIG2.
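The row and column elimination part of the scheme can be illustrated on a small bilevel image: blank rows and columns are dropped, and two keep-bitmaps record where to reinsert them on decode (a sketch of the general idea only; the codebook stage is not reproduced):

```python
def rc_eliminate(img):
    """Drop all-blank rows and columns, keeping bitmaps to restore positions."""
    rows_keep = [any(r) for r in img]
    cols_keep = [any(col) for col in zip(*img)]
    core = [[v for v, kc in zip(r, cols_keep) if kc]
            for r, kr in zip(img, rows_keep) if kr]
    return rows_keep, cols_keep, core

def rc_restore(rows_keep, cols_keep, core):
    it = iter(core)
    out = []
    for kr in rows_keep:
        if kr:
            row_core = iter(next(it))
            out.append([next(row_core) if kc else 0 for kc in cols_keep])
        else:
            out.append([0] * len(cols_keep))   # re-insert an all-blank row
    return out

img = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
rows_keep, cols_keep, core = rc_eliminate(img)   # 5x5 image shrinks to a 2x3 core
```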

  12. A compressible model of soap film flow

    NASA Astrophysics Data System (ADS)

    Fast, Petri

    2004-11-01

    We consider flowing soap films, and present a new theoretical model that resembles the compressible two dimensional Navier-Stokes equations. In experiments, the thickness of a gravity driven soap film can undergo significant variations. The thickness of the soap film plays the role of a density field in a 2D model: Hence significant thickness variations give rise to 2D compressibility effects that have been observed in experiments. We present a systematic derivation of a new compressible model of soap film flow using thin film asymptotics. We discuss the properties of the model, and present criteria for using the incompressible or compressible limiting equations. The properties of the model are illustrated with computational experiments.

  13. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as sparse linear combinations of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new fingerprint image, its patches are represented in terms of the dictionary by computing an l0-minimization, and the representation is then quantized and encoded. In this paper, we consider the effect of various factors on compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
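A hedged sketch of the sparse-coding step: greedy matching pursuit stands in for the paper's l0-minimization, and a random unit-norm dictionary stands in for one trained on fingerprint patches:

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

def matching_pursuit(atoms, x, k):
    """Greedily pick k atoms -- a simple stand-in for exact l0-minimization."""
    r = list(x)
    code = []                       # (atom index, coefficient) pairs: the sparse code
    for _ in range(k):
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], r)))
        c = dot(atoms[j], r)
        code.append((j, c))
        r = [ri - c * ai for ri, ai in zip(r, atoms[j])]
    return code, r                  # compressible sparse code plus residual

rng = random.Random(0)
atoms = [normalize([rng.gauss(0, 1) for _ in range(16)]) for _ in range(64)]
# A synthetic "patch" that is exactly 2-sparse in the dictionary.
patch = [2.0 * a - 1.5 * b for a, b in zip(atoms[3], atoms[10])]
code, resid = matching_pursuit(atoms, patch, k=4)
```

In the full algorithm the `(index, coefficient)` pairs would then be quantized and entropy coded; only those few numbers, not the raw patch, are stored.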

  14. Super high compression of line drawing data

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.

    1976-01-01

    Models that can accurately represent the type of line drawings which occur in teleconferencing and transmission for remote classrooms, and which permit considerable data compression, are described. The objective was to encode these pictures in binary sequences of shortest length, but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compressions in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. Compression of pictures of natural phenomena can be realized by taking a similar approach, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.

  15. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, Clifford B.; Hackel, Lloyd A.; George, Edward V.; Miller, John L.; Krupke, William F.

    1993-01-01

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  16. Pulse compression and prepulse suppression apparatus

    DOEpatents

    Dane, C.B.; Hackel, L.A.; George, E.V.; Miller, J.L.; Krupke, W.F.

    1993-11-09

    A pulse compression and prepulse suppression apparatus (10) for time compressing the output of a laser (14). A pump pulse (46) is separated from a seed pulse (48) by a first polarized beam splitter (20) according to the orientation of a half wave plate (18). The seed pulse (48) is directed into an SBS oscillator (44) by two plane mirrors (22, 26) and a corner mirror (24), the corner mirror (24) being movable to adjust timing. The pump pulse (46) is directed into an SBS amplifier (34) wherein SBS occurs. The seed pulse (48), having been propagated from the SBS oscillator (44), is then directed through the SBS amplifier (34) wherein it sweeps the energy of the pump pulse (46) out of the SBS amplifier (34) and is simultaneously compressed, and the time compressed pump pulse (46) is emitted as a pulse output (52). A second polarized beam splitter (38) directs any undepleted pump pulse (58) away from the SBS oscillator (44).

  17. Relativistic laser pulse compression in magnetized plasmas

    SciTech Connect

    Liang, Yun; Sang, Hai-Bo Wan, Feng; Lv, Chong; Xie, Bai-Song

    2015-07-15

    The self-compression of a weakly relativistic Gaussian laser pulse propagating in a magnetized plasma is investigated. The nonlinear Schrödinger equation, which describes the evolution of the laser pulse amplitude, is deduced and solved numerically. Pulse compression is observed for both left- and right-hand circularly polarized lasers. It is found that the speed of compression increases for left-hand circularly polarized laser fields and decreases for right-hand ones, an effect that is reinforced as the external magnetic field is strengthened. We find that a 100 fs left-hand circularly polarized laser pulse is compressed in a magnetized (1757 T) plasma medium by more than a factor of ten. These results indicate the possibility of generating particularly intense and short pulses.

  18. Ramp compression of iron to 273 GPa

    DOE PAGES

    Wang, Jue; Smith, Raymond F.; Eggert, Jon H.; ...

    2013-07-11

    Multiple thickness Fe foils were ramp compressed over several nanoseconds to pressure conditions relevant to the Earth’s core. Using wave-profile analysis, the sound speed and the stress-density response were determined to a peak longitudinal stress of 273 GPa. The measured stress-density states lie between shock compression and 300-K static data, and are consistent with relatively low temperatures being achieved in these experiments. Phase transitions generally display time-dependent material response and generate a growing shock. We demonstrate for the first time that a low-pressure phase transformation (α-Fe to ε-Fe) can be overdriven by an initial steady shock to avoid both the time-dependent response and the growing shock that has previously limited ramp-wave-loading experiments. Additionally, the initial steady shock pre-compresses the Fe and allows different thermodynamic compression paths to be explored.

  19. Ramp compression of iron to 273 GPa

    SciTech Connect

    Wang, Jue; Smith, Raymond F.; Eggert, Jon H.; Braun, Dave G.; Boehly, Thomas R.; Patterson, J. Reed; Celliers, Peter M.; Jeanloz, Raymond; Collins, Gilbert W.; Duffy, Thomas S.

    2013-07-11

    Multiple thickness Fe foils were ramp compressed over several nanoseconds to pressure conditions relevant to the Earth’s core. Using wave-profile analysis, the sound speed and the stress-density response were determined to a peak longitudinal stress of 273 GPa. The measured stress-density states lie between shock compression and 300-K static data, and are consistent with relatively low temperatures being achieved in these experiments. Phase transitions generally display time-dependent material response and generate a growing shock. We demonstrate for the first time that a low-pressure phase transformation (α-Fe to ε-Fe) can be overdriven by an initial steady shock to avoid both the time-dependent response and the growing shock that has previously limited ramp-wave-loading experiments. Additionally, the initial steady shock pre-compresses the Fe and allows different thermodynamic compression paths to be explored.

  20. Compression asphyxia from a human pyramid.

    PubMed

    Tumram, Nilesh Keshav; Ambade, Vipul Namdeorao; Biyabani, Naushad

    2015-12-01

    In compression asphyxia, respiration is stopped by external forces on the body. It is usually due to an external force compressing the trunk, such as a heavy weight on the chest or abdomen, and is associated with internal injuries. In the present case, the victim was trapped and crushed under persons falling from a human pyramid formed for the "Dahi Handi" festival. There was neither any severe blunt force injury nor any significant pathological natural disease contributing to the cause of death. The victim was unable to extricate himself because his cognitive responses and coordination were impaired by alcohol intake. He died from asphyxia due to compression of his chest and abdomen. Compression asphyxia resulting from the collapse of a human pyramid, and the dynamics of the impact force in such circumstances, is very rare and has not, to the best of our knowledge, been reported previously.

  1. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transport and storage, so an effective method for compressing hyperspectral images is needed. Through analysis and comparison of current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-powered digital signal processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  2. Defocus cue and saliency preserving video compression

    NASA Astrophysics Data System (ADS)

    Khanna, Meera Thapar; Chaudhury, Santanu; Lall, Brejesh

    2016-11-01

    There are monocular depth cues present in images or videos that aid in depth perception in two-dimensional images or videos. Our objective is to preserve the defocus depth cue present in the videos along with the salient regions during compression application. A method is provided for opportunistic bit allocation during the video compression using visual saliency information comprising both the image features, such as color and contrast, and the defocus-based depth cue. The method is divided into two steps: saliency computation followed by compression. A nonlinear method is used to combine pure and defocus saliency maps to form the final saliency map. Then quantization values are assigned on the basis of these saliency values over a frame. The experimental results show that the proposed scheme yields good results over standard H.264 compression as well as pure and defocus saliency methods.
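The bit-allocation step can be sketched as a mapping from per-block saliency to quantization step size; the linear map and absolute-error metric below are illustrative stand-ins for the paper's H.264 quantizer assignment:

```python
def quant_step(saliency, q_min=2.0, q_max=32.0):
    # Map per-block saliency in [0, 1] to a quantization step:
    # high saliency -> small step -> more bits spent on the block.
    return q_min + (1.0 - saliency) * (q_max - q_min)

def quantize_block(block, q):
    # Uniform scalar quantization of the block's sample values.
    return [round(v / q) * q for v in block]

block = [17, 95, 140, 203, 64, 250, 12, 178]
err = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

salient_err = err(block, quantize_block(block, quant_step(1.0)))   # fine steps
flat_err = err(block, quantize_block(block, quant_step(0.0)))      # coarse steps
```

Salient blocks (high defocus/feature saliency) incur far less quantization error than low-saliency background, which is the opportunistic allocation the abstract describes.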

  3. Efficient Quantum Information Processing via Quantum Compressions

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Luo, M. X.; Ma, S. Y.

    2016-01-01

    Our purpose is to improve quantum transmission efficiency and reduce resource cost by quantum compression. The lossless quantum compression is accomplished using invertible quantum transformations and applied to quantum teleportation and to simultaneous transmission over quantum butterfly networks. The new schemes can greatly reduce the entanglement cost and partially resolve transmission conflicts over common links. Moreover, the local compression scheme is useful for approximate entanglement creation from pre-shared entanglement. This special task had not been addressed before because of the quantum no-cloning theorem. Our scheme depends on local quantum compression and bipartite entanglement transfer. Simulations show that the success probability depends strongly on the minimal entanglement coefficient. These results may be useful in general quantum network communication.

  4. Progressive compression versus graduated compression for the management of venous insufficiency.

    PubMed

    Shepherd, Jan

    2016-09-01

    Venous leg ulceration (VLU) is a chronic condition associated with chronic venous insufficiency (CVI), where the most frequent complication is recurrence of ulceration after healing. Traditionally, graduated compression therapy has been shown to increase healing rates and also to reduce recurrence of VLU. Graduated compression occurs because the circumference of the limb is narrower at the ankle, thereby producing a higher pressure than at the calf, which is wider, creating a lower pressure. This phenomenon is explained by the principle known as Laplace's Law. Recently, the view that compression therapy must provide a graduated pressure gradient has been challenged. However, few studies so far have focused on the potential benefits of progressive compression where the pressure profile is inverted. This article will examine the contemporary concept that progressive compression may be as effective as traditional graduated compression therapy for the management of CVI.

  5. Prechamber Compression-Ignition Engine Performance

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  6. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.

  7. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  8. Method and apparatus for signal compression

    DOEpatents

    Carangelo, R.M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original. 8 figures.
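A numerical sketch of the idea, assuming the compressed signal y satisfies x = y + a·y³ (the cubic feedback component); the coefficient value and the Newton solver are illustrative, not the patent's analog circuit:

```python
def compress_sample(x, a=0.1, iters=40):
    # Solve y + a*y**3 = x for y by Newton's method: the cubic feedback
    # term shrinks large excursions of the input signal.
    y = float(x)
    for _ in range(iters):
        f = y + a * y ** 3 - x
        y -= f / (1.0 + 3.0 * a * y * y)   # f' = 1 + 3*a*y^2 >= 1, always safe
    return y

def decompress_sample(y, a=0.1):
    # Digital decompression simply re-applies the cubic map.
    return y + a * y ** 3
```

Because the forward map is strictly monotonic, the digitized compressed signal can always be inverted to emulate the original, as the abstract describes.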

  9. [Realization of DICOM medical image compression technology].

    PubMed

    Wang, Chenxi; Wang, Quan; Ren, Haiping

    2013-05-01

    This paper introduces an implementation of DICOM medical image compression. The image part of a DICOM file is extracted and converted to BMP format, and the non-image information in the DICOM file is stored as text. When the JPEG-compressed image and the non-image information are re-encapsulated into a DICOM-format image, compression of the medical image is realized, which benefits image storage and transmission.

  10. Compression and extraction of stopped muons.

    PubMed

    Taqqu, D

    2006-11-10

    Efficient conversion of a standard positive muon beam into a high-quality slow muon beam is shown to be achievable by compression of a muon swarm stopped in an extended gas volume. The stopped swarm can be squeezed into a mm-size swarm flow that can be extracted into vacuum through a small opening in the stop target walls. Novel techniques of swarm compression are considered. In particular, a density gradient in crossed electric and magnetic fields is used.

  11. Application of Compressive Sensing to Digital Holography

    DTIC Science & Technology

    2015-05-01

    Final report AFRL-RY-WP-TR-2015-0071 (May 2015), "Application of Compressive Sensing to Digital Holography," Mark Neifeld, University of Arizona; performance period 3 September 2013 - 27 February 2015. This work presents a new reconstruction algorithm for use with under-sampled digital holography measurements and yields

  12. Efficiency of compressed-air systems

    NASA Astrophysics Data System (ADS)

    The current state of knowledge in American industry concerning the energy efficient design and operation of industrial compressed air systems and system components is examined. Since there is no standard reference for designers and operators of compressed air systems which provides guidelines for maximizing the energy efficiency of these systems, a major product of this contract was the preparation of a guidebook for this purpose.

  13. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.

  14. Blind One-Bit Compressive Sampling

    DTIC Science & Technology

    2013-01-17

    Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  15. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
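The core of the square-root algorithm, quantizing in the square-root domain so that the discarded information stays below Poisson shot noise, can be sketched as follows (the step size and the handling of detector gain are illustrative simplifications of the Bernstein et al. scheme):

```python
import math

def sqrt_compress(counts, step=0.5):
    # Quantize sqrt(N): shot noise grows as sqrt(N), so a fixed step in the
    # square-root domain keeps quantization error below the Poisson error
    # at every signal level.
    return round(math.sqrt(max(counts, 0.0)) / step)

def sqrt_decompress(code, step=0.5):
    return (code * step) ** 2
```

The quantization error in counts is roughly step·√N, so with step < 1 it is always smaller than the √N shot noise, while bright pixels need far fewer code values than a linear 16-bit encoding.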

  16. Multidimensional imaging using compressive Fresnel holography.

    PubMed

    Horisaki, Ryoichi; Tanida, Jun; Stern, Adrian; Javidi, Bahram

    2012-06-01

    We propose a generalized framework for single-shot acquisition of multidimensional objects using compressive Fresnel holography. A multidimensional object with spatial, spectral, and polarimetric information is propagated with the Fresnel diffraction, and the propagated signal of each channel is observed by an image sensor with randomly arranged optical elements for filtering. The object data are reconstructed using a compressive sensing algorithm. This scheme is verified with numerical experiments. The proposed framework can be applied to imageries for spectrum, polarization, and so on.

  17. Software For Tie-Point Registration Of SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice

    1995-01-01

    The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data sets -- for example, a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image -- can also be registered, as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.

  18. An algorithm for compression of bilevel images.

    PubMed

    Reavy, M D; Boncelet, C G

    2001-01-01

    This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) standard uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3) standard, continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
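BACIC's adaptive context modeling can be sketched with per-context counts and an ideal (entropy) code length; the 12-bit context formation from neighboring pixels and the BAC coder itself are not reproduced here:

```python
import math

def ideal_code_length(bits, contexts):
    """Adaptive per-context estimate of p(1); returns the ideal code length
    in bits (-log2 of the modeled probability of the whole sequence)."""
    ones, total = {}, {}
    cost = 0.0
    for ctx, bit in zip(contexts, bits):
        n1 = ones.get(ctx, 1)          # Laplace-style prior counts:
        n = total.get(ctx, 2)          # start from p(1) = 1/2
        p1 = n1 / n
        cost += -math.log2(p1 if bit else 1.0 - p1)
        ones[ctx] = n1 + bit           # update the table after coding, as the
        total[ctx] = n + 1             # decoder can make the same update
    return cost

# A heavily biased bit stream codes in well under 1 bit per symbol.
stream = ([0] * 9 + [1]) * 100         # 10% ones
ctxs = [0] * len(stream)               # a single context, for illustration
cost = ideal_code_length(stream, ctxs)
```

In the real algorithm each pixel's context is a 12-bit pattern of previously coded pixels, so the table holds 4096 such adaptive estimates.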

  19. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to unmagnetized inertial fusion energy (IFE) -- to microseconds and centimeters versus nanoseconds and sub-millimeter scales. MPC greatly reduces the required confinement time relative to MFE -- to microseconds versus minutes. Proof of principle can be demonstrated or refuted using high-current pulsed-power-driven compression of magnetized plasmas via magnetic-pressure-driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for unmagnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development for transient, rapidly replaceable transmission lines such as envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  20. Anisotropic hydraulic permeability in compressed articular cartilage.

    PubMed

    Reynaud, Boris; Quinn, Thomas M

    2006-01-01

    The extent to which articular cartilage hydraulic permeability is anisotropic is largely unknown, despite its importance for understanding mechanisms of joint lubrication, load bearing, transport phenomena, and mechanotransduction. We developed and applied new techniques for the direct measurement of hydraulic permeability within statically compressed adult bovine cartilage explant disks, dissected such that disk axes were perpendicular to the articular surface. Applied pressure gradients were kept small to minimize flow-induced matrix compaction, and fluid outflows were measured by observation of a meniscus in a glass capillary under a microscope. Explant disk geometry under radially unconfined axial compression was measured by direct microscopic observation. Pressure, flow, and geometry data were input to a finite element model where hydraulic permeabilities in the disk axial and radial directions were determined. At less than 10% static compression, near free-swelling conditions, hydraulic permeability was nearly isotropic, with values corresponding to those of previous studies. With increasing static compression, hydraulic permeability decreased, but the radially directed permeability decreased more dramatically than the axially directed permeability such that strong anisotropy (a 10-fold difference between axial and radial directions) in the hydraulic permeability tensor was evident for static compression of 20-40%. Results correspond well with predictions of a previous microstructurally-based model for effects of tissue mechanical deformations on glycosaminoglycan architecture and cartilage hydraulic permeability. Findings inform understanding of structure-function relationships in cartilage matrix, and suggest several biomechanical roles for compression-induced anisotropic hydraulic permeability in articular cartilage.

  1. Lossless compression of instrumentation data. Final report

    SciTech Connect

    Stearns, S.D.

    1995-11-01

    This is our final report on Sandia National Laboratories Laboratory-Directed Research and Development (LDRD) project 3517.070. Its purpose has been to investigate lossless compression of digital waveform and image data, particularly the types of instrumentation data generated and processed at Sandia Labs. The three-year project period ran from October 1992 through September 1995. This report begins with a descriptive overview of data compression, with and without loss, followed by a summary of the activities on the Sandia project, including research at several universities and the development of waveform compression software. Persons who participated in the project are also listed. The next part of the report contains a general discussion of the principles of lossless compression. Two basic compression stages, decorrelation and entropy coding, are described and discussed. An example of seismic data compression is included. Finally, there is a bibliography of published research. Taken together, the published papers contain the details of most of the work and accomplishments on the project. This final report is primarily an overview, without the technical details and results found in the publications listed in the bibliography.

  2. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
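
    The gain from differencing can be sketched in a few lines, assuming only that a registered subject image differs from its reference in a small region (synthetic data, with zlib standing in for the "conventional compression algorithms"):

```python
# Minimal sketch of the differencing idea (not the patented pipeline):
# a registered image that differs from its reference only in a small
# region compresses far better after subtraction.
import zlib

W = H = 64
reference = bytes((x * 3 + y) % 256 for y in range(H) for x in range(W))
subject = bytearray(reference)
for i in range(100, 140):            # small "area of interest" change
    subject[i] = (subject[i] + 50) % 256

# Difference modulo 256 so it stays one byte per pixel
diff = bytes((s - r) % 256 for s, r in zip(subject, reference))

raw_size = len(zlib.compress(bytes(subject)))
diff_size = len(zlib.compress(diff))
# The difference image is mostly zeros, so it compresses far better;
# the receiver adds the decompressed difference back to its own copy
# of the reference to reconstitute the subject image.
```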

  3. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  4. Compression to prevent PTS: a controversy?

    PubMed

    Amin, Elham; Joore, Manuela A; ten Cate-Hoek, Arina J

    2016-03-01

    Compression therapy, prescribed as elastic compression stockings, is the cornerstone in the management of post-thrombotic syndrome. The effectiveness of elastic compression stockings has recently been called into question in a large randomized placebo-controlled trial. The findings, however, may be less contradictory than assumed and presented. The mechanistic rationale for the effectiveness of compression therapy is its ability to counteract venous hypertension, which is a central aspect of the pathophysiology of post-thrombotic syndrome. Nevertheless, despite elastic compression stockings, a significant percentage (20-50%) of patients develops post-thrombotic syndrome, suggesting that other factors must be considered in addition to compression. Every patient has an individual baseline risk, constituted of non-modifiable and modifiable risk factors (i.e. age, sex, body weight, etcetera). Stratifying patients by risk is therefore crucial. Exploring additional or alternative forms of therapy is desirable as well, since these are, together with the risk factors, cost aspects, and quality of life, puzzle pieces in the management of post-thrombotic syndrome which, once pieced together, enable multifactorial yet individualized therapy.

  5. Normal and Time-Compressed Speech

    PubMed Central

    Lemke, Ulrike; Kollmeier, Birger; Holube, Inga

    2016-01-01

    Short-term and long-term learning effects were investigated for the German Oldenburg sentence test (OLSA) using original and time-compressed fast speech in noise. Normal-hearing and hearing-impaired participants completed six lists of the OLSA in five sessions. Two groups of normal-hearing listeners (24 and 12 listeners) and two groups of hearing-impaired listeners (9 listeners each) performed the test with original or time-compressed speech. In general, original speech resulted in better speech recognition thresholds than time-compressed speech. Thresholds decreased with repetition for both speech materials. Confirming earlier results, the largest improvements were observed within the first measurements of the first session, indicating a rapid initial adaptation phase. The improvements were larger for time-compressed than for original speech. The novel results on long-term learning effects when using the OLSA indicate a longer phase of ongoing learning, especially for time-compressed speech, which seems to be limited by a floor effect. In addition, for normal-hearing participants, no complete transfer of learning benefits from time-compressed to original speech was observed. These effects should be borne in mind when inviting listeners repeatedly, for example, in research settings.

  6. Robust object tracking in compressed image sequences

    NASA Astrophysics Data System (ADS)

    Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre

    1998-10-01

    Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement where the computing platform on which the tracking algorithm is run resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed and transmitted to the ground, compression being necessary to facilitate real-time downloading of the data over the available bandlimited channels. The tracking algorithm is run on a ground-based computer while tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.

  7. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  8. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.

  9. Issues in multiview autostereoscopic image compression

    NASA Astrophysics Data System (ADS)

    Shah, Druti; Dodgson, Neil A.

    2001-06-01

    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-squares error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
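
    The DPCM finding, that a nearby pixel in the same view usually predicts better than the corresponding pixel in an adjacent view, can be illustrated on synthetic data (a second "view" is just the first shifted by a disparity; all numbers are made up):

```python
import math

# View A: a smooth scanline; view B: the same scene shifted by a disparity.
disparity = 5
view_a = [128 + 80 * math.sin(i / 10) for i in range(200)]
view_b = [128 + 80 * math.sin((i - disparity) / 10) for i in range(200)]

# Predictor 1: previous pixel in the same view (intra-view DPCM).
intra = [abs(view_b[i] - view_b[i - 1]) for i in range(1, 200)]
# Predictor 2: corresponding pixel in the adjacent view, with no
# disparity compensation (naive inter-view DPCM).
inter = [abs(view_b[i] - view_a[i]) for i in range(200)]

mean_intra = sum(intra) / len(intra)
mean_inter = sum(inter) / len(inter)
# Smaller residuals mean better prediction; disparity estimation exists
# precisely to close this gap for the inter-view predictor.
```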

  10. MRC for compression of Blake Archive images

    NASA Astrophysics Data System (ADS)

    Misic, Vladimir; Kraus, Kari; Eaves, Morris; Parker, Kevin J.; Buckley, Robert R.

    2002-11-01

    The William Blake Archive is part of an emerging class of electronic projects in the humanities that may be described as hypermedia archives. It provides structured access to high-quality electronic reproductions of rare and often unique primary source materials, in this case the work of poet and painter William Blake. Due to the extensive high frequency content of Blake's paintings (namely, colored engravings), they are not suitable for very efficient compression that meets both rate and distortion criteria at the same time. Resolving that problem, the authors utilized modified Mixed Raster Content (MRC) compression scheme -- originally developed for compression of compound documents -- for the compression of colored engravings. In this paper, for the first time, we have been able to demonstrate the successful use of the MRC compression approach for the compression of colored, engraved images. Additional, but not less important benefits of the MRC image representation for Blake scholars are presented: because the applied segmentation method can essentially lift the color overlay of an impression, it provides the student of Blake the unique opportunity to recreate the underlying copperplate image, model the artist's coloring process, and study them separately.

  11. Compressing bitmap indexes for faster search operations

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
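
    The word-aligned idea can be sketched in toy form (this is not the actual WAH codec, just the run/literal distinction it is built on, using 31-bit groups as in a 32-bit machine word):

```python
def encode(bits):
    """Cut a 0/1 list into 31-bit groups: runs of all-zero or all-one
    groups collapse into one counted 'fill' word, mixed groups stay
    literal. Assumes len(bits) is a multiple of 31 for simplicity."""
    words = []
    for i in range(0, len(bits), 31):
        g = bits[i:i + 31]
        if len(set(g)) == 1:                      # uniform group -> fill
            if words and words[-1][0] == 'fill' and words[-1][1] == g[0]:
                words[-1] = ('fill', g[0], words[-1][2] + 1)
            else:
                words.append(('fill', g[0], 1))
        else:                                     # mixed group -> literal
            words.append(('lit', tuple(g)))
    return words

def decode(words):
    bits = []
    for w in words:
        if w[0] == 'fill':
            bits.extend([w[1]] * (31 * w[2]))
        else:
            bits.extend(w[1])
    return bits

# A sparse bitmap: 100 groups, a single set bit -> 3 words instead of 100.
bits = [0] * (31 * 100)
bits[40] = 1
words = encode(bits)
```

    Because fills and literals are all word-sized, the bitwise AND/OR of two such sequences can be computed directly on the compressed form, which is the source of WAH's speed advantage over byte-oriented schemes.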

  12. Hyperspace storage compression for multimedia systems

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.; Lettieri, Alfred; Holtz, Eric S.

    1994-04-01

    Storing multimedia text, speech or images in personal computers now requires very large storage facilities. Data compression eases the problem, but all algorithms based on Shannon's information theory will distort the data with increased compression. Autosophy, an emerging science of 'self-assembling structures', provides a new mathematical theory of 'learning' and a new 'information theory'. 'Lossless' data compression is achieved by storing data in mathematically omni-dimensional hyperspace. Such algorithms are already used in disc file compression and V.42 bis modems. Speech can be compressed using similar methods. 'Lossless' autosophy image compression has been implemented and tested in an IBM PC (486), confirming the algorithms and theoretical predictions of the new 'information theory'. Computer graphics frames or television images are disassembled into 'known' fragments for storage in an omni-dimensional hyperspace library. Each unique fragment is used only once. Each image frame is converted into a single output code which is later used for image retrieval. The hyperspace image library is stored on a disc. Experimental data confirms that hyperspace storage is independent of image size, resolution or frame rate; depending solely on 'novelty' or 'movement' within the images. The new algorithms promise dramatic improvements in all multimedia data storage.

  13. Composite lamina compressive properties using the Wyoming combined loading compression test method

    NASA Astrophysics Data System (ADS)

    Wegner, Peter Mark

    The determination of lamina compressive strength and modulus using the Wyoming Combined Loading Compression (CLC) test method was investigated. In this test method an untabbed [90/0]ns cross-ply test coupon is tested in uniaxial compression using the CLC test fixture. The longitudinal modulus and strength of the 0°-plies are determined by applying a back-out factor, calculated using Classical Lamination Theory, to the measured longitudinal laminate modulus and strength. A parametric study revealed that specimen quality, load train alignment, and fixture dimensional tolerances have a large impact on the measured compressive properties. Thus, a significant amount of time was dedicated to developing specimen fabrication and testing procedures to minimize variations in the measured compressive properties. A comparative study of the CLC and IITRI test fixtures showed that the CLC test fixture is superior to the IITRI fixture in many ways. Although the compressive properties measured using these two fixtures are often statistically equivalent, the CLC test fixture is easier to use, less expensive to fabricate, and much less massive than the IITRI fixture. In a second portion of the comparative study, the 0°-ply compressive strength obtained using [90/0]ns cross-ply test specimens was compared to the 0°-ply compressive strength obtained using quasi-isotropic test specimens. This revealed that the 0°-ply compressive strength was independent of the laminate orientation. This "backed-out" 0°-ply compressive strength is then by definition the "design value" for the strength of the composite material in compression. The present study showed that valid "design values" for the compressive strength of laminated composite materials can be obtained using the CLC test method. This was verified by testing two classes of structural components in compression, filament-wound cylinders, and honeycomb sandwich beams. The compressive strength of the 0°-plies at failure in the
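
    The back-out step described above can be sketched under strong simplifications (equal 0°/90° ply fractions, Poisson coupling ignored, and all moduli and strengths invented for illustration; real use requires the full Classical Lamination Theory calculation):

```python
# Hedged sketch of the "back-out factor" idea: the 0-degree-ply stress at
# laminate failure is the measured laminate stress scaled by the ratio of
# ply stiffness to laminate stiffness. Equal-thickness [90/0] estimate,
# Poisson coupling ignored; numbers are illustrative only.
E1, E2 = 150e9, 9e9                 # illustrative lamina moduli, Pa
E_lam = 0.5 * (E1 + E2)             # equal parts 0- and 90-degree plies
BF = E1 / E_lam                     # back-out factor
sigma_lam = 600e6                   # measured laminate strength (made up)
sigma_0ply = BF * sigma_lam         # inferred 0-degree-ply strength
```

    The study's finding that the backed-out 0°-ply strength is independent of laminate orientation is what justifies treating it as a material design value.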

  14. A dedicated compression device for high resolution X-ray tomography of compressed gas diffusion layers

    SciTech Connect

    Tötzke, C.; Manke, I.; Banhart, J.; Gaiselmann, G.; Schmidt, V.; Bohner, J.; Müller, B. R.; Kupsch, A.; Hentschel, M. P.; Lehnert, W.

    2015-04-15

    We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib-profile. It turned out that the GDL material is homogeneously compressed under the ribs, however, much less compressed underneath the channel. GDL fibers extend far into the channel volume where they might interfere with the convective gas transport and the removal of liquid water from the cell.

  15. Robust Critical Point Detection

    SciTech Connect

    Bhatia, Harsh

    2016-07-28

    Robust Critical Point Detection is software to compute critical points in a 2D or 3D vector field robustly. The software was developed as part of the author's work at the lab as a PhD student under the Livermore Scholar Program (now called the Livermore Graduate Scholar Program).

  16. Nickel Curie Point Engine

    ERIC Educational Resources Information Center

    Chiaverina, Chris; Lisensky, George

    2014-01-01

    Ferromagnetic materials such as nickel, iron, or cobalt lose the electron alignment that makes them attracted to a magnet when sufficient thermal energy is added. The temperature at which this change occurs is called the "Curie temperature," or "Curie point." Nickel has a Curie point of 627 K, so a candle flame is a sufficient…

  17. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  18. The Lagrangian points

    NASA Astrophysics Data System (ADS)

    Linton, J. Oliver

    2017-03-01

    There are five unique points in a star/planet system where a satellite can be placed whose orbital period is equal to that of the planet. Simple methods for calculating the positions of these points, or at least justifying their existence, are developed.
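
    For the collinear points L1 and L2, the "simple method" usually taught is the Hill-radius approximation r ≈ R (m/3M)^(1/3). A quick sketch with Sun-Earth values (rounded constants, not figures from the article):

```python
# Approximate distance of L1/L2 from the planet, via the Hill-radius
# formula r = R * (m / (3 * M))**(1/3). Constants are rounded.
R = 1.496e8            # Sun-Earth distance, km
mass_ratio = 3.003e-6  # Earth mass / Sun mass
r = R * (mass_ratio / 3) ** (1 / 3)
# r comes out near 1.5 million km, the familiar L1/L2 distance.
```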

  19. Tension and compression fatigue response of unnotched 3D braided composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.

    1992-01-01

    The unnotched compression and tension fatigue response of a 3-D braided composite was measured. Both gross compressive stress and tensile stress were plotted against cycles to failure to evaluate the fatigue life of these materials. Damage initiation and growth were monitored visually and by tracking compliance change during cyclic loading. The intent was to establish by what means the strength of a 3-D architecture will start to degrade, at what point it will degrade beyond an acceptable level, and how this material will typically fail.

  20. Study on dynamic compression performance of K9 glass with prefabricated defects

    NASA Astrophysics Data System (ADS)

    Hu, Changming; Wang, Xiang; Cai, Lingcang; Liu, Cangli

    2012-03-01

    We conducted several planar impact experiments on a powder gun to study the dynamic compression properties of K9 glass, using a Photon Doppler Velocimetry (PDV) measurement system. Internal meso-defects were prefabricated in the samples by a laser three-dimensional erosion technique before shock loading. Free surface velocities were recorded by a high temporal-spatial resolution PDV array or multi-point PDV. The free surface velocity profiles show distinct features influenced by the evolution of the pre-existing internal defects. The critical compression strength and dynamic evolution information of the pre-existing internal defects can be tentatively derived from the experimental results.

  1. The critical compressibility factor value: Associative fluids and liquid alkali metals

    SciTech Connect

    Kulinskii, V. L.

    2014-08-07

    We show how to obtain the critical compressibility factor Z_c for simple and associative Lennard-Jones fluids using the critical characteristics of the Ising model on different lattices. The results show that low values of the critical compressibility factor are correlated with the associative properties of fluids in the critical region and can be obtained on the basis of the results for the Ising model on lattices with more than one atom per cell. An explanation for the results on the critical point line of the Lennard-Jones fluids and liquid metals is proposed within the global isomorphism approach.

  2. Physics design point for a 1 MW fusion neutron source

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Melnik, Paul; Sieck, Paul; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald

    2016-10-01

    We are developing a design point for a spheromak experiment heated by adiabatic compression for use as a compact neutron source. We utilize the CORSICA and NIMROD MHD codes as well as analytic modeling to assess a concept with target parameters R0 = 0.5 m, Rf = 0.17 m, T0 = 1 keV, Tf = 8 keV, n0 = 2e20 m-3, and nf = 5e21 m-3, with radial convergence C = R0/Rf = 3. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression. We specify target parameters for the compression in terms of plasma beta, formation efficiency, and energy confinement. We present results of simulations of magnetic compression using the NIMROD code to examine the role of rotation in the stability and confinement of the spheromak as it is compressed. Supported by DARPA Grant N66001-14-1-4044 and the IAEA CRP on Compact Fusion Neutron Sources.
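
    The quoted targets are consistent with the ideal adiabatic scaling T ∝ n^(γ−1); a quick check assuming γ = 5/3 for three-dimensional compression (this is a consistency check of the listed numbers, not a result from the codes):

```python
# Ideal adiabatic heating: T_f = T_0 * (n_f / n_0)**(gamma - 1),
# with gamma = 5/3 assumed for 3D compression of an ideal plasma.
gamma = 5 / 3
n0, nf = 2e20, 5e21      # m^-3, the quoted density targets
T0 = 1.0                 # keV
Tf = T0 * (nf / n0) ** (gamma - 1)
# nf/n0 = 25, so Tf = 25**(2/3) ~ 8.5 keV, close to the 8 keV target.
```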

  3. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of [(±θ/±θ)]6s, where θ, the off-axis angle, ranged from 0° to 90°. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered as an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wavelength. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wavelengths and amplitudes was used. It was found that for 0° ≤ θ ≤ 15° failure was most likely due to fiber compression. For 15° < θ ≤ 35°, failure was most likely due to inplane transverse tension. For 35° < θ ≤ 70°, failure was most likely due to inplane shear. For θ > 70°, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wavelength than on wave amplitude. Thus using a single

  4. Steady-State and Dynamic Myoelectric Signal Compression Using Embedded Zero-Tree Wavelets

    DTIC Science & Technology

    2001-10-25

    This research investigates static (steady-state) and dynamic (transient) myoelectric signal (MES) compression using a modified version of Shapiro's [5] embedded zero-tree wavelet (EZW) compression algorithm and compares its performance to a standard wavelet compression technique.

  5. 46 CFR 194.20-17 - Compressed gases.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Compressed gases. 194.20-17 Section 194.20-17 Shipping... Compressed gases. (a) Nonflammable compressed gases (excluding oxygen) may be securely stowed in the... chemical storeroom. (b) Flammable compressed gases and oxygen shall be stowed in accordance with 49...

  6. 46 CFR 194.20-17 - Compressed gases.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Compressed gases. 194.20-17 Section 194.20-17 Shipping... Compressed gases. (a) Nonflammable compressed gases (excluding oxygen) may be securely stowed in the... chemical storeroom. (b) Flammable compressed gases and oxygen shall be stowed in accordance with 49...

  7. 46 CFR 194.20-17 - Compressed gases.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Compressed gases. 194.20-17 Section 194.20-17 Shipping... Compressed gases. (a) Nonflammable compressed gases (excluding oxygen) may be securely stowed in the... chemical storeroom. (b) Flammable compressed gases and oxygen shall be stowed in accordance with 49...

  8. Jig For Compression-Relaxation Tests Of Plastics

    NASA Technical Reports Server (NTRS)

    Shelley, Richard M.; Daniel, James A.; Tapphorn, Ralph M.

    1991-01-01

    Improved jig facilitates tests of long-term compression-relaxation properties of plastics. Holds specimen in compression when removed from compression-testing machine, yet allows compression force on specimen to be measured when on machine. Useful in quantifying some of time-dependent properties of polymers, in investigations of effects of aging, and in ascertaining service lifetimes of polymeric components.

  9. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
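
    A sketch of the principle such wavelet coders exploit (this is not SPIHT itself): a wavelet transform concentrates the signal's energy into few coefficients, so the small ones can be discarded or coarsely coded with little distortion. One-level Haar transform on a synthetic smooth scanline:

```python
import math

# One-level Haar transform: split a smooth "scanline" into pairwise
# averages (low-pass) and pairwise differences (high-pass).
x = [math.sin(i / 20) for i in range(256)]
avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(128)]
det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(128)]

# For smooth data the detail coefficients are tiny: discard them all
# (a 2:1 reduction here) and reconstruct from the averages alone.
recon = []
for a in avg:
    recon.extend([a, a])
err = max(abs(r - v) for r, v in zip(recon, x))
# The maximum reconstruction error stays small despite halving the data.
```

    SPIHT adds the clever part: ordering and entropy-coding the surviving coefficients across transform levels so that the bitstream can be cut at any point, which is how ratios like 94:1 become possible.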

  10. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in § 1910.6. (b) Compressed gases. The in-plant handling, storage, and utilization of all compressed... 29 Labor 5 2011-07-01 2011-07-01 false Compressed gases (general requirements). 1910.101 Section..., DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Hazardous Materials § 1910.101 Compressed...

  11. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in § 1910.6. (b) Compressed gases. The in-plant handling, storage, and utilization of all compressed... 29 Labor 5 2014-07-01 2014-07-01 false Compressed gases (general requirements). 1910.101 Section..., DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Hazardous Materials § 1910.101 Compressed...

  12. Inelastic compression legging produces gradient compression and significantly higher skin surface pressures compared with an elastic compression stocking.

    PubMed

    Kline, Cassie N; Macias, Brandon R; Kraus, Emily; Neuschwander, Timothy B; Angle, Niren; Bergan, John; Hargens, Alan R

    2008-01-01

The purposes of this study were to (1) investigate compression levels beneath an inelastic legging equipped with a new pressure-adjustment system, (2) compare the inelastic compression levels with those provided by a well-known elastic stocking, and (3) evaluate the gradient compression produced by each support. Eighteen subjects without venous reflux and 12 patients with previously documented venous reflux received elastic and inelastic compression supports sized for the individual. Skin surface pressures under the elastic (Sigvaris 500, 30-40 mm Hg range, Sigvaris, Inc., Peachtree City, GA) and inelastic (CircAid C3 with Built-in-Pressure System [BPS], CircAid Medical Products, San Diego, CA) supports were measured using a calibrated Tekscan I-Scan device (Tekscan, Inc., Boston, MA). The elastic stocking produced significantly lower skin surface pressures than the inelastic legging. Mean pressures (+/- standard error) beneath the elastic stocking were 26 +/- 2 and 23 +/- 1 mm Hg at the ankle and below-knee regions, respectively. Mean pressures (+/- standard error) beneath the inelastic legging with the BPS were 50 +/- 3 and 38 +/- 2 mm Hg at the ankle and below-knee regions, respectively. Importantly, our study indicates that only the inelastic legging with the BPS produces significant ankle to knee gradient compression (p = .001).
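For reference, the reported statistic is the mean plus or minus the standard error of the mean (SD divided by the square root of the sample size). A minimal computation, using made-up pressure values rather than the study's raw data:

```python
import math

def mean_and_se(samples):
    # Mean and standard error of the mean (sample SD / sqrt(n)),
    # the statistic reported as "mean +/- standard error" above.
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / (n - 1)
    return m, math.sqrt(var) / math.sqrt(n)

# Hypothetical ankle pressures (mm Hg) -- illustrative values only.
pressures = [48, 52, 55, 47, 50, 49, 53, 46]
m, se = mean_and_se(pressures)
print(f"{m:.1f} +/- {se:.1f} mm Hg")
```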

  13. Compressibility of highly porous network of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Rawal, Amit; Kumar, Vijay

    2013-10-01

A simple analytical model for predicting the compressibility of a highly porous network of carbon nanotubes (CNTs) has been proposed, based on the theory of the compression behavior of textile materials. The compression model of the CNT network accounts for the tubes' physical, geometrical, and mechanical properties. The compression behavior of multi-walled carbon nanotubes (MWCNTs) has been predicted and compared with experimental data pertaining to the compressibility of highly porous nanotube sponges. It has been demonstrated that the compressibility of a network of MWCNTs can be tailored depending upon the material parameters and the level of compressive stress.

  14. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
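The second approach (decorrelate the spectral dimension, then compress the transformed bands independently) can be sketched as follows. The Karhunen-Loève transform and the toy data cube below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperspectral cube: 32x32 pixels, 16 spectral bands that are
# highly correlated (each band is a scaled copy of a base image plus noise).
base = rng.standard_normal((32, 32))
cube = np.stack([(1 + 0.05 * b) * base + 0.01 * rng.standard_normal((32, 32))
                 for b in range(16)], axis=-1)

# Decorrelate the spectral dimension with a KLT (PCA over the band axis);
# each transformed "band" can then be compressed independently.
flat = cube.reshape(-1, 16)               # pixels x bands
flat = flat - flat.mean(axis=0)
cov = flat.T @ flat / flat.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
decorrelated = flat @ eigvecs

# Nearly all spectral energy collapses into the top component, so the
# remaining transformed bands compress to almost nothing.
energy = (decorrelated ** 2).sum(axis=0)
print("fraction of energy in top component:", energy[-1] / energy.sum())
```

This energy compaction is exactly the redundancy in the spectral dimension that the abstract identifies as making hyperspectral data such a good compression candidate.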

  15. Microseismic source imaging in a compressed domain

    NASA Astrophysics Data System (ADS)

    Vera Rodriguez, Ismael; Sacchi, Mauricio D.

    2014-08-01

Microseismic monitoring is an essential tool for the characterization of hydraulic fractures. Fast estimation of the parameters that define a microseismic event is relevant to understand and control fracture development. The amount of data contained in the microseismic records, however, poses a challenge for fast continuous detection and evaluation of the microseismic source parameters. Work inspired by the emerging field of Compressive Sensing has shown that it is possible to evaluate source parameters in a compressed domain, thereby reducing processing time. This technique performs well in scenarios where the amplitudes of the signal are above the noise level, as is often the case in microseismic monitoring using downhole tools. This paper extends the idea of compressed domain processing to scenarios of microseismic monitoring using surface arrays, where the signal amplitudes are commonly at the same level as, or below, the noise amplitudes. To achieve this, we resort to the use of an imaging operator, which has previously been found to produce better results in detection and location of microseismic events from surface arrays. The operator in our method is formed by full-waveform elastodynamic Green's functions that are band-limited by a source time function and represented in the frequency domain. Where full-waveform Green's functions are not available, ray tracing can also be used to compute the required Green's functions. Additionally, we introduce the concept of the compressed inverse, which derives directly from the compression of the migration operator using a random matrix. The described methodology reduces processing time at a cost of introducing distortions into the results. However, the amount of distortion can be managed by controlling the level of compression applied to the operator. Numerical experiments using synthetic and real data demonstrate the reductions in processing time that can be achieved and exemplify the process of selecting the
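The core idea of compressing an imaging operator with a random matrix can be sketched as follows; a random Gaussian matrix stands in for the band-limited Green's-function operator, the source location and all sizes are illustrative, and real operators would come from elastodynamic modelling or ray tracing:

```python
import numpy as np

rng = np.random.default_rng(1)

n_receivers, n_locations, k = 200, 50, 40

# Stand-in for the migration operator: column j holds the waveform a
# source at candidate location j would produce at all receivers.
G = rng.standard_normal((n_receivers, n_locations))

# "Compressed inverse" idea: compress the operator once with a random
# matrix R, then process all incoming data in the k-dimensional domain.
R = rng.standard_normal((k, n_receivers)) / np.sqrt(k)
G_c = R @ G

# Noisy data from a (hypothetical) source at location 17.
true_loc = 17
d = G[:, true_loc] + 0.1 * rng.standard_normal(n_receivers)

# Imaging = correlating the compressed data with each compressed column.
image = np.abs(G_c.T @ (R @ d))
print("location with the largest image value:", int(np.argmax(image)))
```

Because random projections approximately preserve inner products, the compressed image still peaks near the true location while each detection now costs a factor of roughly n_receivers/k less, at the price of some distortion, consistent with the trade-off the abstract describes.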

  16. Genetic optical design for a compressive sensing task

    NASA Astrophysics Data System (ADS)

    Horisaki, Ryoichi; Niihara, Takahiro; Tanida, Jun

    2016-10-01

We present a sophisticated optical design method for reducing the number of photodetectors for a specific sensing task. The chosen design parameter is the point spread function, and the selected task is object recognition. The point spread function is optimized iteratively with a genetic algorithm for object recognition based on a neural network. In the experimental demonstration, binary classification of face and non-face datasets was performed with a single measurement using two photodetectors. A spatial light modulator operating in the amplitude modulation mode was provided in the imaging optics and was used to modulate the point spread function. In each generation of the genetic algorithm, the classification accuracy with a pattern displayed on the spatial light modulator was fed back to the next generation to find better patterns. The proposed method increased the accuracy by about 30% compared with a conventional imaging system in which the point spread function was the delta function. This approach is practically useful for reducing the cost, size, and observation time of optical sensors in specific applications, and is robust to imperfections in optical elements.
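A minimal genetic-algorithm loop of the kind described (selection, crossover, mutation, fitness feedback per generation) might look like the sketch below, where a toy bit-matching fitness stands in for the measured classification accuracy and all parameters are illustrative:

```python
import random

random.seed(0)

GENOME_LEN, POP, GENERATIONS = 64, 30, 60

# Toy stand-in for fitness: fraction of pixels matching an "ideal" binary
# SLM mask. In the paper, this role is played by measured accuracy.
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(mask):
    return sum(m == t for m, t in zip(mask, target)) / GENOME_LEN

def crossover(a, b):
    # Single-point crossover of two parent masks.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.02):
    # Flip each pixel with small probability.
    return [1 - g if random.random() < rate else g for g in mask]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 3]                      # selection: keep the best third
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```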

  17. Constructive points of powerlocales

    NASA Astrophysics Data System (ADS)

    Vickers, Steven

    1997-09-01

Results of Bunge and Funk and of Johnstone, providing constructively sound descriptions of the global points of the lower and upper powerlocales, are extended here to describe the generalized points and proved in a way that displays in a symmetric fashion two complementary treatments of frames: as suplattices and as preframes. Also described here are the points of the Vietoris powerlocale. In each of two special cases, an exponential 𝕊^D (𝕊 being the Sierpiński locale) is shown to be homeomorphic to a powerlocale: to the lower powerlocale when D is discrete, and to the upper powerlocale when D is compact regular.

  18. Data compression for the Cassini radio and plasma wave instrument

    NASA Technical Reports Server (NTRS)

    Farrell, W. M.; Gurnett, D. A.; Kirchner, D. L.; Kurth, W. S.; Woolliscroft, L. J. C.

    1993-01-01

    The Cassini Radio and Plasma Wave Science experiment will employ data compression to make effective use of the available data telemetry bandwidth. Some compression will be achieved by use of a lossless data compression chip and some by software in a dedicated 80C85 processor. A description of the instrument and data compression system are included in this report. Also, the selection of data compression systems and acceptability of data degradation is addressed.

  19. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You

    2016-10-01

Because it performs encryption and compression in a single simple step, compressed sensing (CS) can be utilized to encrypt and compress an image. Differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by these differences in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels. A simple approximate evaluation method is introduced to test the sparsity uniformity. Owing to its low computational complexity and storage requirements, KCS is adopted in the second stage of encryption to encrypt and compress the scrambled and sparsely transformed image, where a measurement matrix of small size is constructed from the piece-wise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance, and flexibility.
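The storage advantage behind Kronecker CS can be shown directly: measuring an n x n image with kron(A, B) is equivalent to a two-sided product with the two small matrices, so the large (m² x n²) matrix never needs to be formed. A sketch with illustrative sizes (not the paper's chaotic-map construction):

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 16, 8   # image side; compressed side (m < n gives compression)

X = rng.standard_normal((n, n))   # stand-in for a sparse-domain image
A = rng.standard_normal((m, n))   # small measurement matrix (row direction)
B = rng.standard_normal((m, n))   # small measurement matrix (column direction)

# Kronecker CS measurement: only A and B (2*m*n entries) are stored,
# instead of the full kron(A, B) with (m*n)**2 entries.
y_small = A @ X @ B.T             # m x m measurements

# Check against the explicit Kronecker matrix (row-major vectorization):
y_big = np.kron(A, B) @ X.reshape(-1)
print(np.allclose(y_big, y_small.reshape(-1)))
```

The identity used is (A ⊗ B) vec(X) = vec(A X Bᵀ) for row-major vec, which is what makes the measurement cheap in both storage and computation.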

  20. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes.

    PubMed

    Mamaghanian, Hossein; Khaled, Nadia; Atienza, David; Vandergheynst, Pierre

    2011-09-01

Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while, as expected, exhibiting compression performance inferior to its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enable it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for "good" reconstruction quality.
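On the sensor node, CS acquisition reduces to a single matrix-vector product per signal block. The sketch below uses a sparse binary sensing matrix (so encoding needs only additions), in the spirit of low-power CS encoders; the sizes, density, and test signal are illustrative assumptions, and reconstruction (an offline l1 solver) is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)

N, M = 512, 128          # samples per ECG block; compressed measurements

# Sparse binary sensing matrix: a few 1s per column, so y = Phi @ x
# costs only a handful of additions per sample -- this low complexity
# is what drives the energy savings on the node.
Phi = np.zeros((M, N))
for col in range(N):
    rows = rng.choice(M, size=4, replace=False)
    Phi[rows, col] = 1.0

x = np.sin(np.linspace(0, 6 * np.pi, N))   # stand-in for one ECG block
y = Phi @ x                                # the entire on-node "compression"
print("compression ratio:", N / M)
```

Only the M measurements in y are transmitted, cutting radio airtime by the compression ratio, which is where the lifetime extension reported above comes from.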